Three Laws of Robust Software Systems

Murphy’s Law (“If anything can go wrong, it will”) was born at Edwards Air Force Base in 1949 at North Base. It was named after Capt. Edward A. Murphy, an engineer working on Air Force Project MX981, a project designed to see how much sudden deceleration a person can withstand in a crash. One day, after finding that a transducer was wired wrongly, he cursed the technician responsible and said, “If there is any way to do it wrong, he’ll find it.”

For that reason it may be good to put some quality assurance process in place. I could also call this blog post “the three laws of steady software quality”. It’s about some fundamental techniques that can help to achieve superior quality over the long run. This is particularly important if you’re developing a central component that would cause serious damage if it failed in production. OK, here is my (never final and not exhaustive) list of practical quality assurance tips.

Law 1: facilitate change

There is nothing permanent except change. If a system isn’t designed in accordance with this fundamental reality, the probability of failure rises above average. A widely used technique to facilitate change is the development of a sufficient set of unit tests. Unit testing makes it possible to uncover regressions in existing functionality after changes have been made to a system. It also encourages you to really think about the desired functionality and required design of the component under development.
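As a minimal sketch of this idea (the apply_discount function and its behavior are invented for illustration, not taken from any real component), a unit test written with Python’s built-in unittest module could look like this:

```python
import unittest


def apply_discount(price, percent):
    """Hypothetical component under test: reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)


class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)


if __name__ == "__main__":
    unittest.main()
```

If a later change accidentally breaks the discount calculation, these tests fail immediately and the regression surfaces long before the release reaches production.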

Law 2: don’t rush through the functional testing phase

In economics, the marginal utility of a good is the gain (or loss) from an increase (or decrease) in the consumption of that good. The law of diminishing marginal utility says that the marginal utility of each (homogeneous) unit decreases as the supply of units increases (and vice versa). The first functional test cases usually walk through the main scenarios, covering the main paths of the software under consideration. None of the code they exercise has been executed before, so these test cases have a very high marginal utility. Subsequent test cases may walk through the same code ranges, except for specific side paths at specific validation conditions, for instance. Such a test case may cover only three or four additional lines of code in your application. As a result, it has a smaller marginal utility than the first test cases.

My law about functional testing suggests: as long as the execution of the next test case yields a significant utility, the more time you invest into testing, the better the outcome! So don’t rush through a functional testing phase and miss out on useful test cases (this assumes the special case in which usefulness can be quantified). Try to find the useful test cases that promise a significant gain in perceptible quality. On the other hand, if you’re executing test cases with a negative marginal utility, you’re actually investing more effort than you gain in terms of perceptible quality. There is this special (but not uncommon) situation where the client does not run functional tests on a systematic basis. This law then suggests: the longer the application stays in the test environment, the better the outcome.
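As a rough sketch of how this could be made tangible (all test names and covered-line sets below are invented for illustration), candidate test cases can be ordered greedily by how many not-yet-covered lines they would add, which makes the diminishing marginal utility visible:

```python
# Hypothetical sketch: greedy test-case prioritization by marginal coverage gain.
test_coverage = {
    "main_scenario":        {1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
    "alternate_payment":    {1, 2, 3, 4, 11, 12},
    "invalid_input_branch": {1, 2, 13},
    "rare_admin_sidepath":  {1, 14},
}

covered = set()
while test_coverage:
    # Pick the test with the highest marginal utility (most not-yet-covered lines).
    name, lines = max(test_coverage.items(), key=lambda kv: len(kv[1] - covered))
    gain = len(lines - covered)
    if gain == 0:
        break  # remaining tests add no new coverage: their marginal utility is gone
    print(f"{name}: +{gain} newly covered lines")
    covered |= lines
    del test_coverage[name]
```

The first test contributes ten new lines, the following ones only one or two each, which is exactly the diminishing-utility curve described above.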

Law 3: run (non-functional) benchmark tests

Another piece of good, permanent software quality is a regular load test. To make results usable, load tests need a defined, steady environment and a baseline of measured values (a benchmark). These values are at least: CPU usage, response time, and memory footprint. Load tests of a new release can then be compared to those of older releases. That way we can also bypass the often stated requirement that the load test environment needs to have the same capacity parameters as the production environment. In many cases it is possible to see the really big issues with a relatively small set of parallel users (e.g. 50 users).
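A minimal sketch of such a release-over-release comparison (the metric names, baseline figures, and tolerance are all assumptions for illustration, not measurements from a real system) could look like this:

```python
# Hypothetical benchmark comparison: flag the release if a metric regresses
# by more than a tolerance relative to the previous release's baseline.
TOLERANCE = 0.10  # allow up to 10% regression per metric

baseline = {   # measured with the previous release on the same steady environment
    "avg_response_time_ms": 180.0,
    "cpu_utilization_pct": 55.0,
    "memory_footprint_mb": 720.0,
}

current = {    # measured with the new release, same environment, same 50-user load
    "avg_response_time_ms": 205.0,
    "cpu_utilization_pct": 57.0,
    "memory_footprint_mb": 710.0,
}

regressions = []
for metric, old_value in baseline.items():
    new_value = current[metric]
    change = (new_value - old_value) / old_value
    print(f"{metric}: {old_value} -> {new_value} ({change:+.1%})")
    if change > TOLERANCE:
        regressions.append(metric)

if regressions:
    raise SystemExit(f"Benchmark regression in: {', '.join(regressions)}")
print("No significant regression against the baseline.")
```

Because both runs use the same environment and the same load profile, the relative change between releases is meaningful even if the test hardware is much smaller than production.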

It makes limited sense to do load testing if single-user profiling results are already bad. Therefore it’s a good idea to perform repeatable profiling test cases with every release. This way profiling results can be compared to each other (again: the benchmark idea). We do CPU and elapsed-time profiling as well as memory profiling. Profiling is an activity that runs in parallel to actual development. It makes sense to focus on the main scenarios used regularly in production.
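As an illustration (main_scenario is only a stand-in for one of your real, regularly used production scenarios), a repeatable CPU profiling run per release can be as simple as Python’s built-in cProfile:

```python
import cProfile
import io
import pstats


def main_scenario():
    """Stand-in for a real, regularly used production scenario."""
    data = [i * i for i in range(100_000)]
    return sum(data)


profiler = cProfile.Profile()
profiler.enable()
main_scenario()
profiler.disable()

# Keep this output per release (e.g. one text file per version) so the numbers
# of consecutive releases can be compared against each other.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

Running the same scenario with every release keeps the numbers comparable, which is the whole point of the benchmark idea.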
