On the importance of software testing

As the famous programmer Jean-Paul Sartre once put it, hell is other people’s code. This is what echoes through your head when you’re jolted awake at 2AM by PagerDuty, blaring about a Sev0 production outage. You trawl through the changelog to find the offending commit: a missing null check that results in an exception. You start rolling back the bad deploy, but as you sit there, illuminated by the glow of your laptop screen, you curse to yourself: how did a simple error like this make it all the way to production?

We’ve all been on the receiving end of escalations like this, and perhaps just as often, we’ve been the author of the offending change that caused the outage. During my time working on Hadoop, I’ve both written and fixed bugs like:

  • A new file format deserializer that would produce an empty result when reading a file written by the old serializer.
  • A rate limiter that throttled over 1000x more aggressively than intended.
  • A function that calculated how much data to flush to disk, which in almost every situation underestimated the amount.

These are obvious bugs, barely more sophisticated than a typical null pointer exception, and they should have been caught by even the most basic degree of testing. Fortunately, most of them were caught during our test cycle, but they could easily have become Sev0 issues otherwise.
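To make that concrete, here’s a minimal sketch of the kind of test that catches this class of bug. The RateLimiter below is a hypothetical toy, not actual Hadoop code, with a deliberately planted seconds-versus-milliseconds mix-up of the sort that produces a limiter 1000x too aggressive:

    import unittest

    class RateLimiter:
        """Toy limiter: allow roughly permits_per_second operations per second."""

        def __init__(self, permits_per_second):
            # Bug: the intended interval is 1.0 / permits_per_second seconds;
            # mixing up milliseconds and seconds makes it 1000x longer.
            self.min_interval = 1000.0 / permits_per_second
            self.last_acquire = None

        def try_acquire(self, now):
            """Return True if an operation is allowed at time `now` (in seconds)."""
            if self.last_acquire is None or now - self.last_acquire >= self.min_interval:
                self.last_acquire = now
                return True
            return False

    class RateLimiterTest(unittest.TestCase):
        def test_allows_configured_rate(self):
            limiter = RateLimiter(permits_per_second=1)
            # At 1 permit per second, acquisitions 1 second apart should all succeed.
            allowed = sum(limiter.try_acquire(now=t) for t in range(10))
            # Fails against the buggy limiter, which allows only 1 of the 10.
            self.assertEqual(allowed, 10)

    if __name__ == "__main__":
        unittest.main()

Nothing fancy is needed: a short test against a hand-computed expectation is enough to surface the bug long before it reaches production.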

The case for testing is clear, but I’ve seen bug authors who never learn this lesson and (implicitly) refuse to write tests. Yes, there are times when skipping or deferring testing is acceptable. Yes, there are many nuanced arguments about the downsides of writing too many unit tests, the problems with mocking, and the uselessness of code coverage as a metric. But what really gets my goat is when a bug author’s simple apathy toward testing perpetuates late-night pages, busted SLAs, and burned-out on-call engineers.

In this post, I present two case studies that illustrate our responsibility as software developers to deliver high-quality, production-ready artifacts for the consumers of our systems. In both of these studies, a catastrophic failure in a critical software system can be directly attributed to a lack of testing and poor quality assurance processes.

Therac-25

The Therac-25 was a medical radiation device used to treat cancer patients. It operated in two different treatment modes:

  • An electron mode which used an electron beam (beta radiation) to treat surface-level cancers.
  • An X-ray mode which turned that same electron beam into X-rays by increasing the current and pointing it at an X-ray target. This could be used to treat deeper tumors.

The Therac-25 was the latest in a series of radiotherapy machines. Previous models had hardware interlocks to prevent the most dangerous failure mode: operating the beam in high-current X-ray mode without the X-ray target in place.

However, the Therac-25 was the first to be entirely computer controlled. The manufacturer decided to rely on the control software alone to ensure that this situation could not occur, and removed the hardware interlocks.

This was a fatal mistake. Due to a race condition, it was possible for the operator to accidentally configure the machine in X-ray mode without the X-ray target in place, delivering roughly 100x the intended dose of radiation. Patients suffered horrible burns and radiation sickness, and three ultimately died of their injuries.
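The actual Therac-25 software was PDP-11 assembly, and none of it is reproduced here. But the class of bug is easy to sketch: a non-atomic update that reads shared state more than once while an operator edit lands in between. The Python below is a deliberately simplified, entirely hypothetical illustration; the names and timings are invented:

    import threading
    import time

    class TreatmentConsole:
        def __init__(self):
            self.prescribed_mode = "electron"   # shared state, edited by the operator
            self.target_in_place = False
            self.beam_current = "low"

        def setup_beam(self):
            # Two slow hardware steps, each re-reading the shared prescription.
            # An operator edit that lands between them leaves the machine in an
            # inconsistent, and dangerous, configuration.
            self.target_in_place = (self.prescribed_mode == "xray")
            time.sleep(0.05)                    # simulate slow turntable/magnet setup
            self.beam_current = "high" if self.prescribed_mode == "xray" else "low"

    console = TreatmentConsole()
    setup = threading.Thread(target=console.setup_beam)
    setup.start()
    time.sleep(0.01)                            # operator edits while setup is running
    console.prescribed_mode = "xray"
    setup.join()

    # High-current beam with no target in place: the exact condition the
    # hardware interlocks used to make impossible.
    print(console.beam_current, console.target_in_place)   # -> high False

The race only fires when an edit happens to land inside the setup window, which is exactly why this kind of bug survives casual manual testing and demands deliberate, systematic test coverage.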

AECL, the manufacturer of the Therac-25, initially did not believe the complaints and delayed investigating the issue. Even after AECL admitted the problem was real, the bug had to be independently reproduced by a hospital technician before the company could develop a software patch. That patch should have been the end of it, but it turned out that the Therac-25 had yet another bug that manifested in the same fatal error. Another patient was killed before the machine was finally recalled.

The root cause was pinned directly on poor software engineering practices. AECL had no formal software specification, test plan, or risk analysis for the Therac-25. Most of the coding was done by a single developer, who simply carried forward code from the earlier Therac model that still had hardware interlocks. Furthermore, there was no independent or end-to-end testing at all; most testing happened internally on a hardware simulator.

There are plenty of resources for reading more about the Therac-25. Nancy Leveson’s original report on the Therac-25 is great, as is her retrospective written 30 years later.

Mars Climate Orbiter

The spaceflight business is a risky one. These projects are huge engineering efforts: hundreds of millions or billions of dollars invested over multiple years, spread across many agencies, contractors, and subcontractors. Even after all that, there’s still a surprisingly high chance that the rocket carrying your payload blows up on the launchpad.

NASA awarded the $125 million Mars Climate Orbiter contract to Lockheed Martin. Four years later, after 286 days in space, the Orbiter reached Mars and began its orbital insertion maneuvers. However, the spacecraft entered Mars’ atmosphere much lower than expected and was destroyed.

The primary cause of failure was eventually traced to a software component that emitted its calculations in Imperial units (pound-force seconds) while the caller expected SI units (newton-seconds), a 4.45x difference. It’s tempting to pin the loss on this seemingly simple bug, but NASA ultimately placed the blame on multiple concurrent failures within its own testing and systems engineering processes.
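None of the actual ground-software or trajectory code appears here; as a hypothetical sketch, here is the kind of interface contract test that would have flagged the mismatch. The small_forces_impulse function is an invented stand-in for the component that reported impulse in pound-force seconds:

    import unittest

    LBF_TO_NEWTON = 4.448222  # 1 pound-force is about 4.448222 newtons

    def small_forces_impulse(thrust_lbf, burn_seconds):
        # Invented stand-in for the component that reported thruster impulse.
        # Bug: it returns pound-force seconds rather than the newton-seconds
        # the trajectory software expects.
        return thrust_lbf * burn_seconds

    class InterfaceContractTest(unittest.TestCase):
        def test_impulse_is_reported_in_newton_seconds(self):
            reported = small_forces_impulse(thrust_lbf=1.0, burn_seconds=10.0)
            expected = 1.0 * LBF_TO_NEWTON * 10.0   # 44.48 N·s, computed by hand
            # Fails against the buggy component, which reports 10.0 instead.
            self.assertAlmostEqual(reported, expected, places=2)

    if __name__ == "__main__":
        unittest.main()

A single hand-checked value at the interface boundary is enough to expose a 4.45x discrepancy, which is precisely the sort of end-to-end check the investigation board said was skipped.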

Some choice quotes from the IEEE Spectrum article on this topic, which is highly recommended:

Thomas Gavin, deputy director for space and earth science at NASA’s Jet Propulsion Laboratory, added: “A single error should not bring down a $125 million mission.”

Because of the rush to get the small forces model operational, the testing program had been abbreviated, Stephenson admitted. “Had we done end-to-end testing,” he stated at the press conference, “we believe this error would have been caught.” But the rushed and inadequate preparations left no time to do it right.

Other complaints about JPL go more directly to its existing style. One of Spectrum’s chief sources for this story blamed that style on “JPL’s process of ‘cowboy’ programming, and their insistence on using 30-year-old trajectory code that can neither be run, seen, or verified by anyone or anything external to JPL.” He went on: “Sure, someone at Lockheed made a small error. If JPL did real software configuration and control, the error never would have gotten by the door.” Other sources commented that this problem was particularly severe within the JPL navigation team, rather than being a JPL-wide complaint.

So, should I test my software?

The lesson here is not that we need to apply the same software development processes as NASA or medical equipment manufacturers. Waterfall-style software development went out of style for a good reason, and it’s probably not that big a deal if your REST microservice goes down occasionally.

What is notable is that both of these failures were directly attributed to a lack of testing. Testing is both necessary and important when working on a large software project. Without good tests and QA processes in place, it’s nigh impossible to reason about the correctness of the system as a whole. Forgoing testing produces fragile products where even the simplest of bugs can result in catastrophic failure.

In a future post, I’ll dive more into the mechanics of software testing: the different types of tests, and how and when to apply them.


Special thanks to my wonderful editors Tiffany Chen, John Sherwood, and Michael Tao, who gave feedback on earlier drafts of this post.
