What's more important: Implementability or Testability? | Cypress Semiconductor
Executive summary: Testability is more important than implementability.
In a previous post (written in response to my back and forth with Jack Ganssle on requirements) I listed my rules on what constitutes "good" requirements, which, to review, are:
1) non-ambiguous identification (which I assert is gained by using "shall" only in a statement that is a requirement), and
2) the statement is implementable and testable.
Of course, different people are usually involved in testing and implementing, so #2 should be split into 2a (implementable) and 2b (testable). NOTE these are BOTH needed: a requirement "stinks" if it is either not implementable or not testable. BUT is one more important than the other? ABSOLUTELY.
A friend (thanks Dennis!) pointed out a classic case of product-induced accidents that highlighted the dangers of software control of safety-critical systems - the Therac-25 radiation therapy machine. The following description has been excerpted, with only minor edits, from Wikipedia (en.wikipedia.org/wiki/Therac-25).
"The Therac-25 was a radiation therapy machine produced by Atomic Energy of Canada Limited (AECL). It was involved with at least six accidents between 1985 and 1987, in which patients were given massive overdoses of radiation, approximately 100 times the intended dose.
"The machine offered two modes of radiation therapy:
1) Direct electron-beam therapy, which delivered low doses of high-energy (5 MeV to 25 MeV) electrons over short periods of time, and
2) Megavolt X-ray therapy, which delivered X-rays produced by colliding high-energy (25 MeV) electrons into a "target".
"When operating in direct electron-beam therapy mode, a low-powered electron beam was emitted directly from the machine, then spread to safe concentration using scanning magnets. When operating in megavolt X-ray mode, the machine was designed to rotate four components into the path of the electron beam: a target, which converted the electron beam into X-rays; a flattening filter, which spread the beam out over a larger area; a set of movable blocks (also called a collimator), which shaped the X-ray beam; and an X-ray ion chamber, which measured the strength of the beam.
"The accidents occurred when the high-power electron beam was activated instead of the intended low power beam, and without the beam spreader plate rotated into place. The machine's software did not detect that this had occurred, and therefore did not prevent the patient from receiving a potentially lethal dose of radiation. The high-powered electron beam struck the patients with approximately 100 times the intended dose of radiation, causing a feeling described by a patient as "an intense electric shock". It caused him to scream and run out of the treatment room. Several days later, radiation burns appeared and the patients showed the symptoms of radiation poisoning. In three cases, the injured patients later died from radiation poisoning." (end of wikipedia excerpt)
The conclusions of a safety review commission showed that although several coding errors were found, the root cause of the failures was the design, and more specifically that the design made it "relatively impossible to test in a clean automated way". (from the same Wikipedia article)
To ensure a quality product, you must test it, and the extent to which it can be tested directly impacts its quality. The goal is to find "all" defects (defined as 99%, or 99.9%, or 99.99%, etc., as required by the criticality of the system) before shipping to customers - for safety- or mission-critical systems the implications of "defect escapes" can be catastrophic.
So back to requirements, how does this impact our requirements writing? I have a recent project's experience fresh in my mind and have formed my own opinion. I believe that the testability, test planning and definition of the test system are MORE important than the implementation. Of course, the implementation is important, and a poor implementation will lead to a poor product, but you need to have high confidence that the testing can and will find out whether the implementation is good or bad.
So extensively review the requirements from a testing point of view (best done by the person or team responsible for testing) and go as far as defining or designing the test system required. The key benefit of having the actual test system up, running, and available during project implementation is that both the design and test teams can take advantage of it. And when the test team begins to find defects, the design team can run the same tests and do their detailed debugging in the same environment.
Beware of "hidden" untestable requirements. Because for those the only test force you will have is a large (and possibly fleeting) customer base.