In my experience over the years, I’ve heard one phrase used consistently by developers across different organisations to try to mitigate the risk associated with one or more deliverables due for the test team. These deliverables tend to address specific issues, such as a critical bug that must be fixed for testing to continue, or a particular business need requested at the 11th hour.
Naturally, there’s a time-pressure aspect to all this, so unit and integration testing of the deliverables may have been fast-tracked or even omitted altogether. However, the phrase I hear again and again is:
“Don’t worry, Del. It’s only a one line change!”
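The danger in that phrase is easy to sketch. Here is a hypothetical one line change (the function and scenario are my own invention, not taken from any real codebase) whose diff is as small as a diff can be, yet it silently changes behaviour at a boundary:

```python
# Hypothetical illustration: a "one line change" that breaks behaviour.
# The only edit is the comparison operator on the marked line.

def is_eligible_for_refund(days_since_purchase):
    # Original line: return days_since_purchase <= 30
    return days_since_purchase < 30  # the "one line change"

# Day 30 was refundable before the change; now it is not.
print(is_eligible_for_refund(30))  # False, a regression no diff size would reveal
print(is_eligible_for_refund(29))  # True, so casual spot-checks still pass
```

The size of the change tells you nothing about the size of its consequences; only testing the behaviour around that boundary would.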
At first, I was astonished that so many developers resort to the same simplistic method of assessing the risk of their code changes, but I’ve since seen the same principle employed on a wider scale to assess the risk of a complete subsystem at end-of-project checkpoints. It’s one of many meaningless metrics presented to stakeholders, labelled ‘code churn’. The bizarre thing about code churn is not just the metric itself (or indeed why people place so much importance on it), but the binary nature of the stakeholders’ reactions (technical and non-technical alike). The reactions I’ve witnessed are as follows:
Reaction 1: A subsystem with a code base of 5,000 lines has a churn of 500 lines (10%).
Stakeholders nod sagely, secure in the knowledge that any risk is minimal.
The figure is low.
This is good.
Reaction 2: A subsystem with a code base of 5,000 lines has a churn of 2,500 lines or more (>=50%).
Stakeholders collectively have a sharp intake of breath. They look at each other in distress.
The figure is high.
This is bad.
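For what it’s worth, the figure driving both reactions is nothing more than a simple ratio, which this minimal sketch (the function name is my own) makes explicit. The number carries no information about what the changed lines actually do:

```python
# 'Code churn' as typically presented to stakeholders: lines changed
# as a percentage of the subsystem's code base. Nothing more.

def churn_percent(lines_changed, total_lines):
    return 100.0 * lines_changed / total_lines

print(churn_percent(500, 5000))   # 10.0 -> "low, this is good"
print(churn_percent(2500, 5000))  # 50.0 -> "high, this is bad"
```

A 10% churn could rewrite the error-handling core; a 50% churn could be a mechanical rename. The ratio cannot distinguish the two.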
Even more absurdly, the tension from the second reaction is quashed by some equally flawed statistical mitigation from the subsystem owner. Rarely have I heard stakeholders say “well, let’s see what the testing of that subsystem revealed” and put their reaction on hold. Testing is, after all, about “gathering information with the intention of informing a decision” (Jerry Weinberg).
Anyway, to get back to the developers and their one line change: I counter their statement, in an attempt to adjust their thinking, by suggesting the following scenarios:
Would a passenger be willing to fly in an aircraft if the maintenance engineer told them
“Don’t worry, the aircraft is only missing one bolt!”?
Would a heart bypass patient be any more comforted if the surgeon told them
“Don’t worry, I’m only going to make one incision!”?
Trying to equate risk with a single numerical value of any kind is obviously foolhardy, yet the practice remains widespread throughout our industry. We must endeavour to educate developers and stakeholders alike with meaningful information gathered from testing, so they can start to see the real risks and issues and break free of these numerical shackles.