In my experience over the years, I’ve heard one phrase used consistently by developers across different organisations to try to mitigate the risk associated with one or more deliverables due for the test team. These deliverables tend to address specific issues, such as a critical bug that must be fixed for testing to continue, or a particular business need requested at the 11th hour.
Naturally, there’s a time pressure aspect to all this, so unit and integration testing of the deliverables may have been fast-tracked or even omitted altogether. However, the phrase I hear again and again is:
“Don’t worry, Del. It’s only a one line change!”
At first, I was astonished that so many developers resort to the same simplistic method of assessing the risk of their code changes, but I’ve also seen the same principle employed on a wider scale to assess the risk of a complete subsystem at end-of-project checkpoints. It’s one of many meaningless metrics presented to stakeholders and labelled ‘code churn’. The bizarre thing about code churn is not just the metric itself (or indeed why people place so much importance on it), but the binary nature of the stakeholders’ reactions (technical and non-technical alike). The reactions I’ve witnessed are as follows:
Reaction 1: A subsystem with a code base of 5,000 lines has a churn of 500 lines (10%).
Stakeholders nod sagely, secure in the knowledge that any risk is minimal.
The figure is low.
This is good.
Reaction 2: A subsystem with a code base of 5,000 lines has a churn of 2,500 lines or more (>=50%).
Stakeholders collectively draw a sharp intake of breath and look at each other in distress.
The figure is high.
This is bad.
Even more absurdly, the tension from the second reaction is quashed by some equally flawed statistical mitigation from the subsystem owner. Rarely have I heard stakeholders say “well, let’s see what the testing of that subsystem revealed” and put their reaction on hold. Testing is, after all, about “gathering information with the intention of informing a decision” (Jerry Weinberg).
Anyway, back to the developers and their one-line change. I counter their statement in an attempt to adjust their thinking by suggesting the following scenarios:
Would a passenger be willing to fly in an aircraft where the maintenance engineer had told them
“Don’t worry, the aircraft is only missing one bolt!”?
Would a heart bypass patient be comforted any more when the surgeon had told them
“Don’t worry, I’m only going to make one incision!”?
Trying to equate risk with a numerical value of any kind is obviously foolhardy, yet the practice remains widespread throughout our industry. We must endeavour to educate developers and stakeholders alike with meaningful information gathered from testing so they can start to see the real risks and issues and break free of these numerical shackles.
Good post, Del.
I especially liked that you correlate the ‘just one line’ individual excuse with the collective view of risk based on the amount of code.
This collective error is not only more dangerous (in my view) but also harder to deal with, because inside a group, clueless ideas may receive backup and strength from one another.
On a one-to-one level with a programmer, they often notice quite easily that the argument doesn’t hold.
Usually I ask: “Would a one-line ‘break;’ statement be harmless?” And what if that statement is affected by a break statement somewhere else? “Would a one-line ‘if’ condition be harmless?” What if it is affected by (or affects) another if statement? “Aren’t one-line conditional statements more dangerous than multi-line ones?”…
It is easy to switch the conversation from the quantity of lines changed to the essence of the change. Some changes are safer than others, but that has nothing to do with size.
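Those one-line `break` examples are easy to make concrete. The sketch below is a hypothetical illustration (the function names and data are mine, not from the discussion): the only textual difference between the two loops is a single `break` line, yet they process different data and return different results.

```python
def total_until_limit(values, limit):
    """Sum values, stopping once the running total exceeds the limit."""
    total = 0
    for v in values:
        total += v
        if total > limit:
            break  # the "one line" in question - it cuts the loop short
    return total


def total_all(values):
    """The same loop without the break - every value is summed."""
    total = 0
    for v in values:
        total += v
    return total


readings = [10, 20, 30, 40]
print(total_until_limit(readings, 35))  # 60 - stopped after three values
print(total_all(readings))              # 100 - processed everything
```

A diff tool would report this as a trivially small change, which is exactly the point: the size of the edit says nothing about how much of the program’s behaviour it alters.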
Michael Bolton teaches to be cautious of the words ‘only’, ‘just’ (like in “only one line”) etc. These words may be hiding a lot.
Thanks for the article!
Thanks for your comments.
You’re spot on – it’s definitely easier to talk this over with a few developers, who generally come round and acknowledge that size isn’t necessarily proportional to risk. Talking the stakeholders round at the project checkpoints is somewhat more challenging, especially when the company has mandated that such metrics speak volumes (ha ha, no pun intended!) about the product.
Nice to know I have a readership of one. 🙂
“Speaks volumes”… great one!
I can’t speak for other developers, but when I say it’s a one-line change I’m not really counting the lines – it’s an assessment of the impact I think it’ll have. Adding some extra logging is a “one-line change” even if I’ve added 200 lines of logging. Lines of code aren’t a valid risk assessment; change of program flow is.
The real enemy of code quality is short-termist management who are desperate to ship something before it’s ready…
I agree with Al that “1-line” (or my preference, “one-liner”) is more a statement about the risk of the change.
Clearly there are dangerous one-liner changes out there in the realm of possibilities.
I have also used “one-liner” talk when trying to encourage peer review, as if to say “this change is really easy to read” – which, again, does not imply low risk.
Anyhow I like the article, and this is just my two cents.