There seems to be a particular word that crops up again and again in blogs, papers and presentations about software testing and beyond: bias.
Honestly, it’s everywhere. Some examples:
Observational Bias (Darren McMillan, Requirements Analysis & Testing Traps)
Darren quite rightly points out the dangers of having visual references (wireframes) at a very early stage in the project lifecycle that could take your attention away from something more fundamental within the text of the requirements themselves.
Reporting Bias (Pete Houghton, Conspicuous in their Absence)
In this rather superbly written blog post, Pete highlights a way valuable information can be skewed to make a problem seem less severe (e.g. <1% of our customer base use *that* browser, so they can’t do XYZ). Can anyone claim they’ve never had someone try to change their opinion with an argument like this?
Survivorship Bias (Pete Houghton, Conspicuous in their Absence)
In the same post, Pete extends the examples of survivorship bias to advertising slogans you’ve probably seen dozens of times in a multitude of places.
Confirmation Bias (Michael Bolton, I wouldn’t have seen it if I hadn’t believed it: Confirmation Bias in Testing)
Perhaps this particular bias is the big daddy of all biases, since it has so many variations – see slide number six. If you only looked at slide six, you might wonder how you could ever engage in any testing activity without suffering from one form of bias or another. Thankfully, Michael provides some really useful tips for escaping confirmation bias.
Anchoring Bias (Michael Kelly, Anchoring Bias)
In this rather odd piece, Michael talks about how he was struggling to come up with ideas about how to test a second iteration of software coming his way, but while walking up the stairs to a meeting he decided to simply sketch out a schematic of sorts and talk through his ideas (not necessarily solutions). There’s a heuristic to be found in there somewhere – “Talk it through with your mates” heuristic? I can appreciate this particular problem though and admit to being influenced by a little bit of anchoring bias in some projects many years ago.
Congruence Bias (Pete Houghton, The Arrogance of Regression Testing)
“We stop looking for problems that we don’t think are caused by the new changes.” claims Pete. How true. This has a particularly strong resonance as I believe it’s so easy for something like this to adjust our mindset if we’re not aware of it. Good post – go read.
Emotional Bias (Wikipedia)
I couldn’t find any specific examples of emotional bias for software testing, but it did prompt the question “How would your approach to testing change if you were asked to test something that you had a strong (adverse) emotional reaction to?” (Ignoring for now the obvious option of saying “I would find another job”). For example: a weapons targeting system, an adult entertainment website, etc. Perhaps this is more of an ethical bias?
And there are many others, such as Automation Bias, Assimilation Bias (nothing to do with the Borg, Star Trek fans), etc… So as you can see, there are quite a few biases out there, and you may wonder how testers even get out of the starting blocks with so many possible ways for their judgement and work to be skewed.
That particular thought reminds me of a problem a well-known UK darts player had back in the late eighties. Eric Bristow was at the top of his game, had won the world title five times and was pretty much considered the Muhammad Ali of the darts world. However, he started having problems releasing the darts and was diagnosed as suffering from a condition known as Dartitis. The condition is believed to be psychologically rooted, and this was the first time it had come to public attention, because Bristow was so well known.
I do wonder now whether it was some sort of bias that caused the condition. Taking into account the pressure of managing stress and expectations when playing at world events, could he have been over-scrutinising one or more parts of his technique, leading to an unbalanced approach without him realising it and resulting in the physical manifestation of the condition?
In comparison, I wonder if there’s ever been a reported case of something called Testitis #1 – the psychologically rooted condition that stops testers dead in their tracks, unable to test for fear of any sort of bias impinging on their work. John Stevenson even suggests that bias can be infectious. So not only could testers be afraid to actually do any testing, they could also be unwilling to work in a team for fear of bias contamination? Haz-mat suits at the ready!
I’ve raised a lot of problems here and not really provided any remedies (though you can find excellent suggestions in the links above), but at some point in the future I’ll post another blog with examples of what I’ve done personally to combat bias.
#1 – Don’t Google Testitis, by the way – it doesn’t exist, and you’ll probably get some undesirable results 🙂
Bias comes up a lot because it’s crucial to judgement. And because it’s fascinating.
Try this: The Irrational Tester
(disclaimer – it’s mine).
The paper describes a bunch of biases, testing stories that relate to those biases, and de-biasing strategies. There’s lots of third-hand pop-science around, so you’ll also find links to the papers that directly describe the experiments that isolate and illustrate particular biases.
Video, too: http://www.stickyminds.com/Media/Video/Detail.aspx?WebPage=166
I like the way your mind works! Maybe, though, that’s a biased viewpoint? 🙂
Personally, I think it’s great that the word comes up so much in testing. Five years ago, the concept of bias in testing was unheard of.
If you haven’t already read it have a read of this: http://www.stats.org.uk/statistical-inference/KlaymanHa1987.pdf