Like many organisations out there, we have witnessed a surge in automation over the last few years with a multitude of tools, frameworks, languages, etc. as we drive towards zero-click deployments for the plethora of squads/tribes within the business. During this time, I’ve slowly but surely witnessed talk about testing and automation/checking become synonymous (primarily from non-testers) as test-personnel within the organisation spend more and more time focused on writing/refining automated checks within their suites.
Some years ago when agile started becoming mainstream, there were proponents who were extremely vocal and forceful about agile practice (I recall Michael Bolton referring to them as Agilistas), almost to the point where they seemed somewhat rabid. I was all for someone getting excited about something positive, but by the same token I firmly believed they had to CALM DOWN!
The reason I mention this is that I see some resemblance in the human behaviour surrounding automation nowadays, particularly from non-testers: similar frenzied (almost fanatical) endorsements of existing automation (irrespective of how good/bad/fast/slow/thoughtful/shallow it is) and calls for more, MoRe, MORE! automation, underpinned by wild gesticulation, loud voices and (in some cases) frothing mouths.
The concern I have, of course, is that amidst the frenzy about automation, the distinction between testing and checking is misunderstood/forgotten/ignored, supplanted by the belief that everything valuable about testing is embedded within the automation artefacts they lavish praise on.
A while back I got the chance to present to our VP, and I wanted to have something up my sleeve that clearly illustrated that testing and checking are not the same thing, but also conveyed the importance of one to the other. I also wanted to show what would happen if you took testing out of the equation.
Dan Ashby’s recent blog rang a few bells for me when he cleverly related activities to information. This was enhanced further by John Stevenson, who offered another unique model after MEWT 4 last year. After borrowing from both, I adapted their ideas to my own needs – producing something which I think nicely illustrates the synergy between testing and checking, while at the same time showing that they fall on different parts of the “information spectrum”:
As you can see, I’ve borrowed from the former US Secretary of Defence, Donald Rumsfeld, too, and while I’m no real fan of his, I do like the fact that his somewhat famous epistemology (“there are known knowns, known unknowns and unknown unknowns”) applies so well to testing. One could say that the art of checking lies firmly in the known knowns domain, while testing lies in the known unknowns and unknown unknowns domains. More importantly, if you take testing out of the equation, here’s what I believe you’re left with:
If you’re not testing, you’re not uncovering any information; and if you’re not uncovering any information, you’re simply confirming nothing more than speculation about what your product may (or may not) do, while an enormous amount of information about your product remains unknown to you (and may stay that way).
Can we really afford to focus on checking without doing any testing whatsoever?
I don’t think so.