I had an amusing experience a few weeks ago. A colleague of mine was explaining how he was automating a regression test set and how it greatly enhanced his team's testing experience. I believed him, because I know he's a professional with solid testing and automation skills.
But I wanted to tease him a bit, as I’ve learned that he sometimes mixes up terminology and definitions.

I asked him: Today, I asked for an extract of a certain table from our database. I visualized it in Excel, added some formulas and conditional markup. Next, I sorted on that markup and data errors were immediately visible.

Was that automated testing or manual testing?

To many people, it’s a confusing question. Hell, it’s even confusing for me.
A better name might be “tool-enabled checking/testing”. But that doesn’t clarify whether my example is testing or checking.

I’ve noticed that we testers treat checking and testing as surrogate terms for automated and manual. That isn’t quite right; it’s too easy. There’s bound to be a gray area, a twilight zone where testing flows over into checking.

One tests a theory. My theory was that the table I selected might contain errors. I devised a formula that would expose those errors. This is where the theoretical testing flows over into actual checking: the moment I start working in the Excel sheet, I’m beginning my practical checks. But checking is still a subset of testing. Tool-enabled or not, checking has to be done to either prove or falsify your theories.
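The same exercise could be sketched outside Excel; here is a minimal Python version using pandas. The table, the columns, and the validity rule are invented for illustration; the point is that the formula encodes a theory about what valid data looks like, and sorting on the result surfaces the suspects.

```python
# A hypothetical sketch of the Excel exercise: flag rows that violate
# an assumed validity rule, then sort the flagged rows to the top.
import pandas as pd

# Hypothetical extract of a database table.
rows = pd.DataFrame({
    "order_id": [101, 102, 103, 104],
    "quantity": [3, -1, 5, 2],               # negative quantity is suspicious
    "unit_price": [9.99, 4.50, 0.0, 19.95],  # zero price is suspicious
})

# The "formula": a check encoding the theory that the table contains errors.
rows["suspect"] = (rows["quantity"] <= 0) | (rows["unit_price"] <= 0)

# The "sort on markup": suspect rows float to the top for inspection.
flagged = rows.sort_values("suspect", ascending=False)
print(flagged)
```

Whether this counts as automated or manual matters less than noticing that the check only exists because a human first formed a theory worth checking.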

There’s a funny ebb-and-flow feeling to testing and checking, as they tend to alternate. While exploring your product you constantly come across new ideas. Exploring an idea in depth, jumping to the next idea, or even exploring ideas within ideas makes the whole concept extremely fluid.

In black and white, in the given example you cross into checking as soon as you start comparing your data to its expected result. But while you are doing it, you’re still testing. There is much verification, validation and checking in testing, but it’s the testing itself that’s interesting and that can open up infinitely more important paths to follow.

Be wary: the explanation above does not take into account false positives and negatives, human epiphanies (“Oh! That looks weird, let’s take a closer look”) or other human errors by the analyst, developer, tester or anyone else who’s had a say. It serves to show that “automated” is not the same as checking. It can be used to explain that automated checks are a subset of all possible checks, and that all possible checks are a subset of Testing.