Last week, on the TestersIO slack the following question was asked:


The person wanted to know whether other testers consider it acceptable to be called out on the quality of their bug reports by a tester more junior than them.

This is, of course, completely dependent on both parties and the way they choose to handle the situation.

  • Are you lecturing the person, or requesting more detail?
  • Do they respond emotionally, or more reasonably?

Another participant in the discussion suggested inviting the developer who’d handle the bugs to explain why the reports were insufficient.

This led me to visualize an idea that had been growing on me since reading Gerald Weinberg’s “Perfect Software, and other illusions about testing”.

An explainability heuristic: the Responsibility Meter

I’m not sure anyone ever told me this explicitly, but everyone seems to agree: testers search, coders fix.

Looking more closely, nothing is ever that easy.

Why wouldn’t both roles be able to do both activities?
Another misconception is that these responsibilities span only two activities, when in reality they imply many more.

The road from “Hey, this looks strange…” to “This is an issue” to “The issue is fixed and nothing else seems to be broken” is often long and complex, and always context-dependent.

The responsibility meter is a tool to support discussions.
If you find yourself dealing with:

  • Over-the-wall development
  • Ignorant co-workers
  • Unhelpful bug reports

This may be a good step towards a solution:

Responsibility meter

  1. The first scale visualizes the road from discovery to identification of the issue. This is where most of the discussion takes place.
  2. The second scale depicts what happens after identification. Activities on this scale that aren’t shown could include debugging, refactoring, adding checks, trial and error, troubleshooting, further testing…

If a tester thinks his job ends at discovering and describing the bug,
while the coder expects the bug to be completely pinpointed, isolated, generalized, maximized,… and documented in enough detail before he starts fixing it,
then nothing ever gets fixed, at least not efficiently.

Most of the time, there is no need to explicitly set the marker.
Awareness of the scale is usually more than enough.

There are situations, however, where you need to have the talk about responsibility: where it starts and where it ends.

It is not unusual for developers to expect more detail than testers are willing or able to give. Miscommunication leads to tension,
and tension leads to many more, and worse, problems.

It might be necessary to reset and adjust the meter a couple of times during the project, or make exceptions for certain special cases.
Note that the scales are not set in stone: activities may switch places or be skipped completely. Use the meter in your own context, to the advantage of the team.

This meter draws on Cem Kaner’s Bug Advocacy heuristic RIMGEA (Bug Advocacy, lecture six) and Gerald Weinberg’s activities of discovery, pinpointing, locating, determining significance, repairing, troubleshooting, and testing to learn (Perfect Software, pp. 33–36).