Or how programmers and testers can work together for a happy and fulfilling life.
Why don’t we just automate all the testing? Is test coverage a useful metric? What does it mean to “shift testing left”? When and where should we be testing? How much is enough testing?
Over the years I have discussed these and similar questions many times, with programmers, testers, and various other folks. These are important topics, and they are often shrouded in confusion, misunderstanding, and tribalism. I have heard from both camps that programmers should / should not be writing tests, are / are not qualified, do / do not even understand testing, and so on.
We usually end up in a better place than where we started, so in this article I want to share some of those discussions so that you can have them too.
Much of the confusion stems from a lack of understanding of the purpose of testing, ironically even among many of the testers I meet, which means we don't even have a shared frame of reference.
To create this frame I want to look at a couple of topics, namely:
From here I will address each of these opening questions and discuss how testers and programmers can collaborate for a happy life. I hope this will cause you to reassess the discipline and the domain of testing, whatever your role, and to engage with it as the first-class work that it is.
It is a long read, so grab a cup of tea and let’s get started.
Whenever we change software — adding a new feature, changing or replacing a feature, making “under-the-hood” changes to improve things — we incur risk. For any change, there is a non-zero likelihood that we cause a Bad Thing to happen.
This is true not only of the code itself but of its build system, its path to deployment, its operating environment, its integration points, and any other direct or indirect dependencies.
There are many types of Bad Things that can happen. Here are a few: