Software Testing Anti-Patterns
Since the dawn of computers, we’ve always had to test software. Over the course of several decades, the discipline of software testing has seen many best practices and patterns. Unfortunately, there are also several anti-patterns present in many companies.
An anti-pattern is a pattern of activities that tries to solve a certain problem but is actually counterproductive: it either doesn’t solve the problem, makes it worse, or creates new problems. In this article, I’ll sum up some common testing anti-patterns.
Only Involving Testers Afterwards
Many companies only involve the testers once the developers decide a feature is done. The requirements go to the developers, who change the code to implement the requested feature. The updated application is then “thrown over the wall” to the testers, who use the requirements to construct test cases. After going through the test cases, the testers often find all sorts of issues, forcing the developers to revisit the new features. This has a detrimental effect on both productivity and morale.
Such an approach to testing is used in many companies, even those that talk about modern practices like Agile and DevOps. However, “throwing things over the wall” without input from the next step goes against the spirit of Agile and DevOps. The idea is to have all disciplines work together towards a common goal.
Testing is about getting feedback, whether it’s automated or not. So of course you have to test after the feature has been developed. But that doesn’t mean you can’t involve your QA team earlier in the process.
Involving testers in defining requirements, identifying use cases, and writing tests catches edge cases early and leads to higher-quality tests.
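For instance, a tester who spots a boundary condition during a requirements discussion can pin it down as an automated check before the feature ships. Here’s a minimal sketch in Python with pytest; the apply_discount function and its rules are hypothetical, just to show the shape of such a test:

```python
import pytest

# A hypothetical pricing rule: testers flagged the discount threshold and
# the negative-price case while the requirements were still being written.
def apply_discount(price: float, quantity: int) -> float:
    """Return the total price, with 10% off for orders of 10 or more."""
    if price < 0 or quantity < 0:
        raise ValueError("price and quantity must be non-negative")
    total = price * quantity
    return total * 0.9 if quantity >= 10 else total

@pytest.mark.parametrize(
    "price, quantity, expected",
    [
        (10.0, 1, 10.0),   # happy path
        (10.0, 10, 90.0),  # discount threshold, an edge case a tester raised
        (10.0, 0, 0.0),    # zero quantity
    ],
)
def test_apply_discount(price, quantity, expected):
    assert apply_discount(price, quantity) == expected

def test_negative_price_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(-1.0, 5)
```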
Not Automating When You Can
Tests that run at the click of a button are a huge time saver, and as such they also save money. Any sufficiently large application can have hundreds or even thousands of automated tests. You can’t achieve efficient software delivery if you’re running all of these manually; it would simply take too much time.
One alternative I’ve seen is to stop testing finished features. But due to the nature of software, existing features that used to work can easily break because of a change to another feature (a regression). That’s why it pays off to keep verifying that what used to work still works now.
The better alternative to manual testing is to automate as many tests as you can. There are many tools to help you do so, from low-level tests of separate pieces of code (unit tests), through tests of the integration of those pieces (integration tests), to full-blown end-to-end tests.
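To make those levels concrete, here’s a minimal sketch in Python with pytest. The slugify function and the in-memory repository are hypothetical stand-ins; the point is that the unit test exercises one piece in isolation, the integration test verifies two pieces working together, and a single pytest command runs both:

```python
from typing import Optional

def slugify(title: str) -> str:
    """Turn an article title into a URL slug."""
    return "-".join(title.lower().split())

class InMemoryArticleRepository:
    """A stand-in for a real persistence layer."""

    def __init__(self) -> None:
        self._articles: dict[str, str] = {}

    def save(self, slug: str, title: str) -> None:
        self._articles[slug] = title

    def find(self, slug: str) -> Optional[str]:
        return self._articles.get(slug)

# Unit test: one piece of code in isolation.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello Wonderful World") == "hello-wonderful-world"

# Integration test: the slug logic and the repository working together.
def test_saved_article_can_be_found_by_its_slug():
    repo = InMemoryArticleRepository()
    slug = slugify("Hello World")
    repo.save(slug, "Hello World")
    assert repo.find(slug) == "Hello World"
```

An end-to-end test would go one step further and drive the deployed application through its real interface, which is why those tests are slower and fewer.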
As a tester, you should encourage the whole team to be involved in manual testing. Once developers feel the pain of repetitive manual checks, they’ll be motivated to write code that is fit for automated tests. Help them write and maintain automated tests, and help them identify test cases.
Expecting to Automate Everything
As a counterargument to my previous point, be wary of trying to automate every aspect of testing. Manual testing can still have its place in a world where everything is increasingly automated.
Some things could be too hard or too much work to automate. Other scenarios may be so rare that they aren’t worth automating, especially if the consequences of an issue are acceptable.
Another thing you can’t expect to automate is exploratory testing. Exploratory testing is where testers use their experience and creativity to test the application. This allows the testers to learn about the application and generate new tests from this process. Indeed, in the words of software engineering professor Cem Kaner, the idea behind exploratory testing is that “test-related learning, test design, test execution, and test result interpretation [are] mutually supportive activities that run in parallel throughout the project.”
Lack of Test Environment Management
Test Environment Management spans a broad range of activities. The idea is to provide and maintain a stable environment that can be used for testing.
Typically, we call such an environment a testing or staging environment. It’s the environment where testers or product owners can test the application and any new features that the developers have delivered.
However, if such an environment isn’t managed well, it can lead to a very inefficient software delivery process. Examples are:
● Confusion over which features have already been deployed to the test environment.
● Missing critical pieces or external integrations, meaning not everything can be tested.
● Hardware that differs significantly from the production environment.
● Incorrect configuration of the test environment.
● A lack of quality data to test with.
Such factors can lead to back-and-forth discussions between testers, management, and developers. Bugs may go unnoticed, or reported bugs may turn out not to be bugs at all. Use cases may be hard to test, and bugs reported in production hard to reproduce.
Without good test environment management, you will be wasting time and losing money.
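One lightweight way to catch several of these problems early is a smoke-test suite that checks the environment itself before any functional testing starts. Below is a minimal sketch in Python with pytest and the third-party requests library; the staging URL and the /health and /version endpoints are hypothetical, so adapt them to whatever your environment exposes:

```python
# Smoke tests that verify the test environment itself before any
# functional tests run. The URLs and endpoints below are hypothetical.
import os

import pytest
import requests

STAGING_URL = os.environ.get("STAGING_URL", "https://staging.example.com")

def test_application_is_reachable():
    response = requests.get(f"{STAGING_URL}/health", timeout=5)
    assert response.status_code == 200

def test_expected_version_is_deployed():
    # Guards against confusion over which features have been deployed.
    expected = os.environ.get("EXPECTED_VERSION")
    if expected is None:
        pytest.skip("EXPECTED_VERSION not set")
    response = requests.get(f"{STAGING_URL}/version", timeout=5)
    assert response.json()["version"] == expected

def test_payment_integration_is_configured():
    # Catches a missing external integration before testers hit it manually.
    response = requests.get(f"{STAGING_URL}/health/payments", timeout=5)
    assert response.status_code == 200
```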
Unsecured Test Data
Most applications need a set of data to test certain scenarios. Not all data is created equal, though. With modern privacy laws, you want to avoid using real user data. Both developers and testers often have to dig into the data of the test environment to see what is causing certain behavior. This means reading what could be personally identifiable information (PII). If this is data from real users, you might be violating those laws.
Moreover, if your software integrates with other systems, the data may flow away from your system to a point where it is out of your control, maybe even to another company. This is not something you want to happen to real people’s data. Security breaches can lead to severe damage to your public image, financial losses, or fines.
So you want either made-up data or obfuscated and secured data. But you also want to make sure that the data is still relevant and valid in the context of your application. One possible solution is to generate the data your tests need as part of the tests themselves.
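As a sketch of that idea in Python, here’s a test that builds its own customer record. The third-party Faker library and the Customer model are assumptions; the point is that every value is plausible but entirely made up:

```python
# Generate realistic but entirely fabricated customer data per test,
# so no real PII ever enters the test environment. Faker is a
# third-party library; the Customer class is a hypothetical model.
from dataclasses import dataclass

from faker import Faker

fake = Faker()

@dataclass
class Customer:
    name: str
    email: str
    address: str

def make_customer(**overrides) -> Customer:
    """Build a customer with plausible defaults, overridable per test."""
    defaults = {
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address(),
    }
    defaults.update(overrides)
    return Customer(**defaults)

def test_welcome_message_uses_customer_name():
    customer = make_customer(name="Ada Lovelace")
    assert "Ada Lovelace" in f"Welcome, {customer.name}!"
```

Because each test constructs exactly the data it needs, there’s no PII to leak and no dependence on whatever happens to be sitting in the test database.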
Not Teaching Developers
The whole team owns the quality of the software. Pair with developers and teach them testing techniques so that they can test features as they finish them.
This is especially important in teams that (aspire to) have a high level of agility. If you want to continuously deploy small features, the team will have to continuously test the application. This includes developers, instead of having them wait for the testers.
In such a case, the role of testers becomes more of a coaching role.
If testers and developers don’t work together closely, both will harbor negative feelings toward each other. Developers will see the testers as a factor blocking them from moving fast. Testers will have little faith in the developers’ ability to deliver quality software.
In fact, both are right. If the two groups don’t collaborate, precious time and effort will be lost in testing a feature, fixing bugs, and testing the feature again. If the developers know what will be tested, they can anticipate the different test cases and write the code accordingly. They might even automate the test cases, which is a win for testers and developers.
Streamline Your Testing!
The major theme in this article is one of collaboration. Testers and developers (and other disciplines) should work together so that the software can be tested with the least amount of effort. This leads to a more efficient testing process, fewer bugs, and a faster delivery cycle. Top that off with good test environment management (which is also a collaborative effort) and secure data, and you have a winning testing process.
Author Peter Morlion
This post was written by Peter Morlion. Peter is a passionate programmer who helps people and companies improve the quality of their code, especially in legacy codebases. He firmly believes that industry best practices are invaluable when working towards this goal, and his specialties include TDD, DI, and the SOLID principles.