Organizational Patterns for Automated Testing
I originally posted this in 2013 under the title “Write Tests Faster”. Since then, I’ve rethought it and revised it to reflect my current thinking. First and foremost, the title changed because this post is really about how technical decisions affect your team. Enjoy…
Some of the biggest gains in test productivity come when testers write automated tests quickly and easily.
Here are a few things we’ve tried and seen fail, followed by what we’ve found to be best practices.
Record and Playback
Record and playback insinuates that anyone can automate tests, right? Why would you need a skilled Automation Engineer or a thoughtful QA Engineer if you can just record the steps and play back the results?
Scripts that are recorded automatically in an automation tool are notoriously difficult to maintain.
What tends to happen with these tools is:
- We waste time rewriting test cases every iteration or release due to changes in the SUT
- We waste time rewriting test cases every iteration or release due to changes in the environment
- We waste time looking for defects in the test scripts themselves, rather than testing the product
Rarely is much time saved using record-and-playback tools. Instead, teams end up spending more time fighting with these tools than testing, and we rarely gain any confidence in the System Under Test (SUT).
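The maintenance cost shows up in the scripts themselves: a recorded script hard-codes every selector inline, so one UI change breaks every script that touched that element, while hand-written automation centralizes those details. A minimal sketch of the contrast (the page, selectors, and fake driver are hypothetical, standing in for a real WebDriver):

```python
# Hypothetical sketch: recorded scripts duplicate brittle selectors inline,
# while a hand-written page object keeps them in one place.

# --- What a record-and-playback tool typically emits, one copy per script ---
recorded_login_script = [
    ("type",  "//div[3]/form/input[1]", "alice"),  # brittle absolute XPath
    ("click", "//div[3]/form/button[2]"),
]

# --- Hand-written equivalent: selectors live in a single page object ---
class LoginPage:
    USERNAME = "#username"          # one place to update when the UI changes
    SUBMIT = "button[type=submit]"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user):
        self.driver.type(self.USERNAME, user)
        self.driver.click(self.SUBMIT)

class FakeDriver:
    """Stand-in for a real browser driver so the sketch runs anywhere."""
    def __init__(self):
        self.actions = []

    def type(self, selector, text):
        self.actions.append(("type", selector, text))

    def click(self, selector):
        self.actions.append(("click", selector))

driver = FakeDriver()
LoginPage(driver).log_in("alice")
print(driver.actions)
```

Every test that logs in goes through `LoginPage`, so a renamed field means editing one constant instead of re-recording every script that touches the login form.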
NOTE: I have heard of the existence of a white paper in which Google touts a method of record and playback that beats using Software Engineers in Test to produce test code. I’ve never found the article. If someone could point me to it in the comments, I’d be very appreciative!
Teach Everyone Programming
The premise seems to be that testers who want to program would rather do it as a job instead of testing.
This is clearly a false premise. If a tester wants to program she’s either already started learning or will do so on her own. Requiring it for her job is rarely motivating.
In test departments, those who don’t want to program far outnumber those who do.
Multiple times, I’ve taught test departments Java, C#, etc. in order to build out test systems. What I’ve found is that there is a minority of people that want to do this, and the rest glaze over by the time I start teaching the difference between objects and primitives.
There’s something else that happens here — we devalue the jobs, roles, and skills QA Engineers already have.
There is a great deal of value in the way we as Testers think: how we can quickly absorb a domain; challenge fundamental assumptions; find corner cases; and execute a wide spectrum of user behavior.
If training large groups of manual testers in a programming language is mandatory to use the tool you’ve selected, you’ve either chosen the wrong tool or decided on the wrong team strategy. There are many tools available that allow testers to create test cases without writing a line of code while ensuring maintainability.
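One common shape for such tools is keyword-driven (table-driven) testing: testers compose test cases from rows of plain-language steps, while a small group of engineers maintains the code behind each keyword in one place. A minimal sketch, with hypothetical keywords and a toy shopping-cart domain:

```python
# Hypothetical keyword-driven sketch: testers author tables of steps;
# engineers maintain the keyword implementations once, centrally.

KEYWORDS = {}

def keyword(name):
    """Register a function as a tester-facing keyword."""
    def register(fn):
        KEYWORDS[name] = fn
        return fn
    return register

@keyword("add to cart")
def add_to_cart(state, item):
    state.setdefault("cart", []).append(item)

@keyword("cart total is")
def cart_total_is(state, expected):
    assert len(state["cart"]) == int(expected), state["cart"]

def run_test(steps):
    """Execute a test case written as (keyword, argument) rows."""
    state = {}
    for name, arg in steps:
        KEYWORDS[name](state, arg)
    return state

# A tester authors this table without writing any keyword code:
state = run_test([
    ("add to cart", "apple"),
    ("add to cart", "pear"),
    ("cart total is", "2"),
])
print(sorted(state["cart"]))  # ['apple', 'pear']
```

The division of labor is the point: when the product changes, engineers fix one keyword implementation, and every test case written against that keyword keeps working unchanged.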
Forcing people who have no affinity for programming to do it anyway is a recipe for disaster.
Hire All Automation Engineers
If testers must have programming experience to write a test case, you have several problems. First, the cost of the tester is higher. Second, every test becomes a technical problem in need of a solution. Finally, you’re likely to end up with a bunch of programmers and very few experienced QA Engineers with the skills I mentioned above.
There are a number of other issues I’ve seen with this. For instance, few Automation Engineers have created code for a production-level system. The most important place for production-level quality of code is in the testing of production code. If we can’t trust the test code, what can we trust?
A Best Practice
Most teams need a mixture of automation engineering skills, domain expertise, and testing skill.
Many times, picking a tool that allows anyone to write a test case will benefit the team and increase the speed of writing automated tests.
What I’ve found is that the faster test cases can be written, read, reported on, maintained, and understood, the quicker the team gains confidence in the SUT.