I recently overheard a colleague make this statement: “If you’re creating thousands of tests, you’re doing it wrong!”
It was the wrong time and place to challenge him, so I'll give my friend the benefit of the doubt and assume his experience is limited to situations in which this assertion proved true.
My experience, and that of our team here at Beaufort Fairmont Automated Testing Services, is that sometimes thousands of tests make sense.
If my friend was limiting his context to UI testing or E2E testing, then he’s absolutely right. Developing automated UI test cases is slow and painful. If you’re writing thousands of tests in the UI, yeah… you’re probably going down the wrong path!
But at Beaufort Fairmont, we don't stop at UI and E2E testing. We stop when you (the customer) feel confident in the System Under Test. When you feel all the major risks and nightmare scenarios are accounted for by test cases, we know you're positioned to succeed.
Most of the time, that requires white-box testing.
If the words “white box testing” scare you, they shouldn’t. I’ll make sure to post something about why soon, but for now, try not to hyperventilate!
Our software engineers focus solely on the problem set of automated testing. When they get a look at your code base, they immediately see what needs to be tested and how to test it. Further, they're able to dive deeper and challenge fundamental assumptions that the UI may disguise.
It takes exceptional skill to create test scaffolding around what is usually legacy code. But doing so empowers testers to interact with parts of the system they otherwise wouldn’t be able to. It allows them to interact with the underlying logic of the system in ways the UI may not allow. In doing so, testers are able to challenge the logic of the internal system against stakeholders’ requirements.
It also allows them to create many test cases quickly. Sometimes thousands.
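To make that concrete, here's a minimal sketch in Python of what this kind of scaffolding can look like. The function and the case table are purely hypothetical, but the pattern is real: once a tester can call internal logic directly, a small data table can drive dozens (or thousands) of cases that would each cost minutes to exercise through the UI.

```python
# Hypothetical internal function that white-box scaffolding lets us call
# directly -- the UI would only expose it one form submission at a time.
def discounted_price(price: float, quantity: int) -> float:
    """Apply a 10% discount on orders of 10 or more units."""
    total = price * quantity
    return total * 0.9 if quantity >= 10 else total

# Table-driven cases: each row is (price, quantity, expected_total).
# Growing this table is how "thousands of tests" stays manageable.
CASES = [
    (5.00, 1, 5.00),      # single unit, no discount
    (5.00, 9, 45.00),     # just below the discount threshold
    (5.00, 10, 45.00),    # threshold boundary: discount kicks in
    (2.50, 100, 225.00),  # large order
]

def run_cases(cases):
    """Return the rows whose actual result differs from the expected one."""
    failures = []
    for price, qty, expected in cases:
        actual = discounted_price(price, qty)
        if abs(actual - expected) > 1e-9:
            failures.append((price, qty, expected, actual))
    return failures
```

An empty list back from `run_cases(CASES)` means every row passed; a non-empty one names exactly which inputs broke and how.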
It's also worth noting that we tend to write short test cases for several reasons – among them, ensuring that a particular test case fails only for the reason it was designed to catch. It's a concept similar to the single responsibility principle.
This allows the test case to run quickly (for easier debugging), but more importantly, it forces concise feedback. This makes it easy to know what failed and why. At a glance, the tester knows what’s wrong when the signal-to-noise ratio is high. This removes the burden of combing through reports and logs to learn what failed and why.
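Here's a small sketch of what those short, single-purpose test cases look like in practice. The helper and test names are hypothetical; the point is that each test asserts exactly one behavior, so when one goes red, its name alone tells you what broke.

```python
# Hypothetical helper under test.
def normalize_username(raw: str) -> str:
    """Trim surrounding whitespace and lowercase a username."""
    return raw.strip().lower()

def test_strips_surrounding_whitespace():
    # Fails only if trimming breaks -- nothing else is asserted here.
    assert normalize_username("  alice  ") == "alice"

def test_lowercases_mixed_case_input():
    # Fails only if case folding breaks.
    assert normalize_username("Alice") == "alice"
```

Because each case checks one thing, a failure report reading `test_strips_surrounding_whitespace FAILED` is the diagnosis – no log spelunking required.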
While I’m sure my colleague had his reasons for his statement, it’s important to consider context and intentions. Our goal isn’t to implement testing in an automated fashion, it’s to create confidence in the system under test.
Many times, thousands of short tests are more helpful, faster, and more telling than long UI or E2E tests!