Last week I posted Why It’s OK to Test Differently Than in Production. I ended up writing that post twice, and I feel like I came at it from two different angles. I wasn’t sure what to do with the second version, so I’m just throwing it out there for you guys. I’m very interested in your thoughts; let me hear them in the comments!
Recently, I wrote about a project in which our engineers wrote automated tests for the API of a product. The only major automated testing we did was with integration tests.
Further, we ignored the UI in our automation and used a TestServer instead of the actual application server.
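For anyone who hasn’t used one: a TestServer (I’m assuming ASP.NET Core’s Microsoft.AspNetCore.TestHost here) hosts the application’s request pipeline in memory, so tests can hit the API over an HttpClient without deploying anything. Here’s a minimal sketch of the pattern — to be clear, this is illustrative and not our actual code; the Startup class, xUnit, and the /api/customers/42 endpoint are stand-ins for whatever your application exposes:

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Xunit;

public class CustomerApiTests
{
    [Fact]
    public async Task GetCustomer_ReturnsOk_ForExistingCustomer()
    {
        // Build the real application pipeline in memory: no Kestrel/IIS,
        // no deployed environment, and no UI in the loop.
        var builder = new WebHostBuilder().UseStartup<Startup>(); // Startup is a placeholder

        using var server = new TestServer(builder);
        using var client = server.CreateClient();

        // Exercise the API the same way the UI eventually would.
        var response = await client.GetAsync("/api/customers/42"); // hypothetical endpoint

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}
```

The specific endpoint isn’t the point; the point is that the test exercises the real pipeline and the real backend logic without a browser or a deployed environment.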
When I wrote it, I imagined colleagues shaking their heads, balling up their fists, slamming them down on the table, and hopefully finding a way to continue reading — regardless of how upset the idea made them.
If you were one of those folks, I’m hoping you’ll read this post, then go back and read the other one.
I will ask, however, that you remember that every situation in testing and every application is different. Assessing these situations and knowing how to approach them typically comes over the course of a long career with lots of data points (i.e. failures) along the way.
Most of the time, assessing how to test comes down to a few questions:
- How big is the team (test-to-dev ratio)?
- What is the test team’s experience with automation?
- Who will maintain the tests?
- Why are we writing tests (what benefit do we hope to gain)?
- Where are we today with automated testing?
- What risk are we willing to accept?
In this case, the team was very small. Basically there was no traditional test team. There were 4 developers for an app that was over 63,000 lines of code (not huge, but more than trivial).
Since there was no test team, the second question (experience of the test team with automation) was moot.
The dev team, our automation craftsmen, and release management would jointly maintain the tests over time.
We were writing tests to reduce the risk of having no test team and to reduce the labor needed to manually test each release.
Our manual testing accepted a large amount of risk. While we tested the major functional areas, we didn’t dive deeply, and our test plans were very high-level. Pressure from the stakeholders to storm the market, coupled with a small team and a relatively small (though complex) application, forced us to accept that risk before we had any automated testing.
On the other hand, we had a very senior and accomplished team combined with stakeholders who knew the domain very well. This helped support the decision to accept as much risk as we did.
Given this context, I’m hoping it’s becoming easier for you to understand the decision to use only integration testing as a starting point for automation.
Any automation was going to reduce the risk we started with. Focusing our automated testing on the backend (where the majority of the logic resided) allowed us to find significant issues more quickly.
It also showed developers immediately whether an issue was in the UI or the backend.
Additionally, the client was committed to unit testing and end-to-end testing as we moved forward.
While I would not suggest this approach for every client, the confidence this client gained from our introduction of integration tests was enormous and allowed them to:
- Move toward deeper use cases with manual testing
- Focus their manual testing on untested areas of the application
- Know immediately when changes to the code broke existing functionality
- Gain insight into the gap between product owner expectations and the behavior of the delivered product
There were also technical benefits that I’d like to expound on in a later post, like:
- Forcing decisions regarding the test framework’s data strategy
- Becoming more familiar and comfortable with the underlying application than e2e testing would have allowed
- Creating fixtures & scaffolding we can use later in other tests, like e2e (there’s a rough sketch of this just below)
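To make those last two bullets a little more concrete, here’s a rough sketch of the kind of shared fixture I mean, again assuming ASP.NET Core and xUnit. Startup, the "Testing" environment name, and SeedKnownTestData are placeholders rather than the real project’s code:

```csharp
using System;
using System.Net.Http;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Xunit;

// Shared scaffolding: one in-memory server plus known seed data.
public class ApiFixture : IDisposable
{
    public TestServer Server { get; }
    public HttpClient Client { get; }

    public ApiFixture()
    {
        var builder = new WebHostBuilder()
            .UseEnvironment("Testing")   // forces the "which data store do tests hit?" decision
            .UseStartup<Startup>();      // Startup is a placeholder

        Server = new TestServer(builder);
        Client = Server.CreateClient();

        SeedKnownTestData();             // hypothetical helper: put the data store in a known state
    }

    private void SeedKnownTestData()
    {
        // e.g. insert a handful of known customers/orders that tests can rely on
    }

    public void Dispose()
    {
        Client.Dispose();
        Server.Dispose();
    }
}

// Any test class can reuse the same scaffolding, and so could a later e2e suite.
public class OrderApiTests : IClassFixture<ApiFixture>
{
    private readonly HttpClient _client;

    public OrderApiTests(ApiFixture fixture) => _client = fixture.Client;

    // [Fact] methods use _client here...
}
```

Once a fixture like this exists, the data strategy stops being an afterthought: every new test (integration or otherwise) starts from the same known state instead of inventing its own.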
Thanks for reading, and I’d love to hear your thoughts about this approach in the comment section!