In a recent post about the technology behind a major automated testing success, I mentioned I could write an entire post about why it’s OK to test systems in configurations and environments different from production. This is that post.
I love integration tests. Whether I’m hitting an API and checking the database for changes or interacting with custom modules, I love getting deep into customer code.
While this approach isn’t appropriate for every team or every situation, it’s one of my personal favorites when it fits.
Integration tests interact with system components more intimately than end-to-end tests do. They know more about the internals of the system. In most cases, they use code rather than user actions to stimulate a component.
Many times this means configuring the code under test in a manner unlike production. Sometimes it means creating scaffolding (supporting code for tests) in order to work with a component.
For instance, we may need to run a web service in memory rather than starting up an app server.
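As a minimal sketch of that idea (assuming the service happens to be a small Flask app; the route and data are invented for illustration), the framework’s built-in test client lets us exercise the service entirely in memory:

```python
# A minimal sketch: driving a hypothetical Flask service in memory,
# with no app server and no network socket involved.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/orders/<int:order_id>")
def get_order(order_id):
    return jsonify({"id": order_id, "status": "open"})

def test_get_order_reports_open_status():
    client = app.test_client()           # in-memory WSGI client
    response = client.get("/orders/42")  # no HTTP server is started
    assert response.status_code == 200
    assert response.get_json()["status"] == "open"
```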
It may mean we load up a customer library with a fake data access layer and listen for how the library attempts to interact with the real one.
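Here’s one way that listening might look, sketched with Python’s unittest.mock; OrderService and its repository are hypothetical stand-ins for the customer code and its data access layer:

```python
# A sketch of listening to a component through a fake data access layer.
# OrderService and its repository interface are hypothetical.
from unittest.mock import Mock

class OrderService:
    """Stand-in for the customer library under test."""
    def __init__(self, repository):
        self.repository = repository

    def close_order(self, order_id):
        order = self.repository.find(order_id)
        order["status"] = "closed"
        self.repository.save(order)

def test_close_order_saves_updated_status():
    fake_repo = Mock()
    fake_repo.find.return_value = {"id": 42, "status": "open"}

    OrderService(fake_repo).close_order(42)

    # The fake records every interaction, so we can check how the code
    # tries to talk to the real data access layer without a database.
    fake_repo.save.assert_called_once_with({"id": 42, "status": "closed"})
```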
It could mean simulating input from a 3rd party service and listening for responses within the code under test.
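For example, here’s a sketch using the requests and responses libraries to feed the code under test a canned 3rd party reply; the carrier URL and payload shape are made up, not any real vendor’s API:

```python
# A sketch of simulating a 3rd party service's responses over HTTP.
# The carrier API and payload are hypothetical.
import requests
import responses

def lookup_shipping_status(tracking_id):
    """Hypothetical code under test that calls a carrier's API."""
    resp = requests.get(f"https://api.example-carrier.com/track/{tracking_id}")
    resp.raise_for_status()
    return resp.json()["status"]

@responses.activate
def test_lookup_returns_status_reported_by_carrier():
    # Register the canned 3rd party response the test will "hear".
    responses.add(
        responses.GET,
        "https://api.example-carrier.com/track/ABC123",
        json={"status": "in_transit"},
        status=200,
    )
    assert lookup_shipping_status("ABC123") == "in_transit"
```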
There are many, many other ways to set up integration tests.
Generally we start writing these tests while trying to learn what the code actually does. We’re working toward a point where the test cases ensure the code does what we expect. This is the “feedback” I mentioned earlier.
We’re working to ensure that this component adheres to an agreed-upon contract. The expectations and setup of a test case persist this contract. A report of a test run shows us the degree of adherence.
When we understand whether components adhere to their contracts, we understand whether they can contribute to meeting user-level expectations (without needing a user or the overhead of a UI test).
This gives us a high level of understanding of the code.
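To make the contract idea concrete, here’s a tiny hypothetical example: the test’s setup and expectations are the written-down agreement about what the component owes its callers, and a test report tells us how well that agreement is being kept. The discount rules below are invented purely for illustration.

```python
# A sketch of test cases persisting a component's contract.
# The component and its discount rules are hypothetical.
def apply_discount(subtotal_cents, loyalty_tier):
    """Hypothetical component under test."""
    rates = {"gold": 0.10, "silver": 0.05}
    return round(subtotal_cents * (1 - rates.get(loyalty_tier, 0.0)))

def test_gold_members_get_ten_percent_off():
    # Setup (inputs) plus expectation (output) spell out the contract;
    # a failing run shows exactly where adherence broke down.
    assert apply_discount(10_000, "gold") == 9_000

def test_unknown_tiers_pay_full_price():
    assert apply_discount(10_000, "unknown") == 10_000
```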
Am I suggesting that an application team should only do integration testing? No. Absolutely not.
Am I saying that one set of integration tests, set up one particular way, is enough to give us high confidence that the application does what we believe it should? No.
So when is this a good practice?
- When other test cases aren’t giving the feedback you need
- When there is significant doubt or lack of confidence in a particular component
- When other test cases are giving you the feedback you need but too slowly or with too much noise
- When external components are changing
- When you have no other options for testing
- When you do not understand what a component does
One of the biggest impediments to highly effective automated testing is perfection. If this post does nothing else for you, I’d like it to give you confidence in moving away from requiring perfection in your automated tests and your testing environments.
Sometimes it’s OK to use tools like integration tests as an iteration in the direction of better automated tests. Sometimes it’s OK to build confidence in the codebase. Sometimes it’s OK to delay test cases that are perfectly identical to what a user would do in favor of getting SOME feedback ASAP. Sometimes it’s OK to launch a piece of code in an environment that is nothing like production if it helps your developers avoid writing code with defects.
I can’t possibly list all the advantages of writing integration tests, but I will tell you this: they can cut the time it takes to find defects many times over. They can shrink the problem set involved in automated testing and provide functionality needed for other levels of testing.
Don’t let a quest for perfect test cases and perfect configurations get in the way of all the benefits of other, less comprehensive testing solutions, like integration testing.