I want to share a success from one of our recent projects and how our journey may help your team. I'll share as much detail as possible while keeping in mind how much we respect our clients' privacy here at Beaufort Fairmont.
A little general information before we dive in…
The application under test is an API written in C# with a SQL Server database. We use Entity Framework 6 as our object-relational mapper (ORM).
The front end is a single-page app using AngularJS for a responsive user experience.
For e2e testing, we use SpecFlow (Cucumber for .NET) and Selenium WebDriver. We tie this into Jira through a few plugins to allow us to unite user stories and acceptance tests with automated testing for full test traceability.
Over time we’ve throttled back on our e2e efforts as the client has found the benefit of e2e tests outweighed by the time and effort we’ve invested in trying to get WebDriver to wait appropriately for async calls.
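To make that pain concrete, here is a sketch of the kind of explicit-wait code that accumulates in a SpecFlow/WebDriver suite when the front end makes async calls. The step text, selector, and timeout are illustrative assumptions, not code from the actual project:

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;
using TechTalk.SpecFlow;

[Binding]
public class SearchSteps
{
    private readonly IWebDriver _driver;

    // SpecFlow injects the driver via context injection (assumed registered elsewhere).
    public SearchSteps(IWebDriver driver) => _driver = driver;

    [Then(@"the results grid is populated")]
    public void ThenTheResultsGridIsPopulated()
    {
        // Every async call in the AngularJS front end needs an explicit
        // wait like this, and tuning these timeouts is where much of the
        // e2e effort went.
        var wait = new WebDriverWait(_driver, TimeSpan.FromSeconds(10));
        wait.Until(d => d.FindElements(By.CssSelector("#results .row")).Count > 0);
    }
}
```

Multiply that wait logic across every async interaction in the app and the maintenance cost becomes easy to see.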
For integration testing and unit testing (where we’ve focused the majority of our efforts and seen the largest benefit) we use C# and NUnit without a test runner like SpecFlow.
Our integration tests use two main strategies for interacting with the application:
- We use the API as one seam and the database as another, and
- We use the business logic layer as a seam to exercise code when we need to isolate a particular piece of logic.
The bulk of our test cases reside here, in the Integration Testing layer.
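A minimal sketch of the first strategy, acting through the API seam and verifying through the database seam with Entity Framework. The names here (`ApiClient`, `AppDbContext`, the `/api/customers` route) are hypothetical placeholders, not the project's actual types:

```csharp
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class CustomerApiTests
{
    [Test]
    public void PostCustomer_PersistsToDatabase()
    {
        // Act through the API seam (ApiClient is an assumed test helper
        // wrapping an HttpClient pointed at the app).
        var response = ApiClient.Post("/api/customers", new { Name = "Acme" });
        Assert.That((int)response.StatusCode, Is.EqualTo(201));

        // Verify through the database seam using the app's own EF context.
        using (var db = new AppDbContext())
        {
            Assert.That(db.Customers.Any(c => c.Name == "Acme"), Is.True);
        }
    }
}
```

Because both seams use the same language and libraries as the application, developers can read and extend tests like this without context-switching.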
We run through a TestServer from the Microsoft.Owin.Testing package. This allows us to skip the pesky setup, install, config and maintenance of IIS – not to mention all of its code.
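Spinning up the in-memory host looks roughly like this, assuming `Startup` is the application's OWIN startup class (the `/api/health` route is an illustrative assumption):

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Owin.Testing;
using NUnit.Framework;

[TestFixture]
public class InMemoryHostTests
{
    private TestServer _server;

    [OneTimeSetUp]
    public void StartServer() => _server = TestServer.Create<Startup>();

    [OneTimeTearDown]
    public void StopServer() => _server.Dispose();

    [Test]
    public async Task HealthCheck_ReturnsOk()
    {
        // _server.HttpClient routes requests straight into the in-memory
        // OWIN pipeline — no IIS install, config, or maintenance required.
        HttpResponseMessage response = await _server.HttpClient.GetAsync("/api/health");
        Assert.That(response.IsSuccessStatusCode, Is.True);
    }
}
```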
I could do an entire post countering all the objections to this setup. Unfortunately, you'll have to wait on my reasoning until I write that post. For now, just trust me. I know we're not testing the system as it lives in the wild. It is a risk we mitigate in other ways.
Over time, the client has realized numerous benefits of focusing on the Integration Testing layer:
- Bypassing the UI layer and focusing on integration testing against the API tells us implicitly when an issue is in the UI and explicitly when it is in the API or backend
- Tests are faster than UI tests
- Feedback is faster
- Developers are more willing to run these tests on their machines, so many defects are found & resolved before the first commit
- Because tests are in the same programming language as the app, developers are more willing to understand & modify them
- Technical complexity is lower
- Setup, maintenance, and troubleshooting are simplified
- The client now focuses on defects and potential problems rather than WebDriver wait times
- Using this approach, we’ve been able to move from 17% coverage to 77% coverage in less than 21 weeks with the equivalent of one Beaufort Fairmont Automation Craftsman.
After 23 weeks, automated tests trailed development work by only 1 sprint (2 weeks).
Cumulatively, we’ve created almost $1M (yes, one million dollars – pinky in mouth, Dr. Evil-style) worth of testing in less than 6 months.
The client releases code as often as they wish, rarely finding more than two or three issues during supplemental manual regression.
My goal in writing this was to show one real-life success we’ve tallied recently and to shed light on the technologies involved.
In other projects we use other stacks, technologies and approaches.
What are your thoughts about this scenario? How does your team approach testing differently?