
Define the Interface

23 March 2018

Something I’ve been talking about at conferences as a part of several of my talks (see “From the Inside Out” and “Technical Deep Dive: From the Inside Out”) is the tactic of “Define the Interface”. In fact, I brought it up recently at TISQA 2018 as a part of my “Beyond the UI” talk. I really appreciate the 100+ people who crowded the room (if you missed it, there is something similar in this recorded webinar).

“Defining the interface” is a key technique in the process of getting testing and development synced up in agile environments – a major problem for many teams who want to create automation in-sprint. Here is a brief description of how you do it.

The Sprint Planning Meeting

The key time to start on this is during sprint planning meetings. When you’re committed to test automation in sprint, you want to ask the question, “How are we going to test this?” And embedded in that question is “How are we going to test this with automation?”

There are many ways to automate: many levels, tools, and frameworks. I’ll talk about all those things in another post sometime. The main question here should focus us on the idea of automation as a first-class citizen and a product of the sprint, just like the feature code.

Get Tests Written Before Production Code

The biggest reason most teams don’t sync up automation for testing and development during the sprint is that they feel they have to wait for development to finish their tasks in order to begin. This feeling is normal. I think most testers and many test automation engineers haven’t been exposed to the idea of writing tests before the code under test is finished. How would you do it? What would you test? How do you know what to write?

Agile methods like scrum help with this. A story tells us what the user is attempting to accomplish. A task is a piece of this story that can be accomplished and verified. If we truly know what a task is and what needs to happen, then we should also know how automation will interact with it. This point of interaction is what I call “the interface”.

The Interface

This is the point at which the test code will connect to the code under test. When we ask “How will we test this?”, one criterion for a good answer is making sure we define this interface.

Let’s take the example of an API endpoint.

Jill is the developer on the project and has chosen the task of building a microservice to return a list of usernames in the system. Jill is in the sprint planning meeting with Jack, the test automation engineer, as well as the rest of the team. The team asks, “How are we going to test this?”

Since all other APIs on the project are RESTful, this one will be too, Jill & Jack agree.

Jack is familiar with RESTful APIs and knows that he can easily write a test case in REST Assured for this. He clarifies that it will use the GET method and agrees on the URL. There are no parameters, but it uses the same authentication as the other APIs.

Now, if it had been more difficult to come up with this, they might have “parking lotted” the conversation for later. Implementation details for test automation, just like those for feature code, belong in these meetings only to the extent that they serve the team. When the team is not being served by the conversation, or the planning isn’t moving forward because of it, talk of implementation details may need to halt until a more appropriate time.

Jack now knows what the interface looks like. He can start building the test. And in order to make sure his test works, he just needs something to test it against.

The Fake SUT

This is a part where a lot of teams miss out. Testing the automation code in advance of the application code’s development is very important. We want our test code to be as correct, defect-free, and resilient as possible. To do that, you need to test it against the SUT. But the SUT isn’t built yet. So what do you do? You fake it.

Jack uses mockable.io to build out an endpoint in exactly the same way the real endpoint will be created. He takes an educated guess at what the JSON from the endpoint will look like and mocks out the response. He writes his REST Assured code and tests it against the mock endpoint. Now he knows his test code will work under some circumstances.
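To make the pattern concrete, here is a minimal sketch of the same idea using only Python’s standard library instead of mockable.io and REST Assured (the post’s example is in Java). The endpoint path, the bearer-token auth, and the shape of the JSON are all educated guesses standing in for the real service, exactly as Jack’s are:

```python
# A fake SUT for the test code: a local HTTP server that mimics the agreed
# interface (GET /users, JSON body). Path, auth, and response shape are
# assumptions, stand-ins for the real microservice.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

GUESSED_RESPONSE = {"usernames": ["jill", "jack"]}  # a guess at the real JSON

class FakeUserService(BaseHTTPRequestHandler):
    def do_GET(self):
        # Only the agreed path exists; anything else is a 404.
        if self.path != "/users":
            self.send_error(404)
            return
        body = json.dumps(GUESSED_RESPONSE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

def get_usernames(base_url, token="fake-token"):
    # The test code itself: later, only base_url changes to hit the real SUT.
    req = Request(f"{base_url}/users",
                  headers={"Authorization": f"Bearer {token}"})
    with urlopen(req) as resp:
        assert resp.status == 200
        return json.loads(resp.read())["usernames"]

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), FakeUserService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(get_usernames(f"http://127.0.0.1:{server.server_port}"))
    server.shutdown()
```

Because the fake is entirely under the test author’s control, any failure here is a bug in the test code, not in the (nonexistent) SUT.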

Keep in mind, by using an overly simplified, fake SUT to test his test code, Jack simplifies the problem of testing automation code by avoiding the biggest variable test code has as a dependency: the actual code under test! Jack doesn’t have to figure out whether the problems he finds are with his test code or the SUT, because he’s not using the SUT. Every variable in writing the test code is completely under Jack’s control.

Sync’d

When Jill is done, she and Jack change the URL the test points to (from mockable.io to the real endpoint). Jill runs the test against her code before committing and removes the “NotReady” tag from the test. The test is now ready to run in CI.
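One way to make that switch a one-line (or zero-line) change is to keep the base URL out of the test itself. This sketch is illustrative; the environment variable name, the mockable.io subdomain, and the “NotReady” flag are all hypothetical:

```python
# Keep the target URL in configuration so pointing the test at the real
# endpoint is a config change, not a code change.
import os

MOCK_URL = "https://demo1234.mockable.io"  # hypothetical mock endpoint
BASE_URL = os.environ.get("USERS_API_BASE_URL", MOCK_URL)

# A stand-in for the "NotReady" tag: CI skips the test while this is True.
NOT_READY = os.environ.get("USERS_API_READY", "false") != "true"

def users_endpoint():
    # Every test case builds its URL from here, so there is one place to change.
    return f"{BASE_URL}/users"
```

In a tagged framework (JUnit categories, TestNG groups, pytest markers), removing the tag plays the same role as flipping this flag.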

When Jill commits, the test passes and the task is complete.

FAQs

What if I don’t use an API?

My friend Angie Jones does a similar activity with UI testing. She asks the team to define the names of elements for a Selenium WebDriver test so that the test automation engineers can write tests first. Some teams have a naming convention, so element names are predictable and known before the elements are created.

Other teams may have monolithic codebases. All you need is a seam for automated tests – an input and an output. Maybe your interface is a Java interface or abstract class that exists to give you a way to write your automation against a method stub before the application code is written.

Other teams may not be able to get so deep into the codebase. Maybe the input to a test case is a file that a process picks up for processing; we put a file in that place. The interface would be the system, the file location, and the time to place the file. If the output is in a database, for instance, we can write the query whether the table is there yet or not. We can even create the file location and a fake table just for testing, just to exercise the query the automated test will use.
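Here is a sketch of that file-in, database-out interface. The file name, table name, and columns are assumptions, and an in-memory SQLite table and a faked “process” stand in for the real system:

```python
# Sketch: the test's input is a file dropped in an agreed location; its
# output check is a query against an agreed table. All names are hypothetical.
import sqlite3
import tempfile
from pathlib import Path

def run_test():
    drop_dir = Path(tempfile.mkdtemp())                   # agreed file location
    (drop_dir / "users.csv").write_text("jill\njack\n")   # the test input

    db = sqlite3.connect(":memory:")                      # fake table for the query
    db.execute("CREATE TABLE users (name TEXT)")
    # Here the real process would pick up the file; we fake its effect:
    for name in (drop_dir / "users.csv").read_text().split():
        db.execute("INSERT INTO users VALUES (?)", (name,))

    # The exact query the automated test will later run against the real DB:
    rows = db.execute("SELECT name FROM users ORDER BY name").fetchall()
    return [r[0] for r in rows]
```

Everything except the query and the file-drop step gets swapped out when the real process exists.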

There are an infinite number of “interfaces” to the systems out there. We just need an input and an output, and to agree with developers on how that input and output will work.

What about when the design changes?

Yes, the developer WILL change the interface while writing the code. Expect this. Don’t be dismayed when they change it. It’s an opportunity for you as a test automation engineer. How can you modify your test code to work with the change?

Many people will argue, “If the interface is going to change, there’s no value in writing test automation code against it.” But what this view misses is that modifying existing code is far easier for most people than creating it from scratch. Also, once we make an assumption about the interface and start using it, we can create many test cases with that fixture code. When the fixture is written well, changing the interface is a small task: one change in the codebase, independent of the tests.

Testers will be able to write test cases more easily against a faked or mocked out SUT than if they have nothing.

Finally, having an interface, even a slightly wrong one, forces thought and conversation about testing the coming SUT that most teams don’t have until late in the cycle, when testing is already behind and test automation hasn’t yet been written. Why not use a proven technique like this one to force those conversations earlier in the iteration and build higher-quality code from the beginning?

I’d rather be slightly wrong, know it, and close to done than not started, ignorant (of whether the test code I intend to write will work), and believe I’m right.

There is value in building automated tests even to a slightly wrong interface, and it far outweighs waiting to build test code until after the SUT is built.

Paul, are all these your original ideas?

No. Just like everyone else, I stand on the shoulders of giants. The last section of Kent Beck’s “Test-Driven Development: By Example” uses concepts like this in creating xUnit; he touches on the key ideas there. The idea of a seam comes from Michael Feathers’s book “Working Effectively with Legacy Code”.

I started using this technique in 2002, but didn’t really understand what I was doing and how to repeat it until about 2010. I believe I first spoke about this in about 2014.

Let me know how it goes with this technique and what you do differently that could help people in the comments!

 

Photo by Thomas Jenson used with permission from Unsplash

About the Author

Paul Merrill

Paul Merrill is Principal Software Engineer in Test and Founder of Beaufort Fairmont Automated Testing Services. Paul works with clients every day to accelerate testing, reduce risk, and to increase the efficacy of testing processes. You’re Agile, but is your Testing Agile? An entrepreneur, tester, and software engineer, Paul has a unique perspective on launching and maintaining quality products. He also hosts Reflection as a Service, a podcast about software development and entrepreneurship. Follow Paul on Twitter @dpaulmerrill.
