
5 Ways to Simplify Your Automated Test Cases

Maintaining test automation can take a lot of time. So can understanding reporting for your test automation. Fortunately, you can greatly speed those things up.

A big part of my consulting practice is helping clients with test automation. And with client after client, I see testers, test automation engineers, and developers creating test cases that are long, that are difficult to work with, and that don’t have a clear purpose. If their test cases were more streamlined and focused, the teams that use them would save a lot of time.

Here are five pointers for improving test cases, garnered from my years of working with clients who are implementing test automation.

1. Decrease your scope

Testers tend to be holistic. We like completeness. We think of use cases broadly, from one end of the system to the other. We want to know all the breadth and depth of our systems under test. That’s a great thing … sometimes.

The scope of a test case should depend on the intent of the test case. In exploratory testing (a term coined by Cem Kaner in 1984, and a concept expanded by Elisabeth Hendrickson in her book Explore It!), you define a charter for a session of hands-on testing. That charter may be limited to a feature or set of features you want to learn about in the system under test. Because the testing is exploratory, you’d do several types of experiments with the features: long sequences of experiments, different sequences of actions, and permutations of actions, all for the purpose of exploring the application and finding issues.

Test automation, however, does not explore. A major reason to create test automation is to provide a mechanism to alert you when the system under test (SUT) is doing something different from what you think it should. Long, meandering test cases that were recorded or scripted when a tester was in the exploratory mindset may inform our test automation, but they do not direct our scripts.

So determine what you want test automation for, and then narrow your test’s scope to that part of the feature.

For instance, let’s say you have a test script that is supposed to tell you whether a change of password in the SUT’s user profile worked. This script logs in, goes to the profile section of the site, verifies that the profile image is correct, creates a password, changes that password, tries to change it again, logs out and logs back in, changes the password a third time, tries some passwords that shouldn’t be accepted, and sees if the email address is correct.

That is a busy script! It’s a great sequence of events for exploring functionality. But it goes well beyond the scope of the test we should be writing. Verifying that the change of password works doesn’t require looking at the profile picture or checking the email address; that’s all noise. You might want to automate those checks as well, but it’s better to split them into separate test cases.

You could, for example, write positive and negative test cases: one to change the password and verify the change, and one to verify that incorrect passwords are rejected. Some of the test cases could be data-driven to avoid duplicate code. The important thing is that each test is specific to its purpose, has limited scope, and has less code to execute and maintain over time.
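
To make that concrete, here’s a minimal sketch of how the split might look. It uses pytest, and the `profile_page` fixture and its methods are hypothetical stand-ins for whatever page objects or clients your framework provides:

```python
import pytest


def test_change_password(profile_page):
    """Positive case: a valid new password is accepted and confirmed."""
    # profile_page is a hypothetical fixture wrapping your page object or
    # API client; substitute whatever abstraction your framework provides.
    profile_page.change_password(old="Current#1", new="Valid#Passw0rd")
    assert profile_page.password_change_confirmed()


# Negative cases, data-driven so the invalid inputs share one test body.
@pytest.mark.parametrize("bad_password", ["", "short", "no digits", "a" * 129])
def test_invalid_password_is_rejected(profile_page, bad_password):
    """Each invalid password is rejected; nothing else is verified here."""
    assert profile_page.password_rejected(new=bad_password)
```

Each test now reads as a statement of its purpose, and adding another invalid password is one more entry in the parametrize list rather than another copy of the script.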

When you’re thinking about how to limit scope, imagine the people reading your tests later. Will they be able to easily understand why the test case exists? If they can’t understand the intent behind the test, they can’t maintain that intent.

 

Get the 5 ways cheatsheet here!

 

2. Fail for one, and only one, reason

I believe most test cases should fail for one and only one reason. If your “Valid user logs in” test case only verifies that a valid user has logged in, then you can quickly start working through a problem flagged in the test automation report. If, on the other hand, the “Valid user logs in” test case could also fail because it verifies the page title, the copyright on the bottom of the page, and the company logo in the header, you have a lot more troubleshooting to do. Which verification point failed? Why did it fail? Did more than one fail?

In general, keep test cases to one verification point or tightly grouped verification points that all work together to tell you whether a feature works as expected.
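
As a sketch, a test that fails for one and only one reason might look like this (the `app` fixture and its methods are hypothetical placeholders for your own framework):

```python
def test_valid_user_logs_in(app):
    """Fails for one reason only: the valid user could not log in."""
    # app is a hypothetical fixture for the system under test.
    app.login(user="valid_user", password="valid_pass")
    assert app.logged_in_as("valid_user")


# Page title, copyright, and logo each get their own narrow test
# instead of being piled into the login test above.
def test_home_page_shows_expected_title(app):
    assert app.page_title() == "Home"
```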

Similarly, don’t build verification points into the navigation utilities in your framework. You don’t want a test to fail because a navigation helper verified something the test wasn’t checking. When navigation itself breaks, that’s a runtime failure, not an indication of whether the test’s intended functionality works.
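
One way to keep verification out of navigation utilities is to have them raise a plain runtime error when navigation itself breaks, leaving assertions to the tests. A hypothetical sketch, assuming a Selenium-style `driver` fixture:

```python
class NavigationError(RuntimeError):
    """Raised when a framework utility can't get where it was told to go."""


def go_to_profile(driver):
    # Framework navigation utility: no assertions in here. If the page
    # never loads, that's a runtime failure of the run, not a test verdict.
    driver.get("https://example.test/profile")
    if "Profile" not in driver.title:
        raise NavigationError("profile page did not load")


def test_email_field_is_present(driver):
    go_to_profile(driver)  # may raise NavigationError; never "fails"
    assert driver.find_element("id", "email").is_displayed()
```

With this split, a broken profile page shows up in your report as an error in the run, clearly distinct from a genuine failed verification.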

3. Identify responsibility (and hold to it)

Similar to limiting scope, asking, “What is the responsibility of this test?” can be very helpful. Uses of “and” or “or” in a test’s name or description may indicate that the test case has more than one responsibility. If you can’t state the responsibility of the test case easily in one sentence, your purpose in writing it may not be clear. Just remember: As with writing code or formalizing and communicating a concept, it is much more difficult to write a clear, concise test case than it is to write a long, meandering one.
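
A quick illustration of the naming smell (the test names here are invented for the example):

```python
# Smell: "and" in the name hints at two responsibilities in one test.
def test_user_can_register_and_update_profile(app): ...


# Clearer: one responsibility per test, each stated in one sentence.
def test_user_can_register(app): ...


def test_registered_user_can_update_profile(app): ...
```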

4. Ask, “What is the simplest thing that could possibly work?”

Ward Cunningham and Kent Beck were talking about making progress when programming, but I like to apply this quote of theirs to test automation: “Given what we know right now, what’s the simplest thing that could possibly work?”

Ask yourself: Are you verifying things the simplest way you could? Are you making the test case more complicated than it has to be? Is there an easier way to get the data you need? Is there a simpler way to navigate to the section of the app you need to get to? Can you do the same operation with fewer steps while still making the test clear?
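
For example, if a test needs an existing user, one API call is usually simpler and faster than clicking through the whole registration flow. A sketch, assuming the SUT exposes a hypothetical `/api/users` endpoint:

```python
import requests  # assumption: the SUT exposes an HTTP API for test setup


def make_user(base_url, name):
    # Simplest setup that could possibly work: one API call instead of
    # driving the whole registration flow through the UI.
    resp = requests.post(f"{base_url}/api/users", json={"name": name})
    resp.raise_for_status()
    return resp.json()


def test_new_user_sees_empty_dashboard(app, base_url):
    user = make_user(base_url, "dash-test-user")
    app.login_as(user)
    assert app.dashboard_items() == []
```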

5. Avoid unnecessary dependencies

Avoiding dependencies between test cases is hardly unusual advice, but it remains one of the best ways to simplify automated tests. The problem is that it’s difficult to be aware of dependencies. So make a conscious effort. If you have test cases that can run in only one order or can’t run in parallel, find out why. If you depend on actions that aren’t relevant to your tests, find out why. If you can avoid it, do.
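
One common way to avoid order and data dependencies is to give each test its own setup and teardown, so it can run alone, in any order, or in parallel. A sketch using a pytest fixture; the `api` client and its account methods are hypothetical:

```python
import uuid

import pytest


@pytest.fixture
def fresh_account(api):
    # A unique account per test: no shared data, no required run order,
    # and safe to run in parallel.
    account = api.create_account(name=f"test-{uuid.uuid4().hex[:8]}")
    yield account
    api.delete_account(account.id)  # teardown runs even when the test fails


def test_new_account_has_zero_balance(fresh_account):
    assert fresh_account.balance == 0
```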

Keep it simple, stupid

I see many teams struggling with the maintenance and upkeep of test automation. One of the most common problems for them is the way they are designing their automated test cases. If you’re in one of those situations, look at your test cases and consider these actions: decrease scope; fail for one reason; identify responsibility; ask, “What’s simple?”; and avoid dependencies.

These simple steps have helped me over the years. They can help your team too.

 

Download the convenient reference sheet for these 5 Ways of Simplifying Test Cases

 

This article was originally posted on TechBeacon on 8/23/17. It is reposted here with express written permission from TechBeacon.

 

10 responses to “5 Ways to Simplify Your Automated Test Cases”

  1. Paul,
    Your last two points (Test Case Dependencies, and how to avoid them along with KISS) are what I always tell people as a way to keep automation complexities under control. Regarding dependencies I tell people to categorize using the following criteria:
    1) Data Dependency – Does the script depend on certain data being set up or on certain data conditions existing?
    2) Run Order Dependency – Does the script rely on another test being run first, or as part of its own execution, to set a certain system condition/state?
    3) Data and Run Order Dependency – Does the script rely on both of the previous items? That is, does it rely on data produced by a previous script or on the condition of the data set by a previous script?
    4) Independent – The script does not rely on another script for system state or data and can be run independently. It either creates and cleans up its own data, and/or the state of the system is set and reset as part of its execution.

    This way I can categorize, group, and prioritize tests to run effectively. Works for me so far. Other people’s mileage may vary.

    • I love these, Jim! Very well thought out. I have about 50 pages on the idea of data and how to manage it for test automation. I need to do something with it soon. Ping me privately if you want to review or just take a look.

  2. These days, I look at the phrase “Keep it simple, stupid” and I read it as two instructions – “Keep it simple” and “Keep it stupid”. In other words, focus on the basics, even to the point of drilling right down to what may seem like the most basic, knee-jerk action and its outcome, because those most basic of actions are the building blocks that all our applications are based upon. And if they’re wrong, then all the sophistication in the world won’t save them. Instead, look at the simple, and then go down one more level to the stupid.

    • Sounds like good advice! I’ve said many times before, when speaking or in my webinars, that when I look at my own code from 2 weeks ago it looks foreign. So, like you, I want to make it so that anyone can understand the code and the design/ideas/intentions behind it. I really benefited from Martin Fowler’s book “Refactoring” in that regard!

  3. Hey Paul,
    You always provide good insights and information. So thanks for that. I wanted to get your thoughts about your second point on “Fail for one, and only one, reason”. This has been an argument that I have been having for multiple years and I am always looking for different perspectives. I disagree with that statement when it comes to something like acceptance tests, especially GUI. For these kinds of tests, I believe that it is better to “run fast, fail fast”. If for example you are checking login, wouldn’t you want to first validate that the correct page actually opened, prior to interacting with any elements? If you don’t, then when you try to type in the login information, it will fail saying that elements don’t exist. Which is true, but these are the symptoms of the root cause rather than the actual reason why the test failed. To further continue this scenario, let’s say you wanted to do something on the next page after you log in. Now, if you don’t assert that the user logged in successfully, the test will fail saying that an element wasn’t found. Again, this is true, but misleading and confusing to see such an error message. I believe it would be less confusing to see that “the user wasn’t able to successfully log in”, which is the correct issue, not one that happens upstream. Therefore, I think it’s important to validate multiple times in something like a GUI automation test. Not 100% sure if this would apply at different testing levels. But I have a feeling it might. What do you think?

    • Nikolay, Thanks for taking the time to read this and making the extra effort to share your thoughts!

      I guess first I’d note that this list is “5 ways to simplify test cases” as opposed to “5 things everyone should always do in test cases.” There is a time and a place for things. You’ll rarely hear me advise clients or others with terms like “always” and “never”. For instance, in your case I don’t know what programming language, framework, stack, etc. you’re using. I don’t know anything about your team’s skillset or your organization’s maturity. Any of those factors could play into how I’d recommend you structure the test cases for your team.

      I will say in general, I like it when tests fail for one and only one reason at all levels: UI, service, and unit. The benefits I’ve experienced are huge. In the case you described, I’d like to know more about why there wouldn’t be one test to check if login works. That way, when we see that test fail, we know that any other tests that need to log in would have failed too. And when we see a sea of red in the report, we know not to waste our time on the others.

      That likely wouldn’t negate the need for the test you described and it should be easy to write if you’ve already written the one you described.

      What are your thoughts?

      • Hey Paul,
        Sorry for such a late response. I randomly stumbled on your blog today and saw that you had responded to my comment. I don’t think I receive emails when you respond back to me 🙁 Maybe it’s something you can add so that we can have such great conversations?

        Anyways, I definitely hear you about the “always” and “never”. IT is rarely black and white like that. I will definitely have a test for a simple login scenario. What I was suggesting was that if you are attempting to test something beyond the login, say the next page. In this case, this is a different test. The login is already tested in a separate scenario. Now, you need to go through the login to get to the next page, to perform the final validation.

        I’m just asking to better understand. Not questioning your decisions 🙂

        What if the login fails in this test, but you don’t check it? Then your failure will be that you tried to interact with an element on the next page, but that didn’t work. Are you suggesting that this is irrelevant because your other test that validates the login will fail, hence providing you the root cause of the problem?
        What if you don’t have the login test automated, would you not want a friendly error message letting you know why the test above failed (the one that logs in and does something else)?
