Developing an Automation Strategy—Where Do We Start?

A simple, step-by-step approach sounds incompatible with an automation strategy, but in agile testing we try to understand the problem first. Deciding where and how to start with automation requires a bit of thought and discussion. As your team looks at testing challenges, you’ll need to consider where automation is appropriate. Before you start searching for a particular automation tool, you’ll want to identify your requirements.

You need to understand what problem you are trying to solve. What are you trying to automate? For example, if you have no test automation of any kind, and you start by buying an expensive commercial test tool thinking it will automate all your functional tests, you may be starting in the wrong place.

We suggest you start at the beginning and look for your biggest gain. The biggest bang for the buck is definitely the unit tests that the programmers write. Instead of starting at the top of the test pyramid, you may want to start at the bottom, making sure that the basics are in place. You also need to consider the different types of tests you need to automate, and when you’ll need to have tools ready to use.
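To make the bottom-of-the-pyramid idea concrete, here is a minimal sketch of the kind of programmer-written unit test we mean, using Python’s standard `unittest` module. The `calculate_discount` function and its business rule are hypothetical examples, not from any particular system:

```python
import unittest


def calculate_discount(order_total):
    """Hypothetical business rule: 10% off orders of $100 or more."""
    if order_total >= 100:
        return round(order_total * 0.10, 2)
    return 0.0


class CalculateDiscountTest(unittest.TestCase):
    """Fast, isolated tests like these form the base of the test pyramid."""

    def test_discount_applies_at_threshold(self):
        self.assertEqual(calculate_discount(100), 10.0)

    def test_no_discount_below_threshold(self):
        self.assertEqual(calculate_discount(99.99), 0.0)


if __name__ == "__main__":
    unittest.main()
```

Tests at this level run in milliseconds and pinpoint failures precisely, which is why they tend to pay back the automation investment fastest.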

In this section, we assume you have automated Quadrant 1 unit and component tests in place, and are looking to automate your business-facing tests in Quadrants 2 and 3, or your Quadrant 4 technology-facing tests that critique the product. We’ll help you design a good strategy for building your automation resources.

Think about the skills and experience on your team. Who needs the automation, and why? What goals are you trying to achieve? Understanding some of these issues may affect your choice of tools and what effort you expend. There is a section on evaluating tools at the end of this chapter.

Automation is scary, especially if you’re starting from scratch, so where do we begin?

Where Does It Hurt the Most?

To figure out where to focus your automation efforts next, ask your team, “What’s the greatest area of pain?” or, for some teams, “What’s the greatest area of boredom?” Can you even get code deployed in order to test it? Do team members feel confident about changing the code, or do they lack any safety net of automated tests? Maybe your team members are more advanced, have mastered TDD, and have a full suite of unit tests. But they don’t have a good framework for specifying business-facing tests, or can’t quite get a handle on automating them. Perhaps you do have some GUI tests, but they’re extremely slow and are costing a lot to maintain.

Peril: Trying to Test Everything Manually

If you’re spending all your time retesting features that you’ve tested before, not getting to new features, and needing to add more and more testing, you’re suffering from a severe lack of test automation. This peril means that testers don’t have time to participate in design and implementation discussions, regression bugs may creep in unnoticed, testing can no longer keep up with development, and testers get stuck in a rut. Developers aren’t getting involved in the business-facing testing, and testers don’t have time to figure out a better way to solve the testing problems.

Your team can fix this by developing an automation strategy, as we describe in this chapter. The team starts designing for testability and chooses and implements appropriate automation tools. Testers get an opportunity to develop their technical skills.

Wherever it hurts the most, that’s the place to start your automation efforts. For example, if your team is struggling to even deliver deployable code, you need to implement an automated build process. Nothing’s worse than twiddling your thumbs while you wait for some code to test.

But, if performance puts the existence of your organization in danger, performance testing has to be the top priority. It’s back to understanding what problem you are trying to solve. Risk analysis is your friend here.

Chapter 18, “Coding and Testing,” has more information on a simple approach to risk analysis.

Janet’s Story

I worked on a team maintaining a legacy system; we needed to address some quality issues as well as add new features for our main customer. There were no automated unit or functional tests for the existing application, but we needed to refactor the code to address the quality issues. The team members decided to tackle it one piece at a time. As they chose each chunk of functionality to refactor, the programmers wrote unit tests, made sure they passed, and then rewrote the code until the tests passed again. At the end of the refactoring, they had testable, well-written code and the tests to go with it. The testers wrote the higher-level functional tests at the same time. Within a year, most of the poor-quality legacy code had been rewritten, and the team had achieved good test coverage just by tackling one chunk at a time.

—Janet
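The loop Janet describes, pinning down current behavior with tests and then rewriting until those tests pass again, can be sketched as a characterization test. This is a minimal illustration, not code from Janet’s project; `legacy_price` is a hypothetical stand-in for convoluted legacy code, and `refactored_price` for its rewrite:

```python
import unittest


def legacy_price(quantity, unit_price):
    # Hypothetical legacy code: roundabout, but its behavior is what
    # production currently depends on.
    total = 0.0
    for _ in range(quantity):
        total = total + unit_price
    return total


def refactored_price(quantity, unit_price):
    # The rewrite: simpler, and it must satisfy the same tests.
    return quantity * unit_price


class PriceCharacterizationTest(unittest.TestCase):
    """Captures observed legacy behavior before refactoring begins."""

    cases = [(0, 5.0), (3, 2.5), (10, 1.0)]

    def test_refactored_matches_legacy(self):
        # The safety net: the new code must agree with the old code
        # on every recorded case.
        for qty, price in self.cases:
            self.assertAlmostEqual(
                refactored_price(qty, price), legacy_price(qty, price)
            )


if __name__ == "__main__":
    unittest.main()
```

Writing the tests against the existing behavior first is what let the team in the story rewrite one chunk at a time without fear of silent regressions.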
