For the average person making their way in IT, I’m sure you’ve rarely seen a project with a budget approved for automation testing. Because sales is primarily concerned with closing the deal no matter what, the quoted project price is slashed in the blink of an eye by cutting the “nice-to-have” automation testing.
If we’re talking about a startup or a smaller project, this decision can be perfectly fine, but for any project lasting more than six months, it might not be. When an application supports multiple user scenarios, new feature development and change requests naturally increase the amount of regression testing. Skipping automation testing becomes a real concern when the project is put on an aggressive schedule with a staging environment already available to end users.
The automation testing market is expected to grow from $20.7 billion to $49.9 billion by 2026, at a CAGR of 19.2% over the forecast period. Even if we take the more modest forecast of 14.2% CAGR reported by Mordor Intelligence, it is still a definite and palpable increase. This rise is driven by the growing adoption of automated testing methods among enterprises and SMEs.
We can even see some new trends: smart automation testing, where your codebase is scanned for changes and only the tests related to those changes are executed; a gradual move towards helper frameworks that let you write codeless automation tests (ATs) by recording desired application scenarios; cloud-based cross-browser testing; and AT integration into the CI/CD loop.
Most of these trends gravitate towards some quality-of-life improvement, but only after you have finished writing your AT suite. I’d love to talk about the most essential step after writing automation tests, the integration of the testing cycle into the CI/CD pipeline, but we have to start things from the beginning.
Training of new automation testers
In our experience, onboarding new personnel to quickly write their first automation test (and from there on the rest of them) has never been a problem. The amount of time that an individual needs to get familiar with the framework and write their first test is the only variable.
The challenge was to get them to quickly grasp the concept of a good test.
Onboarding a person with a QA background is challenging from a software development perspective: they are familiar with testing concepts and approaches, but it is difficult to quickly teach them coding strategies and design patterns. Onboarding a developer, on the other hand, tends to be more difficult from a testing-theory perspective, especially if they have not been involved in projects requiring high unit test coverage.
Either knowledge gap can be conquered with time and effort, but as a rule, it is easier to teach good testing fundamentals and their application to a software engineer than to teach a QA specialist how to write good code. With that rule in mind, let’s quickly cover the fundamentals of good testing.
What is a good test?
Let’s start with a simpler question: what is a test?
“A unit test is a piece of code (usually a method) that invokes another piece of code and checks the correctness of some assumptions afterward. If the assumptions turn out to be wrong, the unit test has failed. A unit is a method or function.” - The Art of Unit Testing Second Edition
We can easily refine and adapt this general definition to automation testing.
Automation testing of a task invokes a piece of code interacting with the UI and checks one specific end result of that task. If the assumption turns out to be wrong, the automation test has failed. It is consistent in its results as long as the production code has not changed. It is readable, maintainable, and trustworthy.
I will highlight the important changes:
- Checking one specific end result of a task - We reject complex tests in favor of many simpler ones. Every test should have a single testing responsibility (see the sketch after this list).
- It's consistent in its result as long as the code has not changed - Also known as the definition of a stable test. Unstable/flaky tests will sometimes pass and sometimes fail and thus should be omitted from the test suite.
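To make the single-responsibility point concrete, here is a minimal sketch using JUnit 5 and Selenium WebDriver. Everything app-specific in it is an assumption on my part: the URL, the element IDs (`search-box`, `search-button`, `result-list`), and the imagined search page itself.

```java
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;

class SearchTests {
    private WebDriver driver;

    @BeforeEach
    void openSearchPage() {
        driver = new ChromeDriver();
        driver.get("https://example.com/search"); // hypothetical URL
    }

    @AfterEach
    void closeBrowser() {
        driver.quit();
    }

    private void searchFor(String query) {
        driver.findElement(By.id("search-box")).sendKeys(query);
        driver.findElement(By.id("search-button")).click();
    }

    // One end result per test: this one only cares that results appear.
    @Test
    void Search_WhenQueryMatches_Should_ShowResults() {
        searchFor("selenium");
        assertFalse(driver.findElements(By.cssSelector("#result-list li")).isEmpty());
    }

    // The second assumption gets its own test instead of piggybacking
    // on the assertions of the one above.
    @Test
    void Search_WhenQueryMatches_Should_KeepQueryInField() {
        searchFor("selenium");
        assertEquals("selenium",
                driver.findElement(By.id("search-box")).getAttribute("value"));
    }
}
```

If the result list breaks but the query field still works, the first test fails and the second keeps passing, which immediately narrows down where the problem is.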
The final three qualities each deserve a section of their own.
Readable Tests
A key trait of any good test is that it should be easy to read, and the intent of the test author should be clear. To achieve readability, the most important thing is to have a good test name, but it also helps to separate assertions from actions, to have good variable naming, and to eliminate magic values.
Roy Osherove, the author of The Art of Unit Testing, talks about three important pieces of information that always need to be present in the test name:
- The name of the feature being tested
- The scenario under which you are testing
- The expected behavior
If you want to test that entering an incorrect password on the login page causes an alert, use a name like `LoginPage_Password_Should_AlertIfInvalid()` or `LoginPage_When_UsingIncorrectPassword_Alert_Should_BeDisplayed()`.
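Here is a sketch of what the second of those tests might look like with Selenium WebDriver and JUnit 5. The URL and the element IDs (`username`, `password`, `login-button`, `login-alert`) are assumptions for illustration.

```java
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.jupiter.api.Assertions.assertTrue;

class LoginPageTests {

    @Test
    void LoginPage_When_UsingIncorrectPassword_Alert_Should_BeDisplayed() {
        WebDriver driver = new ChromeDriver();
        try {
            // Arrange: open the login page (hypothetical URL)
            driver.get("https://example.com/login");

            // Act: submit a deliberately incorrect password
            driver.findElement(By.id("username")).sendKeys("demo.user");
            driver.findElement(By.id("password")).sendKeys("wrong-password");
            driver.findElement(By.id("login-button")).click();

            // Assert: exactly one end result is checked
            assertTrue(driver.findElement(By.id("login-alert")).isDisplayed());
        } finally {
            driver.quit();
        }
    }
}
```

The actions and the assertion are visually separated, and the name alone tells you the feature, the scenario, and the expected behavior.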
In the context of automation testing, the most important magic values to banish are element locators.
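One way to banish them, sketched below with the same hypothetical IDs, is to pull every locator into a named constant kept in a single place:

```java
import org.openqa.selenium.By;

// Each locator lives in exactly one named constant instead of being
// repeated as a "magic" string throughout the tests.
final class LoginPageLocators {
    static final By USERNAME_FIELD = By.id("username");
    static final By PASSWORD_FIELD = By.id("password");
    static final By LOGIN_BUTTON = By.id("login-button");
    static final By INVALID_PASSWORD_ALERT = By.id("login-alert");

    private LoginPageLocators() {} // constants only, never instantiated
}
```

A call like `driver.findElement(LoginPageLocators.INVALID_PASSWORD_ALERT)` now tells the reader what is being located, and a changed ID means updating a single line.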
Trustworthy Tests
If you can’t trust your tests, you can’t run your tests
It’s as simple as that. Any test that sometimes passes and sometimes fails due to factors outside the test’s control should be excluded from your testing suite.
Once you make your tests part of your build or deploy pipeline, every time a developer sees a 100% pass rate, they should be happy and confident that their new piece of code works as expected and did not break anything that previously worked.
The best way to achieve trustworthiness is to write short, readable tests that focus on testing one specific thing. Troubleshooting a suspicious test is then straightforward: carefully debug the test to make sure you did not make any mistakes in the test itself, and if it still fails, good job, you have just identified a bug in the production code. If the test passes after the fix, make sure it consistently produces the same result before marking it as fixed.
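In my experience, a frequent culprit behind inconsistent results is timing: the test asserts before the UI has finished reacting. Here is a minimal sketch of the usual fix with Selenium’s explicit waits (the locator is again a hypothetical one):

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

class StableWaits {
    // Flaky approach (avoid): Thread.sleep(2000) guesses how long
    // rendering takes and fails whenever the environment is slower.
    // Stable approach: wait on the condition itself, up to a timeout.
    static void waitForLoginAlert(WebDriver driver) {
        new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.visibilityOfElementLocated(
                        By.id("login-alert"))); // hypothetical ID
    }
}
```

The wait returns as soon as the element is visible, so on a fast run it costs nothing, and on a slow run it holds out long enough to keep the result consistent.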
Maintainable Tests
In the field of automation testing, the main cause of test changes is UI changes, which means some of your locators are no longer correct. Sometimes this is unavoidable, but with proper communication with the front-end engineers, you can make sure it does not happen often.
In the case of a web app, use IDs to find page elements. If an ID is not available, make sure key elements carry specific classes that you can easily target; agreed-upon element classes should be preserved even through a front-end refactoring. If you have managed to restrain yourself from using magic values for element locators, then you probably already have them listed in a single place, where you can critically review your locator code and identify what can be improved.
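A common way to keep that single place structured is the Page Object pattern: tests talk to a class representing the page, so a UI change only ever touches that class. A minimal sketch, with the same hypothetical URL and IDs as before:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// All knowledge of the login page's structure lives here; the tests
// only call the methods below.
class LoginPage {
    private static final By USERNAME = By.id("username");
    private static final By PASSWORD = By.id("password");
    private static final By LOGIN_BUTTON = By.id("login-button");
    private static final By ALERT = By.id("login-alert");

    private final WebDriver driver;

    LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    LoginPage open() {
        driver.get("https://example.com/login"); // hypothetical URL
        return this;
    }

    LoginPage logInWith(String user, String password) {
        driver.findElement(USERNAME).sendKeys(user);
        driver.findElement(PASSWORD).sendKeys(password);
        driver.findElement(LOGIN_BUTTON).click();
        return this;
    }

    boolean alertIsDisplayed() {
        return driver.findElement(ALERT).isDisplayed();
    }
}
```

A test then reads as `new LoginPage(driver).open().logInWith("demo.user", "wrong-password")`, and renaming an element on the page touches this class alone, not every test that uses the login page.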
You can improve the maintainability of tests further with good coding practices in general, but that is something you should constantly research on your own, and it is not the topic of this article.
Conclusion
Readable tests are trustworthy because it is immediately apparent what they are doing. That is exactly what makes them maintainable. This means that readability is a key trait of a good test, even if that wasn’t apparent at first glance. When onboarding new automation testing engineers, you should ensure they have a good grasp of these fundamentals. It's easy to go along with the “agile” way of development and throw them into the fire to make them write their first test as soon as possible, but what good will it do if you end up with an unusable test suite that doesn’t help your development cycle in the end?
In the next article, I will cover the inclusion of tests in the CI/CD pipeline.