7 Common Errors While Automating Tests

By QArea Team on March 17, 2014

Senior management often assumes that automated testing is a sure way to cut costs, reduce testing effort, and speed up delivery. Automated tests can indeed provide rapid feedback on a system's health, but there are different approaches to test automation, and they require careful management.

Let's consider the most common errors IT companies make when incorporating test automation.

1. Comparing automated testing with manual testing

Automated tests cannot replace manual exploratory testing: achieving high quality and mitigating the risk of defects requires a combination of testing levels and types, and testing is more than a sequence of repeated actions. Mike Cohn's test automation pyramid captures this idea: investment in tests should be concentrated at the unit level and decrease toward the upper layers of the application.
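What "investment at the unit level" looks like in practice can be sketched with a minimal unit test. This is a hypothetical example in Python's pytest style; the discount function and its rule are invented for illustration:

```python
# Hypothetical business rule: orders over 100 get a 10% discount.
def apply_discount(total: float) -> float:
    """Return the order total after any applicable discount."""
    return total * 0.9 if total > 100 else total

# Unit-level tests like these are cheap to write and fast to run,
# which is why the pyramid puts most of the investment here.
def test_discount_applied_above_threshold():
    assert apply_discount(200) == 180

def test_no_discount_at_or_below_threshold():
    assert apply_discount(100) == 100
```

Hundreds of such tests run in seconds, which is exactly the feedback profile the base of the pyramid is meant to provide.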

2. Relying on commercial tools too much

Many commercial testing tools offer simple capture-and-replay features for automating manual test cases. This approach looks attractive, but in practice it encourages testing through the UI, which produces brittle, unmaintainable tests. Licensed tools also impose costs and restrict who can access the test cases, which adds overhead and hinders teamwork and collaboration. Storing test cases outside the version control system creates unnecessary complexity as well. Open source tools offer an alternative: they solve most automated testing problems and allow test cases to be kept in version control alongside the code.

3. Executing tests through the UI

Automated UI tests offer a high level of confidence, but they are expensive to build, fragile to maintain, and slow to execute. To encourage collaboration between developers and testers, increase test execution speed, and reduce implementation costs, tests should target the lowest level that is practical. Most test effort should therefore go into automated unit tests before functional, system, integration, and acceptance tests. UI-based tests remain appropriate when the UI itself is under test, or when there is no practical alternative.
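Testing below the UI means exercising the same logic a screen would, without driving a browser. A minimal sketch, with an invented service-layer validation function:

```python
# Hypothetical service-layer function: the same rule a signup form
# would enforce, but reachable without launching a browser.
def is_valid_username(name: str) -> bool:
    """A username must be 3-20 characters, alphanumeric only."""
    return 3 <= len(name) <= 20 and name.isalnum()

# Testing below the UI: milliseconds per case, no brittle selectors
# that break whenever the page layout changes.
def test_username_rules():
    assert is_valid_username("alice")
    assert not is_valid_username("ab")          # too short
    assert not is_valid_username("bad name!")   # illegal characters
```

A UI test would still be needed to confirm the form wires up to this logic, but it only needs to do so once, not for every rule.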

4. Avoiding collaboration in test creation

Defining test cases as executable specifications, collaboratively, ensures that everyone involved shares a full understanding of the actual requirements being developed and tested. Although this practice is mostly associated with unit testing, it is equally valuable for other types of testing, such as acceptance testing.
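One common shape for an executable specification is the given/when/then structure, which keeps the test readable for non-programmers on the team. A hypothetical sketch in plain Python, with invented amounts:

```python
# Hypothetical acceptance test written as an executable specification.
# The Given/When/Then comments mirror the wording the whole team
# (business, developers, testers) agreed on.
def test_withdrawal_reduces_balance():
    # Given an account with a balance of 100
    balance = 100
    # When the customer withdraws 30
    withdrawal = 30
    balance -= withdrawal
    # Then the remaining balance is 70
    assert balance == 70
```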

5. Poor automated test maintenance

The benefits of test automation, such as low cost and fast feedback, are only realized when tests are executed regularly: regular execution highlights failures and provides continuous feedback on the system's health. If automated tests are run manually rather than through a continuous integration system, there is a significant risk they will be run irregularly or not at all. That is why automated tests should be executed by a continuous integration system.
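As a sketch of what "executed through continuous integration" means, here is a hypothetical workflow in GitHub Actions style (the job layout, Python version, and test directory are all assumptions):

```yaml
# Hypothetical CI workflow: every push triggers the automated suite,
# so failures surface immediately instead of waiting for a manual run.
name: tests
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest
      - run: pytest tests/
```

Any CI system will do; the essential property is that the suite runs automatically on every change, not at someone's discretion.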

6. Frustration with unreliable tests

Unreliable, brittle tests are the main reason teams lose confidence in automated tests and start ignoring them. Once that confidence is lost, much of the value of the initial investment in automation is lost with it. Fixing failing tests and resolving the causes of brittleness should therefore be the first priority, so that false positives are eliminated.
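A typical source of brittleness is a fixed sleep that races the system under test: too short and the test fails intermittently, too long and the suite crawls. Polling with a bounded timeout is one common fix. A minimal sketch; the helper name and usage are invented:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns True or `timeout` seconds pass.

    Replacing fixed sleeps with bounded polling removes a common source
    of false positives: the test waits exactly as long as it needs to,
    but never forever.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Instead of: time.sleep(2); assert job.done   (flaky)
# Write:      assert wait_until(lambda: job.done, timeout=10)
```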

7. Hoping to cut costs by automation

Testing tool vendors quite often base their ROI calculations solely on labor savings. Such analysis is unreliable, because it undervalues the importance of testing itself, the ongoing maintenance costs, and the investment needed to establish automation practices.
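The gap between a labor-savings-only calculation and one that includes maintenance can be the difference between a positive and a negative first-year return. A toy comparison with invented figures:

```python
# Toy ROI comparison with invented numbers (all in one currency unit).
manual_labor_saved_per_year = 50_000   # what a vendor's ROI typically counts
tooling_and_setup = 30_000             # initial automation investment
maintenance_per_year = 25_000          # keeping the suite reliable over time

# Vendor-style ROI: labor savings vs. initial cost only.
naive_first_year_roi = manual_labor_saved_per_year - tooling_and_setup

# More honest: include the ongoing maintenance of the suite.
real_first_year_roi = (manual_labor_saved_per_year
                       - tooling_and_setup
                       - maintenance_per_year)
```

With these numbers the naive figure is +20,000 while the honest one is -5,000; the point is not the specific values but that omitting maintenance flips the picture.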

These errors and misconceptions among IT managers demonstrate that automated testing has its pitfalls, and that it requires informed management and skilled manual testing alongside it.