Wednesday, 26 January 2011

UI Test Automation Best Practices


Below are some important rules I learned while working on UI test automation tasks that can make your tests more reliable and efficient. They may be especially useful if you are working on a data-driven test with many iterations and a very long execution time.

1. Test scenarios and data ordering

If you have ever been involved in testing you should be familiar with the concept of test scenarios. Every good UI test (automated or not) is based on a scenario. A test scenario should cover all possible situations that need to be tested. In test automation, the performance of the tests depends on how well the scenario is designed. Avoiding repetition of the same operations (logging off and on, criteria selection, etc.) is the key challenge. This usually requires some additional logic, but the amount of execution time saved is worth it. When working on a scenario for a data-driven test it is also necessary to define the order of the test data to ensure maximum efficiency.

Example:
You need to test a form with 2 cascading drop down lists (i.e. the content of the second list depends on the value selected in the first one). The first list contains countries and the dependent list displays cities in those countries. The form submission should only succeed for one specific city per user. Here is how your sample test data could look:

user | password  | Country | City        | Success
John | bigmacXXL | USA     | New York    | 1
Hans | Bratwurst | Germany | Berlin      | 1
Hans | Bratwurst | USA     | New York    | 0
John | bigmacXXL | USA     | Los Angeles | 0
Hans | Bratwurst | USA     | Los Angeles | 0
John | bigmacXXL | Germany | Berlin      | 0
John | bigmacXXL | Germany | Munich      | 0
Hans | Bratwurst | Germany | Munich     | 0

A very basic test would log in the user, set the current country-city combination, check the result and log the user off. Some optimizations to consider here (a code sketch follows the list):
  1. Move the log-off step to the beginning. Perform log-off and log-in only if the current user differs from the one in the previous iteration. To ensure maximum efficiency, order the data set by username.
  2. Before selecting a country ensure it's not already selected. Also, add a secondary ordering (by country) to minimize reloads of the city drop down list.
  3. If a successful city selection redirects to another screen while a failed selection simply displays an error message on the same screen, consider ordering the test data set by expected result so the successful submission happens at the very end of the tests for the current country and user. This minimizes the number of screen redirections.
  4. ...
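
To make the idea concrete, here is a minimal Python sketch of such an ordered loop. It isn't tied to any particular testing framework; log_in, log_off, select_country, select_city and check_result are hypothetical stand-ins for whatever your tool provides:

    # Hypothetical stand-ins for the real framework actions.
    def log_in(user, password): print("log in as", user)
    def log_off(): print("log off")
    def select_country(country): print("select country", country)
    def select_city(city): print("select city", city)
    def check_result(expected): print("expect success =", expected)

    test_data = [
        # (user, password, country, city, expected success)
        ("John", "bigmacXXL", "USA", "New York", 1),
        ("Hans", "Bratwurst", "Germany", "Berlin", 1),
        ("Hans", "Bratwurst", "USA", "New York", 0),
        ("John", "bigmacXXL", "USA", "Los Angeles", 0),
        ("Hans", "Bratwurst", "USA", "Los Angeles", 0),
        ("John", "bigmacXXL", "Germany", "Berlin", 0),
        ("John", "bigmacXXL", "Germany", "Munich", 0),
        ("Hans", "Bratwurst", "Germany", "Munich", 0),
    ]

    # Order by user, then country, then expected result, so log-ins and
    # country reloads are minimized and the redirecting (successful)
    # submission comes last within each user/country group.
    test_data.sort(key=lambda r: (r[0], r[2], r[4]))

    current_user, current_country = None, None
    for user, password, country, city, success in test_data:
        if user != current_user:
            if current_user is not None:
                log_off()              # only when the user changes
            log_in(user, password)
            current_user, current_country = user, None
        if country != current_country:
            select_country(country)    # only when the country changes
            current_country = country
        select_city(city)
        check_result(success)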

2. Timeouts

When searching for an element or waiting for something to happen you need to define timeout values. This ensures that your test ends in a reasonable time even if something goes wrong. If a single iteration reaches any of the timeouts it should be marked as failed and the test should continue with the next iteration.

Timeout values are usually hard to define at the beginning. They depend on many factors: application type, machine performance, bandwidth, etc. Timeouts are usually adjusted after the first couple of runs on a bigger data set.

The default timeout values that you use when designing your test should be a bit higher than the required minimum. If you see after the first test run that too many iterations failed because of timeouts, increase them slightly and re-run the test. The perfect situation is when 100% of tests pass and the whole run doesn't take too long. This may be hard to achieve if the tested app and testing environment are not stable enough.
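
A minimal sketch of how this could look in Python: the timeouts live in one place so they are easy to tune between runs (the names and defaults below are made up for illustration), and an iteration that hits a timeout is marked as failed without stopping the whole run:

    import os

    # Keep all timeouts in one place so they can be tuned after the
    # first runs; environment variables allow per-machine overrides.
    TIMEOUTS = {
        "page_load": float(os.environ.get("TIMEOUT_PAGE_LOAD", "30")),
        "element":   float(os.environ.get("TIMEOUT_ELEMENT", "10")),
    }

    def run_all(iterations):
        """iterations: pairs of (test_id, callable taking the timeouts)."""
        failed = []
        for test_id, run in iterations:
            try:
                run(TIMEOUTS)           # the iteration body raises on timeout
            except TimeoutError:
                failed.append(test_id)  # mark as failed, carry on
        return failed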

3. Check for existence rather than non-existence

Whenever you are thinking about adding a test step that checks that an element does not exist, consider finding an alternative existence check.

Example:
You want to check that after a button is clicked on a web page the invoked action completes successfully. You can either verify the non-existence of an error message or the existence of a success confirmation. Both checks require answering some tricky timeout questions (e.g. how long would you wait for the message to appear?). However, verifying non-existence has a serious performance issue: each successful test run would wait for the whole allowed time limit, whereas the existence check completes as soon as the success confirmation appears.
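
For illustration, here is how the existence check could look with Selenium WebDriver in Python (my choice of tool and the element IDs are assumptions, not something prescribed by this post):

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Firefox()
    driver.get("http://example.com/form")         # placeholder URL
    driver.find_element(By.ID, "submit").click()  # hypothetical button id

    # Existence check: returns as soon as the confirmation shows up.
    WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "success-message")))

    # The non-existence alternative ("no error message within 10 s")
    # would wait the full 10 seconds on every successful run.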

4. Locate elements wisely

When you create a test there are several ways to locate an element that you need to perform an action on. Some older tools only allow you to move the mouse to a location defined by coordinates, e.g. move the mouse 100px left and 50px down from the edge of the screen or browser window. This is not reliable, as the coordinates depend on screen resolution, browser window size, etc. Current testing tools allow you to locate the desired element using different approaches.

Example:
If you're testing a web page you can identify an element in the HTML DOM by tag name or by its attributes (like id, name, etc.). You can do the same with apps that use XAML (like Silverlight). I don't have experience with testing regular desktop apps, but I'm quite sure there is a way to avoid coordinates.
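
A short Selenium sketch of the difference (again an assumed toolchain; the selectors are hypothetical):

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.action_chains import ActionChains

    driver = webdriver.Firefox()
    driver.get("http://example.com/form")   # placeholder URL

    # Fragile: breaks with a different resolution or window size.
    # ActionChains(driver).move_by_offset(100, 50).click().perform()

    # Robust: locate by stable DOM attributes instead.
    country = driver.find_element(By.ID, "country")
    city = driver.find_element(By.NAME, "city")
    submit = driver.find_element(By.CSS_SELECTOR, "button[type='submit']")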

5. Avoid frequent locating

This one is related to the previous point. Even if you use a reliable method to find an element in the GUI, don't forget about efficiency. Always try to optimize your search for an element to save some precious time. If possible, keep references to the elements you interact with often in memory, to avoid locating the same element multiple times.

Example:
Let's reuse the example with cascading drop down lists described above. You can locate the first one using any reliable technique (e.g. an HTML DOM search). The second one will probably be its sibling, or they will share a parent indirectly. Use this to locate the second DDL rather than searching through the entire DOM again. Once you have both in memory, execute actions and checks on them without any additional locating.
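
In Selenium terms the idea could look like this (the ids and markup are assumptions); note that cached references go stale after a page reload, so this works best between reloads:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    driver.get("http://example.com/form")   # placeholder URL

    # Search the full DOM once for the first list...
    country_ddl = driver.find_element(By.ID, "country")
    # ...then find the second one relative to it instead of searching
    # the whole document again.
    city_ddl = country_ddl.find_element(
        By.XPATH, "following-sibling::select[1]")

    # Keep both references and reuse them for all further actions.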

6. Avoid pauses

Fixed-length pauses will always affect the performance of your test. A tester may think about using a pause when a test step needs to wait for something to happen before it can execute. An alternative to a pause is a "wait-step". A wait-step waits for a condition to be fulfilled. The advantage of this approach is that it only takes as much time as required. It may also be more reliable, because it can wait longer than the pause you would have specified if something takes longer than usual.

Example:
The tested UI contains an animation that normally takes around 2 seconds, but under some circumstances (e.g. a slow machine, low bandwidth) can take a bit longer. When using pauses you'd probably define a 3-second one to have some reserve. This would cause each test run to take 1 second longer than required (under normal circumstances). Also, if the animation is unusually slow and 3 seconds is not enough, the following test step may fail.

You can eliminate both threats by using a wait-step instead. The challenge here is to define an appropriate condition. Let's say our animation ends with displaying an image on the screen. As the wait condition you could use image visibility, i.e. wait until the image is visible.
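
With Selenium (an assumed tool here) the two approaches compare like this; the image id is hypothetical:

    import time
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Firefox()
    driver.get("http://example.com/animated")   # placeholder URL

    # Pause: always costs 3 seconds and still fails on very slow runs.
    # time.sleep(3)

    # Wait-step: returns as soon as the image is visible, yet tolerates
    # slow runs up to the 10-second ceiling.
    WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "animation-result")))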

7. Hooks in tested apps

It's a commonly accepted practice to include some "hooks" for UI tests in the application being tested. Hooks are pieces of code that help the testing framework invoke certain actions. In theory, no hooks should be required to complete the tests. A UI test should do exactly what an end user would do, e.g. move the mouse cursor over the button and click it instead of invoking the button's click action in code. In practice, there may be circumstances when using hooks is justified.

Example:
I've recently been working on UI tests for a Silverlight app. One of the screens contained a world map for region selection. The regions were not separate GUI elements, so it was hard to select an appropriate one in my UI test. The application itself recognized which part of the map was clicked based on some twisted pixel-colour logic. With no hooks available I would have had to record the mouse click for each available region using coordinates, which is not good at all (see the 'Locate elements wisely' point). In addition, defining a new region in the app would require adding new coordinates to the test.

Instead, I asked the developers to include an additional method in the code that let me select a region by name. My testing framework supports executing public methods on Silverlight objects. This was just 2 lines of code and didn't introduce any threat. The method did exactly what a mouse click on a region would have done. Also, the region selection wasn't really in scope of my UI tests, just a step required to move to the screen that needed to be tested.
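
I can't reproduce the Silverlight setup here, but the same idea transposed to a web app might look like this with Selenium: the developers expose a small test-only entry point (the window.testHooks object below is entirely hypothetical) and the test calls it instead of clicking coordinates:

    from selenium import webdriver

    driver = webdriver.Firefox()
    driver.get("http://example.com/map")   # placeholder URL

    # Hypothetical hook exposed by the developers for tests: selects a
    # map region by name, replacing a fragile coordinate-based click.
    driver.execute_script(
        "window.testHooks.selectRegion(arguments[0]);", "Germany")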

8. Dynamic URLs

If you are working on tests for a web application it is useful to make the URL of the tested app configurable. This allows testing different builds (dev, system-test, live, etc.) with the same test script. If your test is data-driven you can specify the URL in the data source, as you would with any other test data.
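
One simple way to do this in Python is an environment variable with a sensible default (the variable name and URLs are illustrative):

    import os
    from selenium import webdriver

    # Same script, different builds: override the base URL per run, e.g.
    #   APP_BASE_URL=http://system-test.example.com python tests.py
    BASE_URL = os.environ.get("APP_BASE_URL", "http://dev.example.com")

    driver = webdriver.Firefox()
    driver.get(BASE_URL + "/login")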

9. Recovery

If your tests take a long time to complete (e.g. data-driven tests with many iterations) it's good practice to implement a recovery mechanism. Remember that it is always possible that the tested app or browser window closes unexpectedly. You don't want to find out that the tests you left running for the whole night stopped after 1 hour because the app crashed. If your testing framework allows it, you can check at the beginning of each iteration whether the tested app/webpage is available and restart it if required.
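
A sketch of such a check with Selenium (assumed toolchain; the probe relies on any WebDriver call failing once the browser is gone):

    from selenium import webdriver
    from selenium.common.exceptions import WebDriverException

    def ensure_browser(driver, url):
        """Return a live driver, restarting the browser if it has died."""
        try:
            if driver is not None:
                driver.current_url   # any call fails if the browser is gone
                return driver
        except WebDriverException:
            pass                     # fall through and restart
        driver = webdriver.Firefox()
        driver.get(url)
        return driver

    # At the beginning of each iteration:
    #   driver = ensure_browser(driver, BASE_URL)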

10. Logging

Log the results of your tests so you can easily identify the reasons for any failures. If you are designing a data-driven test it is very useful to have a test summary at the end. Another useful practice is taking browser or desktop screenshots on failure. A screenshot can tell you what went wrong much faster than complex exception info.

Example:
In the summary part of my data-driven UI tests I always print a comma-separated list of the IDs of failed tests. After such a test completes I can easily copy-paste the IDs into my SQL query that retrieves the test data and re-run only the failed tests.
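
Put together, the failure handling could look like this (Selenium again as the assumed tool; run_iteration and the inline data rows stand in for the actual test body and data source):

    from selenium import webdriver

    def run_iteration(driver, row):
        pass   # hypothetical stand-in for the actual iteration body

    driver = webdriver.Firefox()
    failed_ids = []

    test_rows = [(1, "row one"), (2, "row two")]   # stand-in data source
    for test_id, row in test_rows:
        try:
            run_iteration(driver, row)
        except Exception:
            failed_ids.append(test_id)
            # A screenshot often explains a failure faster than a trace.
            driver.save_screenshot("failure_%d.png" % test_id)

    # Comma-separated IDs, ready to paste into a SQL WHERE ... IN (...).
    print("Failed IDs: " + ",".join(str(i) for i in failed_ids))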

11. Success-oriented tests

If you are creating a test that will be executed multiple times with different data, remember that for a healthy application and test dataset it should have a high pass rate. By "pass rate" I mean the percentage of passed iterations/runs. Very often, when working with an incomplete target application or test data, my tests initially have a low pass rate and take a very long time to complete. I'm then tempted to update the test so it performs faster under the current circumstances. Rather than doing that, you should focus on correcting your test dataset or on getting the developers to improve the target app (e.g. by fixing bugs). Of course, introducing tweaks to your test is justified if they will also improve performance with a complete target app and test dataset.

12. Further Reading

If you're interested in UI Test Automation you can also see my other posts:
