Sunday 30 January 2011

JQueryUI tabs covering custom menu

On one of the websites I've recently been working on, I have a custom expandable menu built with JavaScript. On the same page I use jQuery UI tabs. The menu is placed directly over the tabs container and its items are implemented as list elements (<LI> tags) styled appropriately. The problem was that when the menu was expanded, some menu items were covered by the tabs, so it looked like this:

[Screenshot: menu items covered by the jQuery UI tabs]
Defining a high z-index value for the elements that were covered by the tabs (i.e. for the list items) didn't help. While searching for the correct solution I saw multiple complaints about very similar behavior, so I decided to post my solution.

Solution
If you encounter a similar problem, the solution is quite simple - set the high z-index value not for the menu items but for their parent container! In my case it was the entire Custom Menu div. If it doesn't work with the direct parent, work your way up the HTML DOM to find the right one.
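A minimal sketch of the fix in jQuery (the container selector is hypothetical). Note that z-index only takes effect on positioned elements, so the container may also need a position:

    // Raise the whole menu container, not the individual <li> items.
    // z-index only applies to positioned elements, hence the position too.
    $('#customMenu').css({ position: 'relative', zIndex: 1000 });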

Wednesday 26 January 2011

UI Test Automation Best Practices


Below are some important rules I learned while working on UI test automation tasks that can make your tests more reliable and efficient. They may be especially useful if you are working on a data-driven test with many iterations, where the execution time can be very long.

1. Test scenarios and data ordering

If you have ever been involved in testing, you should be familiar with the concept of test scenarios. Every good UI test (automated or not) is based on a scenario. A test scenario should cover all the situations that need to be tested. In test automation, the performance of the tests depends on how well the scenario is designed. Avoiding repetition of the same operations (logging off/on, criteria selection etc.) is the key challenge. This usually requires some additional logic, but the amount of execution time saved is worth it. When working on a scenario for a data-driven test it is also necessary to define the order of the test data to ensure maximum efficiency.

Example:
You need to test a form with 2 cascading drop-down lists (i.e. the content of the 2nd list depends on the value selected in the 1st one). The first list contains countries and the dependent list displays cities in those countries. The form submission should only succeed for 1 specific city for the current user. Here is how your sample test data could look:

    User   Password    Country   City         Success
    John   bigmacXXL   USA       New York     1
    Hans   Bratwurst   Germany   Berlin       1
    Hans   Bratwurst   USA       New York     0
    John   bigmacXXL   USA       Los Angeles  0
    Hans   Bratwurst   USA       Los Angeles  0
    John   bigmacXXL   Germany   Berlin       0
    John   bigmacXXL   Germany   Munich       0
    Hans   Bratwurst   Germany   Munich       0

A very basic test would log in the user, set the current country-city combination, check the result and log the user off. Some optimizations to consider here (a code sketch follows the list):
  1. Move the log-off step to the beginning. Perform log-off and log-in only if the current user differs from the one in the previous iteration. To ensure maximum efficiency, order the data set by username.
  2. Before selecting a country, ensure it's not already selected. Also, add secondary ordering (by country) to ensure a minimum of reloads of the city DDL.
  3. If a successful city selection redirects to another screen and a failed selection simply displays an error message on the same screen, consider ordering the test data set by test result, so the successful submission happens at the very end of the tests for the current city and user. This minimizes the number of screen redirections.
  4. ...
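Here is a sketch of those optimizations in plain JavaScript. The helpers (logIn, logOff, selectCountry, selectCity, submitAndVerify) are hypothetical; the point is the ordering of the data set and the "only act when something changed" checks:

    // Order by user, then country, with successful submissions last.
    const ordered = [...testData].sort((a, b) =>
      a.user.localeCompare(b.user) ||
      a.country.localeCompare(b.country) ||
      (a.success - b.success));

    // Inside an async test runner:
    let currentUser = null, currentCountry = null;
    for (const row of ordered) {
      if (row.user !== currentUser) {        // log off/on only on user change
        if (currentUser !== null) await logOff();
        await logIn(row.user, row.password);
        currentUser = row.user;
      }
      if (row.country !== currentCountry) {  // reload the city DDL only when needed
        await selectCountry(row.country);
        currentCountry = row.country;
      }
      await selectCity(row.city);
      await submitAndVerify(row.success);
    }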

2. Timeouts

When searching for an element or waiting for something to happen, you need to define timeout values. This ensures that your test ends in a reasonable time even if something goes wrong. If a single iteration reaches any of the timeouts, it should be marked as failed and the test should continue with the next iteration.

Timeout values are usually hard to define at the beginning. They depend on many factors: application type, machine performance, bandwidth etc. Timeouts are usually adjusted after the first couple of runs on a bigger data set.

The default timeout values that you use when designing your test should be a bit higher than the required minimum. If after the first test run you see that too many iterations failed because of timeouts, increase them slightly and re-run the test. The perfect situation is when 100% of tests pass and the whole run doesn't take too long. This may be hard to achieve if the tested app and the testing environment are not stable enough.
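As an illustration, here is what a timeout-bounded iteration might look like with the selenium-webdriver Node bindings (not the tool discussed in these posts; the element id is hypothetical):

    const { Builder, By, until } = require('selenium-webdriver');

    // One iteration: a timeout marks it as failed instead of hanging the run.
    async function runIteration(driver, row) {
      try {
        // Wait at most 5 seconds for the result element to appear.
        await driver.wait(until.elementLocated(By.id('result')), 5000);
        return { id: row.id, passed: true };
      } catch (e) {
        // A TimeoutError fails only this iteration; the loop moves on.
        return { id: row.id, passed: false, error: e.message };
      }
    }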

3. Check for existence rather than non-existence

Whenever you are thinking about adding a test step that checks that an element does not exist, consider finding an alternative existence check.

Example:
You want to check that after a button is clicked on a web page, the invoked action completes successfully. You can either verify the non-existence of an error message or the existence of a success confirmation. Both checks require answering some tricky timeout questions (e.g. how long should you wait for the message to appear?). However, verifying non-existence has a serious performance cost: each successful test run waits for the whole allowed time limit, whereas the existence check completes as soon as the success confirmation appears.
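With the same selenium-webdriver setup as above (the selector is hypothetical), the existence check looks like this:

    // Existence check: returns the moment the confirmation appears.
    await driver.wait(until.elementLocated(By.css('.success-message')), 5000);
    // The non-existence alternative would wait the full 5 seconds on
    // every successful run before concluding that no error appeared.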

4. Locate elements wisely

When you create a test there are several ways to locate an element that you need to perform an action on. Some older tools only allow you to move the mouse to a location defined by coordinates, e.g. move the mouse 100px left and 50px down from the edge of the screen or browser window. This is not reliable, as the coordinates depend on screen resolution, browser window size etc. Current testing tools let you locate the desired element using different approaches.

Example:
If you're testing a web page, you can identify an element in the HTML DOM by tag name or by its attributes (like id, name etc.). You can do the same with apps that use XAML (like Silverlight). I don't have experience with testing regular desktop apps, but I'm quite sure there is a way to avoid coordinates.
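For example, with selenium-webdriver (the selectors are hypothetical) an element can be located by id, name or a CSS selector instead of coordinates:

    const country = await driver.findElement(By.id('country'));
    const submit = await driver.findElement(By.css('form#search button[type=submit]'));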

5. Avoid repeated locating

This one is related to the previous point. Even if you use a reliable method to find an element in the GUI, don't forget about efficiency. Always try to optimize your search for an element to save some precious time. If possible, keep the elements that you often interact with in memory to avoid locating the same element multiple times.

Example:
Let's reuse the example with cascading drop-down lists described above. You can locate the first one using any reliable technique (e.g. an HTML DOM search). The second one will probably be its sibling, or they will share a parent indirectly. Use this to locate the 2nd DDL rather than searching through the entire DOM again. Once you have them in memory, execute actions and checks on them without any additional locating.
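A sketch of that relative lookup, again with selenium-webdriver (the sibling structure is an assumption). Bear in mind that a cached reference goes stale if the page re-renders, so re-locate after navigation:

    // Locate the country list once, then reach the city list relative to it.
    const country = await driver.findElement(By.id('country'));
    const city = await country.findElement(By.xpath('following-sibling::select'));

    // Reuse the held references; no repeated full-DOM searches.
    await country.findElement(By.xpath(".//option[. = 'USA']")).click();
    await city.findElement(By.xpath(".//option[. = 'New York']")).click();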

6. Avoid pauses

Fixed-length pauses will always affect the performance of your test. A tester may think about using a pause when a test step needs to wait for something to happen before it can execute. An alternative to a pause is a "wait-step", which waits for a condition to be fulfilled. The advantage of this approach is that it only takes as much time as required. It can also be more reliable: if something takes longer than usual, it keeps waiting beyond the point where a fixed pause would have run out.

Example:
The tested UI contains an animation that normally takes around 2 seconds, but under some circumstances (e.g. a slow machine, low bandwidth) it can take a bit longer. When using pauses you'd probably define a 3-second one to have some reserve. This causes each test run to take 1 second longer than required (under normal circumstances). And if the animation is unusually slow and 3 seconds is not enough, the following test step may fail.

You can eliminate those threats by using a wait-step instead. The challenge here is to define an appropriate condition. Let's say our animation ends by displaying an image on the screen. As a wait condition you could use the image's visibility, i.e. wait until the image is visible.
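Side by side in selenium-webdriver, assuming the image has a (hypothetical) id of result-img:

    // Pause (bad): always costs 3 seconds, and still fails on a 4-second run.
    // await driver.sleep(3000);

    // Wait-step (better): returns as soon as the image is visible,
    // yet tolerates up to 10 seconds on a slow run.
    const img = await driver.wait(until.elementLocated(By.id('result-img')), 10000);
    await driver.wait(until.elementIsVisible(img), 10000);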

7. Hooks in tested apps

It's a commonly accepted practice to include some "hooks" for UI tests in the application being tested. Hooks are pieces of code that help the testing framework invoke certain actions. In theory, none of the hooks should be required to complete the tests. A UI test should do exactly what an end user would do, e.g. move the mouse cursor over the button and click it, instead of invoking the button's click action in code. In practice, there are circumstances where using hooks is justified.

Example:
I've recently been working on UI tests for a Silverlight app. One of the screens contained a world map for region selection. The regions were not separate GUI elements, so it was hard to select the appropriate one from my UI test. The application itself recognised which part of the map was clicked based on some twisted pixel-colour logic. With no hooks available, I would have had to record a coordinate-based mouse click for each region, which is not good at all (see the 'Locate elements wisely' point). In addition, defining a new region in the app would require adding new coordinates to the test.

Instead, I asked the developers to include an additional method in the code that allows region selection by name. My testing framework supports executing public methods on Silverlight objects. This was just 2 lines of code and didn't introduce any risk: the method does exactly what a mouse click on a region would do. Also, the region selection wasn't really in the scope of my UI tests, just a step required to reach the screen that needed to be tested.
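The Silverlight specifics aside, the same idea in a plain web app might look like this hypothetical window-level hook, exposed only in test builds and invoked from the test via executeScript:

    // In the app (test builds only):
    window.__testHooks = {
      selectRegion: function (name) { mapViewModel.selectRegion(name); } // hypothetical
    };

    // In the test:
    await driver.executeScript('window.__testHooks.selectRegion(arguments[0]);', 'EMEA');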

8. Dynamic URLs

If you are working on tests for a web application, it is useful to make the URL of the tested app configurable. This allows testing different builds (dev, system test, live, etc.) with the same test script. If your test is data-driven, you can specify the URL in the data source, as you would with any other test data.
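A minimal sketch, assuming a hypothetical TEST_BASE_URL environment variable:

    // Base URL comes from configuration, with a dev default.
    const baseUrl = process.env.TEST_BASE_URL || 'http://localhost:8080';
    await driver.get(baseUrl + '/login');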

9. Recovery

If your tests take a long time to complete (e.g. data-driven tests with many iterations), it's good practice to implement a recovery mechanism. Remember that it is always possible that the tested app or the browser window closes unexpectedly. You don't want to find out that the tests you left running overnight stopped after an hour because the app crashed. If your testing framework allows it, check at the beginning of each iteration whether the tested app/webpage is available, and restart it if required.
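The shape of such a check might be as follows (the restart logic is framework-specific; this is only a sketch):

    // At the start of each iteration: make sure the browser session is alive.
    async function ensureAppAvailable(driver, baseUrl) {
      try {
        await driver.getTitle();               // throws if the session died
      } catch (e) {
        driver = await new Builder().forBrowser('chrome').build();
        await driver.get(baseUrl);             // restart and reload the app
      }
      return driver;
    }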

10. Logging

Log the results of your tests so you can easily identify the reasons for any failures. If you are designing a data-driven test, it is very useful to have a test summary at the end. Another useful practice is taking browser or desktop screenshots on failure. A screenshot can tell you what went wrong much faster than a complex exception message.

Example:
In the summary part of my data-driven UI tests I always print a comma-separated list of the IDs of failed tests. After such a test completes, I can easily copy-paste the IDs into the SQL query that retrieves my test data and re-run only the failed tests.
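A sketch of both practices with selenium-webdriver (file names and ids are illustrative):

    const fs = require('fs');

    const failedIds = [];
    async function reportFailure(driver, id) {
      failedIds.push(id);
      const png = await driver.takeScreenshot();        // base64-encoded PNG
      fs.writeFileSync('failure-' + id + '.png', png, 'base64');
    }

    // After the run: a copy-paste-friendly summary.
    console.log('Failed IDs: ' + failedIds.join(','));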

11. Success-oriented tests

If you are creating a test that will be executed multiple times with different data, remember that for a healthy application and test dataset it should have a high pass rate. By "pass rate" I mean the percentage of passed iterations/runs. Very often, when working with an incomplete target application or test data, my tests initially have a low pass rate and take a very long time to complete. I'm then tempted to update the test so it performs faster under the current circumstances. Rather than doing that, you should focus on correcting your test dataset or getting the developers to improve the target app (e.g. by fixing bugs). Of course, introducing tweaks to your test is justified if they will also improve performance once the target app and test dataset are complete.

12. Further Reading

If you're interested in UI Test Automation, you can also see my other posts below:

Saturday 22 January 2011

What is UI Test Automation about?

In my last post I reviewed a tool for UI test automation for Silverlight. For those of you who have never worked on UI test automation before, I decided to explain what it's actually about.

The goal is to create a set of tests that simulate user interactions with the interface. It's not only about manual actions like mouse clicks, but also about visual verification of the expected result. Example: if a tester clicks a button and visually checks that it caused a message to appear, you'd need to create at least 2 steps for that (covering the click and the message check).

The tool I described earlier this week offers an intuitive test recorder that integrates with Internet Explorer. It simply follows the user's interactions with the webpage and records each manual action as a graphical step. The visual verification steps need to be added manually. This part requires more caution, as it may be crucial to the test results. A human would immediately spot an error message appearing on the screen; an automated test will only mark the test as failed if it contains an appropriate verification step. Obviously, the richer the app, the trickier it gets. A lot of visual effects (animations, popups, drag & drop) can cause you a serious headache ;)

Having a recorder available makes the creation of basic tests much easier. Recorded actions are presented as graphical steps in your test project. They can be reordered, reused or combined with other elements (e.g. with if/else logic). More advanced tools offer data binding of the steps without writing any code. Example: let's say your test fills in a form and submits it. While recording you provided some data, but you would like to retest the scenario using different data sets. If you data-bind the test, you'll be able to re-run it automatically for each dataset available in the source. Available data sources differ between tools, but the most popular ones are databases, Excel spreadsheets and XML files.
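Expressed in plain code, the data-binding idea is just one recorded scenario parameterized over many rows (the helper and the rows are hypothetical):

    const datasets = [
      { user: 'John', country: 'USA', city: 'New York' },
      { user: 'Hans', country: 'Germany', city: 'Berlin' },
    ];
    for (const data of datasets) {
      await fillAndSubmitForm(driver, data);  // the recorded scenario, re-run per row
    }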

Although the recorder and graphical steps make it easy to start with basic tests, I found myself creating most of my tests in code. It is possible to convert each graphical step into a coded step as well. This gives you more control over the test. Some functionality is usually only available via code, because it is almost impossible to implement all the possibilities offered by a programming language in a graphical tool. I can't imagine creating a complex test relying only on the options offered by graphical steps. This may be a serious problem for testers with no programming experience.

In my next post I'll try to provide some tips & tricks that I learned while working on UI Test Automation.

PS. The provided examples are usually related to web applications. This is because I mainly work on such apps. However, all the rules mentioned above also apply to desktop applications.

Wednesday 19 January 2011

UI Tests Automation for Silverlight

I'm currently working on a UI test automation task for a Silverlight interface. There are not many tools available for that, so we decided to evaluate the most popular one, i.e. Telerik's WebUI Test Studio. I chose the Developer Edition, as it integrates easily with Visual Studio. For testers, managers etc. there is also a standalone version available.

After a few days of playing with it, I can already say it's quite powerful. Once you get familiar with it and learn a few tricks, you can easily develop complex UI tests that will save your team a lot of time. Below is a short summary of its pros & cons:

Pros:
  • Supports Silverlight testing

  • Intuitive test recorder

  • For basic tests no coding is required (including logic: if/else, loops)

  • Integrated data access (spreadsheets, csv, database, ...) for data driven tests

  • Multiple video tutorials available

  • Full integration with Telerik's RAD controls

  • Strong community

  • Fast support

Cons:
  • Documentation doesn't cover the entire functionality

  • More complex tests require more coding than recording

  • Support for Silverlight available but still limited (some additional coding required, e.g. when verifying the content of a ComboBox)

  • Dev & QA Editions differ slightly in available functionality (although I'm not sure that's really a con)

All in all, I would recommend it. I don't think there is really an alternative on the market. Is there?