Tuesday 29 November 2011

Easymock - create partial mocks

EasyMock is a very useful tool that lets you mock all the objects that the tested unit of code depends on. This of course assumes that you write your code in a way that allows it, e.g. using the dependency injection pattern.
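
A minimal, hypothetical sketch of such a DI-friendly class (all names are made up for illustration; the injected collaborator is what a test would replace with an EasyMock mock):
interface PaymentGateway // hypothetical collaborator, easy to mock in tests
{
    boolean charge(double amount);
}

public class OrderService
{
    private final PaymentGateway gateway;

    public OrderService(PaymentGateway gateway) // constructor injection makes mocking possible
    {
        this.gateway = gateway;
    }

    public boolean placeOrder(double total)
    {
        return gateway.charge(total);
    }
}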

It is also possible to mock only part of an object, e.g. a single method, and leave the original implementation in place for the rest. Such an object is called a 'partial mock'.

The following Java code shows how to create such a partial mock:
package com.blogspot.fczaja.samples;

import org.easymock.EasyMock;
import org.junit.Test;

public class PartialMockTests
{
    class PartialMock
    {
        void foo()
        {
            // Code inside foo() will not be invoked while testing
            System.out.println("foo");
        }

        void boo()
        {
            // Code inside boo() should be invoked
            System.out.println("boo");
            foo();
        }
    }

    @Test
    public void testPartialMock()
    {
        PartialMock partialMock = EasyMock
            .createMockBuilder(PartialMock.class) // create the builder first
            .addMockedMethod("foo")               // tell EasyMock to mock the foo() method
            .createMock();                        // create the partial mock object

        // Tell EasyMock to expect a call to the mocked foo()
        partialMock.foo();
        EasyMock.expectLastCall().once();
        EasyMock.replay(partialMock);

        partialMock.boo(); // call boo() (not mocked)

        EasyMock.verify(partialMock);
    }
}
When we execute the test, foo() will be mocked and boo() will be invoked normally. The console output of the test would be:
boo

I'm using this technique when I want to test a single method that calls other methods in the same class. I can mock all the other methods, so they behave like methods from other mocked objects.
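
The same builder scales to mocking several helper methods at once - a hedged sketch with hypothetical class and method names:
InvoiceService service = EasyMock
    .createMockBuilder(InvoiceService.class)
    .addMockedMethods("loadRates", "sendNotification") // the helpers get mocked...
    .createMock();                                     // ...while the method under test stays real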

Friday 4 November 2011

The data set DataSet1 is a shared dataset. SQL Server 2008 Reporting Services does not support shared data sets

I was working on an SSRS project using Sql Server 2008 but decided to upgrade to the R2 version, so I could share commonly used datasets across all reports. After the upgrade I converted my SSRS project to the new version and converted the existing datasets to shared ones. When I tried to build the project I got the following error:

The data set, DataSet1, is a shared dataset. SQL Server 2008 Reporting Services does not support shared data sets

Hmm, did the upgrade or conversion fail?

Solution
It turned out you need to update one of the project settings, called TargetServerVersion. To do that, right-click your SSRS project, select Properties and search for that setting:


The correct value to set is "SQL Server 2008 R2".

Monday 19 September 2011

REST with SpringMVC - Passing params in request body




Lately I was trying to pass parameters to a SpringMVC REST service. In order to do that, I used the @RequestParam annotation in the method implementing my service.

Sample service method for updating user email could look as follows:
@RequestMapping(method=RequestMethod.POST, value="/user/{username}")
public String updateUser(HttpServletResponse response,
    @PathVariable("username") String username,
    @RequestParam("email") String email)
{
    // UPDATE USER EMAIL HERE
    return "userUpdated"; // hypothetical view name
}
As you can see, the expected request parameter is defined as a method parameter and annotated with @RequestParam, specifying the parameter name. Originally it seemed that this approach only works when parameters are passed as URL params, but not when params are passed in the request body, i.e. the following HTTP request would work:
POST http:///user/?email=test%40example.com HTTP/1.1
Host:
(...)
whereas the following would not:
POST http:///user/ HTTP/1.1
Host:
(...)

email=test%40example.com

When I started googling for this issue I found several opinions stating that this is a known bug, and suggesting some workarounds, e.g. using the @RequestBody annotation and manually extracting parameter values from the body string.

However, the issue disappears if you specify the content type as one of the request headers:
Content-Type: application/x-www-form-urlencoded
After I added this header to my HTTP request, both URL and body params were captured with the @RequestParam annotation.
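
To illustrate, here is a hedged sketch of a plain-Java client sending the parameter in the request body with the required header (host, port and username are assumptions - adjust to your deployment):
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class UpdateUserClient
{
    public static void main(String[] args) throws Exception
    {
        URL url = new URL("http://localhost:8080/user/john"); // hypothetical service address
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // Without this header Spring will not bind body params to @RequestParam
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        String body = "email=" + URLEncoder.encode("test@example.com", "UTF-8");
        OutputStream out = conn.getOutputStream();
        out.write(body.getBytes("UTF-8"));
        out.close();
        System.out.println("Response code: " + conn.getResponseCode());
    }
}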

Cross-domain Single Sign On with OpenAM

OpenAM is an open-source solution for access management, i.e. authentication, authorization and more. It's maintained by ForgeRock, which took over the project after Sun abandoned it. When led by Sun, it was called OpenSSO.

I was recently responsible for the installation & configuration of OpenAM. We use it on one of our projects to provide cross-domain Single Sign On (CDSSO). At first it seemed to be a complex but relatively straightforward task but, as it turned out later, it can give you a serious headache when you try to achieve something other than the default.

Below is the short summary of pros & cons:

Pros:
  • It's a fairly mature solution, built upon its ancestor OpenSSO
  • Experienced users can benefit from its rich configuration options
  • Built-in support for multiple user data stores (LDAP, db, ...).
  • Out-of-the-box support for SAML2 protocol
  • Portability (100% java)
  • Built-in support for multi-instance configuration (for Load Balancing)
Cons:
  • Very poor documentation - most of the information about the product's installation and configuration is available on the Wiki in the form of short, informal articles. Much of the useful information can only be found on the old OpenSSO specification pages hosted by Sun, so you can never be sure whether it is still relevant to the latest version of OpenAM.
  • No community - there is actually no real community of people using the solution, which means there are no forums you can search for advice. There is only an old-school mailing list with very limited usability.
  • Not that flexible - although quite complex configuration is available, I sometimes felt limited, especially when trying to implement something other than the default, e.g. custom login screens.
As you have probably noticed, I got a bit frustrated with the "Cons" and described them in much more detail than the "Pros" ;) I'm not saying it's a bad product, but it certainly requires a lot of experience & knowledge of its features. The most painful part is the lack of decent documentation. We even got ourselves this book, but it covers only basic topics.

Be aware that doing anything other than the default may require custom tweaks or may not even be possible. If you plan to implement something that is not described in the basic tutorials, consider other solutions first.

If you want to use the SAML 2.0 functionality offered by OpenAM, I recommend reading my tutorial on how to achieve IdP-initiated SSO and Identity Federation with OpenAM and SAML.

Here are some links to other useful resources:

  1. OpenAM wiki
  2. Different deployment options
  3. Troubleshooting OpenAM (recommended!)

Tuesday 13 September 2011

STS - Waiting for changelog lock...

Problem:

When starting the tc Server that comes with SpringSource Tool Suite (STS), I get the following message and the server doesn't start:
"Waiting for changelog lock..."


Solution:

Delete folder:
$TCSERVER_HOME/spring-insight-instance/insight/data
and retry. That's it.

Friday 12 August 2011

DziennikLotow.pl

This one is mainly for my Polish readers:

It is with great pleasure that I announce the launch of a new, free tool for managing your flights online.
The Dziennik Lotów service lets you build a history of your flights, generate an interactive map of your routes and much more!

Main features of the service:
  • Intuitive flight management: adding, importing, editing, deleting
  • An interactive flight map
  • An intuitive interface for quickly saving or importing flights
  • Comprehensive statistics illustrated with charts
  • A database of over 10,000 airports!
  • Facebook integration
  • All for free!

We hope our service will be warmly received by the community of passengers and travellers.

Constructive comments are welcome!

Tuesday 12 July 2011

PHP: How to send a POST request with parameters

 

The following piece of PHP code shows how to send a POST request to a website, passing some request parameters. It may be useful if you need to process a page that is normally requested using the POST method, e.g. a form submission result page.

The request is similar to what your browser would send if you submitted a form using the POST method on a webpage.
// Create map with request parameters
$params = array ('surname' => 'Filip', 'lastname' => 'Czaja');

// Build Http query using params
$query = http_build_query ($params);

// Create Http context details
$contextData = array (
    'method' => 'POST',
    'header' => "Connection: close\r\n".
                // Content-Type tells the server to parse the body as form data
                "Content-Type: application/x-www-form-urlencoded\r\n".
                "Content-Length: ".strlen($query)."\r\n",
    'content' => $query );

// Create context resource for our request
$context = stream_context_create (array ( 'http' => $contextData ));

// Read page rendered as result of your POST request
$result = file_get_contents (
'http://www.sample-post-page.com', // page url
false,
$context);

// Server response is now stored in $result variable so you can process it

Monday 11 July 2011

Error starting Tc server in STS 2.7

So I started learning Spring MVC by example using the Spring MVC Showcase. I downloaded STS and cloned the Git repo to get a local copy of the code. I loaded the Maven project and built it successfully.

When I tried to start the VMware vFabric tc Server Developer Edition 2.5 I got the following exception:
Publishing the configuration...
Error copying file to C:/Program Files/springsource/vfabric-tc-server-developer-2.5.0.RELEASE/spring-insight-instance/backup\catalina.policy: C:\Program Files\springsource\vfabric-tc-server-developer-2.5.0.RELEASE\spring-insight-instance\conf\catalina.policy (The system cannot find the path specified)
C:\Program Files\springsource\vfabric-tc-server-developer-2.5.0.RELEASE\spring-insight-instance\conf\catalina.policy (The system cannot find the path specified)
Error copying file to C:/Program Files/springsource/vfabric-tc-server-developer-2.5.0.RELEASE/spring-insight-instance/backup\catalina.properties: C:\Program Files\springsource\vfabric-tc-server-developer-2.5.0.RELEASE\spring-insight-instance\conf\catalina.properties (The system cannot find the path specified)
C:\Program Files\springsource\vfabric-tc-server-developer-2.5.0.RELEASE\spring-insight-instance\conf\catalina.properties (The system cannot find the path specified)
Error copying file to C:/Program Files/springsource/vfabric-tc-server-developer-2.5.0.RELEASE/spring-insight-instance/backup\context.xml: C:\Program Files\springsource\vfabric-tc-server-developer-2.5.0.RELEASE\spring-insight-instance\conf\context.xml (The system cannot find the path specified)
C:\Program Files\springsource\vfabric-tc-server-developer-2.5.0.RELEASE\spring-insight-instance\conf\context.xml (The system cannot find the path specified)
Error copying file to C:/Program Files/springsource/vfabric-tc-server-developer-2.5.0.RELEASE/spring-insight-instance/backup\jmxremote.access: C:\Program Files\springsource\vfabric-tc-server-developer-2.5.0.RELEASE\spring-insight-instance\conf\jmxremote.access (The system cannot find the path specified)

(...)

Solution

I'm running the 64-bit version of STS on Windows 7. By default, programs don't run with Admin rights. It was enough to run STS as Admin (right-click the shortcut -> "Run as Administrator").
That's it! Simple, isn't it? :)

 

Getting started with Spring MVC and Hibernate

Soon I'll be joining a new project using mainly Spring MVC + Hibernate. Since I've never used those technologies and know only their general purpose I need to do some reading.

Here are the links that were recommended to me. Would you recommend any others?

Thursday 7 July 2011

CodeIgniter: Resetting form validation

 

In one of my php projects I'm using CodeIgniter and its Form Validation library. I have validation rules defined in a config file located at:

system\application\config\form_validation.php

Sample rules definition for action "item/add" could look as follows:
$config = array(
    'item/add' => array(
        array(
            'field' => 'name',
            'label' => 'lang:name',
            'rules' => 'trim|xss_clean|required|max_length[50]'
        ),
        array(
            'field' => 'type',
            'label' => 'lang:type',
            'rules' => 'trim|xss_clean|required|callback_type_check'
        )
    ),
    (...)
As you can see, I'm using both built-in and custom rules (callback_type_check).

This works fine with my 'Add Item' form.

However, I wanted to reuse the validation logic elsewhere, where the user can provide multiple items to add at once in a file in which each row represents a single item. So I read the file line by line and want to validate each line. To do that, I set the values in the $_POST array and perform the validation:
$_POST["name"] = $nameReadFromFile;
$_POST["type"] = $typeReadFromFile;
if ($this->form_validation->run('item/add') == FALSE) {
// handle validation error for current item
}
The problem is that when validation fails for one item, the error is persisted and all subsequent invocations fail as well, even if the items are valid.

So, I added a reset function to my controller that resets Form Validation library:
function _reset_validation()
{
    // Store current rules
    $rules = $this->form_validation->_config_rules;

    // Create new validation object
    $this->form_validation = new CI_Form_validation();

    // Restore rules
    $this->form_validation->_config_rules = $rules;
}
The function simply remembers the rules that were loaded from the config file when the validation object was created, creates a new validation object and restores the rules. I call it after each row is validated.
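
For completeness, a hedged sketch of how the processing loop could look (the file format, variable names and file name are hypothetical):
foreach (file('items.txt') as $line) {
    list($name, $type) = explode(';', trim($line));

    // Populate $_POST so the Form Validation library picks the values up
    $_POST['name'] = $name;
    $_POST['type'] = $type;

    if ($this->form_validation->run('item/add') == FALSE) {
        // handle validation error for current item
    } else {
        // save the valid item
    }

    // Reset the validation object so errors don't leak into the next row
    $this->_reset_validation();
}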

Tuesday 21 June 2011

Asp.Net: Handle empty list in Repeater



The Repeater in Asp.Net Webforms is a commonly used control for presenting list data. It's quite handy, but it lacks support for empty lists.

Below is a simple workaround:
<asp:Label ID="lblEmptyList"
    runat="server"
    Text="The list is empty"
    Visible='<%# RptrMyList.Items.Count == 0 %>'>
</asp:Label>
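
If you prefer to keep the logic out of the markup, a code-behind variant could look like this (a sketch using the same control IDs as above):
protected void Page_PreRender(object sender, EventArgs e)
{
    // Show the label only when the repeater has no items to display
    lblEmptyList.Visible = (RptrMyList.Items.Count == 0);
}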

Thursday 16 June 2011

Asynchronous calls to WCF service from Asp.Net

One of the functionalities I'm currently working on is document upload. After a document is uploaded to the server via the web interface (Asp.Net Webforms), it is passed to a WCF web service, which processes it and returns the response. The whole time the user interface is locked, and the end user waits for the upload confirmation.

This typical synchronous scenario may be very frustrating for the users, because they are blocked until the operation completes. The bigger the file to process, the worse it gets.

We decided to change that, so an asynchronous upload is used: once the file is sent to the server, a confirmation is immediately displayed to the user. The confirmation states only that the upload process was successfully started, and the user can continue working with the web app while the file is processed.

In such a scenario you'll also need to display the upload results at some stage. There are many possible options for displaying the final operation results (e.g. ajax calls combined with some popups, an additional tab etc.). This part is not covered in this post.

Implementation:

Because the file is processed by a web service, I wanted to use the WCF mechanism for asynchronous service calls. The mechanism is quite easy to use. When you generate the service proxy in Visual Studio, select "Generate asynchronous operations" (under "Advanced" options). This adds an additional "Async" method and a Completed event for each operation. All you need to do is set the handler for the Completed event and call the Async method on the generated client, e.g.
client.UploadDocumentCompleted +=
new EventHandler<UploadDocumentCompletedEventArgs>(UploadDocumentCallback);
client.UploadDocumentAsync(fileToUpload);

In order to make this work on your Aspx page, you'll need to add Async="True" to your page directive (see my other post for details).

Some useful links on how to call a WCF service asynchronously:

Problem:

So I implemented my async service call in the code behind of my Aspx page, and then it turned out it was no good in my case. I was expecting that after the upload operation of the target WCF service was called, my page would return a response to the user and the UI would no longer be blocked. It turned out that although the service was called asynchronously, the page still waited until the operation completed before sending the response to the user.

I started to search for the reason for this behaviour and stumbled upon this article. It explains the concept of asynchronous service calls from Asp.net pages. It works differently than I expected: the async operation must complete before the Page's PreRenderComplete event, so the page waits for the service call results. It still allows you to boost performance (e.g. by releasing threads to the pool), but not in the way I needed.

Workaround - starting threads manually:

Because the async service proxy didn't solve my problem, I decided to implement a workaround. When I need to call the upload operation of the service, I create a new thread and call the service within that thread. Since the service is called in a new thread, the page doesn't wait for the service operation to complete.

Sample class for the upload thread:
using System.Threading;

public class UploadThread
{
    private byte[] _byteArray;

    public UploadThread(byte[] byteArray)
    {
        _byteArray = byteArray;
        // The work item is picked up by a thread from the ThreadPool
        ThreadPool.QueueUserWorkItem(this.Run);
    }

    protected void Run(object obj)
    {
        try
        {
            MyServiceClient client = new MyServiceClient();
            client.UploadDocument(_byteArray);
            client.Close();
        }
        catch (Exception e)
        {
            // Handle exception here
        }
    }
}
You pass the doc content in the constructor and it starts a new thread that invokes the "Run" method. That method calls the service (synchronously in my case). To be more exact, the thread that processes this task is taken from the ThreadPool (see the ThreadPool.QueueUserWorkItem call in the constructor).

To start the thread simply call the following code from the code behind your aspx page:
new UploadThread(fileToUpload);
where fileToUpload is an array of bytes representing the doc content.

One thing to note here is that the new thread will not have direct access to the HttpContext of the page. If you need something from it in your thread, simply pass it in the constructor.
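
A hedged sketch of such an extended constructor (the user name is a hypothetical example of a value the page would read from its HttpContext):
private string _userName; // hypothetical field for the captured value

public UploadThread(byte[] byteArray, string userName)
{
    _byteArray = byteArray;
    // Capture context values up front - HttpContext.Current is not available on pool threads
    _userName = userName;
    ThreadPool.QueueUserWorkItem(this.Run);
}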

Further enhancements:

Thinking about further enhancements, I decided to configure the service operation to be one-way. This means that after the client calls the service, it doesn't wait for the service response. As a result, the upload thread is released back to the pool earlier.
[OperationContract(IsOneWay = true)]
void UploadDocument(byte[] byteArray);

An additional performance enhancement may be using streams instead of byte arrays when passing documents to the WCF service. I couldn't implement this in my case because of other limitations, but you can find a nice example of it here.

Wednesday 15 June 2011

WCF: How to increase request size limit

When you generate a new WCF web service using Visual Studio, it uses "wsHttpBinding" with its default configuration. The default endpoint configuration generated for a new web service looks like this:
<endpoint
    address=""
    binding="wsHttpBinding"
    contract="Your.Namespace.IService1" />

This will work well in most cases at the development stage. However, before deploying the web service you should consider adjusting the default settings to your needs (e.g. changing the security settings).

The problem you may often hit already at the development stage is the following error, thrown when you send a large amount of data to your service (e.g. a large file):

The remote server returned an unexpected response: (400) Bad Request

The request fails because its size exceeds the limit allowed by your service. The default request size limit is 65536 bytes (i.e. 64KB). In order to change this limit you need to use a custom binding configuration. The following example sets the request size limit to 50MB (i.e. 52428800 bytes):
<endpoint
    address=""
    binding="wsHttpBinding"
    bindingConfiguration="myCustomConf"
    contract="Your.Namespace.IService1" />
<bindings>
    <wsHttpBinding>
        <binding name="myCustomConf"
                 maxReceivedMessageSize="52428800"
                 maxBufferPoolSize="52428800" >
            (...)
        </binding>
    </wsHttpBinding>
</bindings>

In the example above I used a custom binding configuration named 'myCustomConf'. You can read more about wsHttpBinding properties here.

The configuration above takes care of your WCF service. However, you'll most likely need to increase the request size limit for the Asp.NET runtime as well:
<system.web>
    <httpRuntime maxRequestLength="51200" />
    (...)
</system.web>

Note that the Asp.Net maximum request size is set in KB, whereas the WCF configuration uses bytes! So, in order to allow requests of 50MB you need to set a value of 51200KB (50 * 1024).

Thursday 9 June 2011

Convert Excel date into timestamp

Excel stores dates internally as the number of days since January 1, 1900.
For example, "June 9th, 2011 10:30 AM" is stored as "40703.4375".
40703 is the number of full days since 01/01/1900 and 0.4375 represents the time of day (10.5/24 = 0.4375).

When you process dates read from an Excel spreadsheet (e.g. using PHPExcel) you often want to convert them into a UNIX timestamp i.e. a number of seconds elapsed since midnight of January 1, 1970 UTC.

Here is a PHP code to do that:

// Number of days between January 1, 1900 and January 1, 1970
// (25569 is the Excel serial date number for January 1, 1970)
define("MIN_DATES_DIFF", 25569);

// Number of seconds in a day
define("SEC_IN_DAY", 86400);

function excel2timestamp($excelDate)
{
    if ($excelDate <= MIN_DATES_DIFF)
        return 0;

    return ($excelDate - MIN_DATES_DIFF) * SEC_IN_DAY;
}
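
For example, using the date from the beginning of this post:
// "June 9th, 2011 10:30 AM" is stored by Excel as 40703.4375
echo excel2timestamp(40703.4375); // prints 1307615400, i.e. Thu, 09 Jun 2011 10:30:00 UTC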
Although the code above is written in PHP, the function should look very similar in any other language, e.g. C# or Java. If the provided date is earlier than 1/1/1970, the minimal timestamp value (0) is returned.

Alternative solution:

If you provide the Excel spreadsheet that you later on read from in your app you could add a hidden cell that would calculate the timestamp for you, within the spreadsheet.
Assuming that B2 is the cell that stores your date the formula for calculating timestamp would be:
=(B2-DATE(1970,1,1))*86400
Now you only need to read the calculated value from the hidden cell.

Wednesday 8 June 2011

Localized error messages

If you are not a native English speaker it is very likely that you're using a localized operating system. I'm currently working on Windows 7 Enterprise with system language set to Polish.

The disadvantage of this is that when I develop .net code on my machine and an error occurs, the error message I get is localized. This is supposed to help me understand the cause, but it actually does the exact opposite. After an error message is translated into Polish, it's usually totally meaningless to me. This is mainly because most documentation is written in English and many technical terms are simply hard to translate directly.

Also, when I search the web for the error message, there are obviously many more results for the English version.

Another downside is that I can't directly share the error message with my foreign teammates, as they wouldn't understand it.

Lately I discovered a web page that can unlocalize such messages for me: www.unlocalize.com. It allows you to search for a message or browse the catalog. It also offers browser plugins for faster unlocalization.

Asynchronous operations are not allowed in this context

While calling a WCF service asynchronously from the 'Code Behind' of my .aspx page, I got the following error:

Asynchronous operations are not allowed in this context. Page starting an asynchronous operation has to have the Async attribute set to true and an asynchronous operation can only be started on a page prior to PreRenderComplete event.

Solution:


Add Async="true" to you Page directive i.e.
<%@ Page Language="C#" Async="true" ...

You can read more about the Async attribute here.

BTW, the original error message was in Polish, as I use a localized operating system:

Operacje asynchroniczne nie są dozwolone w tym kontekście. Strona rozpoczynająca operację asynchroniczną musi mieć atrybut Async o wartości True. Operację asynchroniczną można uruchomić na stronie tylko przed zdarzeniem PreRenderComplete.

I translated it using the tool described in my next post.

Wednesday 25 May 2011

The report server cannot decrypt the symmetric key that is used to access sensitive or encrypted data in a report server database.

I got the following error when trying to use SQL Server Reporting Services manager:

The report server cannot decrypt the symmetric key that is used to access sensitive or encrypted data in a report server database. You must either restore a backup key or delete all encrypted content. (rsReportServerDisabled) Get Online Help
Bad Data. (Exception from HRESULT: 0x80090005)

Solution:

You will need to reset your encryption keys. To do that:
  1. Open Reporting Services Configuration Manager and select 'Encryption keys' tab

  2. Click 'Delete' in 'Delete Encrypted Keys' section:
    SQL Server Reporting Services Configuration Manager
  3. You will need to re-enter all connection strings and db credentials for all your data sources

Friday 8 April 2011

Dynamic parameter set for a stored procedure

Lately I was trying to figure out the best way to pass a dynamic number of parameters to a stored procedure. The stored procedure was supposed to do a simple SELECT using the parameters provided. The problem was that the set of parameters could change dynamically, depending on the app configuration.

There are several ways to achieve that. My first choice was to use a single parameter containing all the serialized search terms. The terms would be extracted from that param within my stored procedure. Then I would build the query dynamically using those terms:
(...)
BEGIN
    IF @forename IS NOT NULL
    BEGIN
        SET @sqlQuery = @sqlQuery + ' AND forename LIKE ''' + @forename + '%''';
    END
END
BEGIN
    IF @surname IS NOT NULL
    BEGIN
        SET @sqlQuery = @sqlQuery + ' AND surname LIKE ''' + @surname + '%''';
    END
END
(...)
The problem with that approach is that building dynamic sql is generally slow. I was advised to try an alternative solution: pass all possible search terms as separate parameters and use a static query with null checks:
(...)
AND (@forename IS NULL OR forename LIKE @forename+'%')
AND (@surname IS NULL OR surname LIKE @surname+'%')
(...)
I wasn't sure which one was better, especially when there are many possible search terms. The first approach uses only the specified terms, but it takes time to build the query. In the second approach the query is static, but there are multiple null checks, which also take time (the more possible criteria there are, the worse it gets).

I did some benchmarking using Sql Server 2008 and a table with 1000 records. It turned out that the approach with dynamic sql was slightly faster. I ran my tests around 10 times for each param set and then calculated the average duration (discarding values that were far from the average).

Example: below are the results when 2 params were set. Depending on the param values, a different number of results was returned:


                                       Dynamic SQL [ms]   Static SQL [ms]
Params returning around 250 results          217                241
Params returning around 15 results            10                 24
For other combinations of params (e.g. 3 other params returning a similar number of results) the duration differences were similar. This shows that from a performance perspective dynamic sql is better than the second approach, especially if you have many possible search criteria.

So far I haven't come up with a better approach. Any suggestions?

Monday 14 March 2011

SSRS - dataset element as parameter

Recently I worked on an SSRS (SQL Server Reporting Services) report that used 2 datasets, each using a different data source (see picture below). The dataset "DataSet2" used a stored procedure to retrieve the desired info. The problem was that as the parameter for this stored procedure I needed to use the value of the field "Field1" from dataset "DataSet1".

When trying to compile such a report, I got the following error:

The expression used for the parameter ‘@param1’ in the dataset ‘@param1’ refers to a field. Fields cannot be used in query parameter expressions.

2 SSRS datasets with different datasources
It turned out that such a construction is not allowed. I'm guessing it's because you cannot define the order in which the datasets are evaluated.

To achieve such a dependency you need to use a subreport:
  1. Create a new report that will be used as subreport

  2. Move the dataset "DataSet2" to your new report

  3. Move all the report content that depends on "DataSet2" to your new report

  4. Add your new report to the original one as subreport (see how)

  5. Define a new subreport parameter that takes its value from the "Field1" field of "DataSet1" (see the expression sketch below the list)

  6. Adjust the "DataSet2" in your subreport to use the new subreport param
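
For step 5, the parameter value could be an expression along these lines (a sketch assuming the dataset and field names used in this post):
=First(Fields!Field1.Value, "DataSet1")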

Wednesday 9 February 2011

Redirect to different domain in .htaccess

If you own several domains pointing to the same page and would like to force users to always use the same domain, you can use apache mod_rewrite by editing the .htaccess file. Let's say you own
  • http://mypage.com
  • http://my-page.com
  • http://www.mypage.com
  • http://www.my-page.com
but you want everybody to use only http://mypage.com.
Here is how your .htaccess file could look:
RewriteEngine On
rewritecond %{http_host} ^my-page\.com [nc,OR]
rewritecond %{http_host} ^www\.my-page\.com [nc,OR]
rewritecond %{http_host} ^www\.mypage\.com [nc]
rewriterule ^(.*)$ http://mypage.com/$1 [r=301,nc]

This may be useful if you want your page to always be associated with the same, unique address.
This mechanism is commonly used by webpage owners who don't want other people to profit from the popularity of their pages and steal their traffic. They simply buy similar domains (like in our example) and redirect them all to their main domain.

Wednesday 2 February 2011

.NET Reflector not free anymore

For those of you who use RedGate's .NET Reflector I have sad news. I just received an email stating that they are changing their policy and .NET Reflector won't be a free tool anymore :(
Starting from version 7 it will cost you $35 for a perpetual license. Apparently they can no longer afford to work on it for free. You can read more on this decision here.

What is .NET Reflector?
For those of you who don't know the tool - it's a very good class browser, analyzer and decompiler for .NET. I use it quite often to decompile dlls if I need to check what's actually in there.

Can you recommend a similar free tool?

Sunday 30 January 2011

JQueryUI tabs covering custom menu

On one of the websites I've been recently working on, I have a custom expandable menu using javascript. On the same page I use JQueryUI tabs. The menu is placed directly over the tabs container and its items are implemented as list elements (<LI> tags) styled appropriately. The problem was that when the menu was expanded, some menu items were covered by the tabs, so it looked like this:

Menu items covered by JQueryUI tabs
Defining a high z-index value for the elements that were covered by the tabs, i.e. for the list items, didn't help. While searching for the correct solution I saw multiple complaints about very similar behavior, so I decided to post my solution.

Solution
If you encounter a similar problem, the solution is quite simple - set the high z-index value not on the menu items but on their parent container! In my case it was the entire custom menu div. If it doesn't work with the direct parent, work your way up the HTML DOM to find the right one.
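
A hedged sketch of the fix (the selector is hypothetical - use whatever matches your menu container):
/* Raise the whole menu container above the JQueryUI tabs */
#custom-menu {
    position: relative; /* z-index only applies to positioned elements */
    z-index: 1000;
}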

Wednesday 26 January 2011

UI Test Automation Best Practices


Below are some important rules that I learned while working on UI Test Automation tasks. They can make your tests more reliable and efficient. They may be especially useful if you are working on a data-driven test with many iterations, whose execution time may be very long.

1. Test scenarios and data ordering

If you were ever involved in testing, you should be familiar with the concept of test scenarios. Each good UI test (automated or not) is based on a scenario. A test scenario should cover all the situations that need to be tested. In the case of test automation, the performance of the tests depends on how well the scenario is designed. Avoiding repetition of the same operations (logging off/on, criteria selection etc.) is the key challenge. This usually requires adding some additional logic, but the amount of execution time saved is worth it. When working on a scenario for a data-driven test it is also necessary to define the order of the test data to ensure maximum efficiency.

Example:
You need to test a form with 2 cascading drop-down lists (i.e. the content of the 2nd list depends on the value of the first one). The first list contains countries and the dependent list displays cities in those countries. The form submission should only succeed for 1 specific city for the current user. Here is how your sample test data could look:

user   password    Country   City          Success
John   bigmacXXL   USA       New York      1
Hans   Bratwurst   Germany   Berlin        1
Hans   Bratwurst   USA       New York      0
John   bigmacXXL   USA       Los Angeles   0
Hans   Bratwurst   USA       Los Angeles   0
John   bigmacXXL   Germany   Berlin        0
John   bigmacXXL   Germany   Munich        0
Hans   Bratwurst   Germany   Munich        0

A very basic test would log in the user, set the current combination of country-city, check the result and log off the user. Some optimizations to consider here:
  1. Move the log-off step to the beginning. Perform log-off and log-in only if the current user differs from the one in the previous test. To ensure maximum efficiency, order the data set by username.
  2. Before selecting a country, ensure it's not already selected. Also, add additional ordering (by country) to ensure a minimum number of reloads of the city ddl.
  3. If a successful city selection redirects to another screen and a failed selection simply displays an error message on the same screen, consider ordering the test data set by test result, so that the successful submission happens at the very end of the tests for the current city and user. This can minimize the number of screen redirections.
  4. ...

2. Timeouts

When searching for an element or waiting for something to happen, you need to define timeout values. This ensures that your test ends in a reasonable time even if something goes wrong. If a single iteration reaches any of the timeouts, it should be marked as failed and the test should continue with the next iteration.

Timeout values are usually hard to define at the beginning. They depend on many factors, like application type, machine performance, bandwidth etc. Timeouts are usually adjusted after the first couple of runs on a bigger data set.

The default timeout values that you use when designing your test should be a bit higher than the required minimum. If you see after the first test run that too many iterations failed because of the timeouts, increase them slightly and re-run the test. The perfect situation is when 100% of tests pass and the whole run doesn't take too long. This may be hard to achieve if the tested app and testing environment are not stable enough.

3. Check for existence rather than not existence

Whenever you are thinking about adding a test step that checks whether an element doesn't exist, consider finding an alternative existence check.

Example:
You want to check that after a button is clicked on a web page, the invoked action completes successfully. You can either verify the non-existence of an error message or the existence of a success confirmation. Both checks require answering some tricky timeout questions (e.g. how long would you wait for the message to appear?). However, verifying non-existence has a serious performance issue - each successful test run will wait for the whole allowed time limit, whereas an existence check completes immediately after the success confirmation appears.

4. Locate elements wisely

When you create a test, there are several ways to locate an element that you need to perform an action on. Some older tools allow you only to move the mouse to a location defined by coordinates, e.g. move the mouse 100px left and 50px down from the edge of the screen or browser window. This is not reliable, as the coordinates depend on screen resolution, browser window size etc. Current testing tools allow you to locate the desired element using different approaches.

Example:
If you're testing a web page, you can identify an element in the HTML DOM by tag name or by its attributes (like id, name etc). You can do the same with apps that use XAML (like Silverlight). I don't have experience with testing regular desktop apps, but I'm quite sure there is a way to avoid coordinates.

5. Avoid often locating

This one is related to the previous point. Even if you use a reliable method to find an element in the GUI, don't forget about efficiency. Always try to optimize your search for an element to save some precious time. If possible, keep in memory the elements that you often interact with, to avoid locating the same element multiple times.

Example:
Let's reuse the example with cascading drop-down lists described above. You can locate the first one using any reliable technique (e.g. HTML DOM search). The second one will probably be its sibling, or they share a parent indirectly. Use this to locate the 2nd DDL rather than searching through the entire DOM again. Once you have them in memory, execute actions and checks on them without any additional locating, as sketched below.
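
A hedged sketch of the idea, using Selenium WebDriver syntax as a stand-in for whatever tool you use (element ids are hypothetical):
// Search the whole DOM only once, for the first list
WebElement countryDdl = driver.findElement(By.id("country"));
// Locate the city list relative to the country list instead of another full DOM search
WebElement cityDdl = countryDdl.findElement(By.xpath("following-sibling::select"));
// Keep both references in memory for all further actions and checks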

6. Avoid pauses

Fixed-length pauses will always affect the performance of your test. A tester may think about using a pause when a test step needs to wait for something to happen before it can execute. An alternative to a pause is a "wait-step". A wait-step waits for a condition to be fulfilled. The advantage of this approach is that it only takes as much time as required. It may also be more reliable, because it can wait longer than you would specify a pause for, if something takes longer than usual.

Example:
The tested UI contains an animation that normally takes around 2 seconds, but under some circumstances (e.g. slow machine, low bandwidth) can take a bit longer. When using pauses, you'd probably define a 3-second one to have some reserve. This would cause each test run to take 1 second longer than required (under normal circumstances). Also, if the animation is unusually slow and 3 seconds are not enough, the following test step may fail.

You can eliminate those threats by using a wait-step instead. The challenge here is to define an appropriate condition. Let's say our animation ends with displaying an image on the screen. As a test condition you could use image visibility, i.e. wait until the image is visible.
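
In Selenium WebDriver syntax (again just a stand-in, with a hypothetical element id), such a wait-step could look like this:
WebDriverWait wait = new WebDriverWait(driver, 10); // 10 s is an upper bound, not a fixed pause
// Proceeds as soon as the image becomes visible; only a genuinely broken run waits the full 10 s
wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("animationResult")));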

7. Hooks in tested apps

It's a commonly accepted practice to include some "hooks" for UI tests in the application being tested. Hooks are pieces of code that help the testing framework invoke some actions. In theory, none of the hooks should be required to complete the tests. A UI test should do exactly what an end user would do, e.g. move the mouse cursor over the button and click it, instead of invoking the button's click action in code. In practice, there may be circumstances when using hooks is justified.

Example:
I've been recently working on UI tests for a Silverlight app. One of the screens contained a world map for region selection. The regions were not separate GUI elements, so it was hard to select the appropriate one with my UI test. The application itself recognized which part of the map was clicked based on some twisted pixel-colour logic. With no hooks available, I would have to record the mouse click for each available region using coordinates, which is not good at all (see the 'Locate elements wisely' point). In addition, defining a new region in the app would require adding new coordinates to the test.

Instead, I asked the developers to include an additional method in the code that allowed me to select the appropriate region by name. My testing framework supports executing public methods on Silverlight objects. This was just 2 lines of code and didn't introduce any threat. The method did exactly what a mouse click on a region would cause. Also, the region selection wasn't really in the scope of my UI tests, but just a step required to move to the screen that needed to be tested.

8. Dynamic URLs

If you are working on web application tests, it is useful to make the url of the tested app configurable. This allows testing different builds (dev, system-test, live, etc) with the same test script. If your test is data-driven, you can specify the url in the data source as you would do with any other test data.

9. Recovery

If your tests take a long time to complete (e.g. data-driven tests with many iterations), it's a good practice to implement a recovery mechanism. Remember that it is always possible that the tested app or browser window closes unexpectedly. You don't want to find out that the tests you left running for the whole night stopped after 1h because the app crashed. If your testing framework allows it, you can check at the beginning of each iteration whether the tested app/webpage is available, and restart it if required.

10. Logging

Log the results of your tests so you can easily identify the reasons for any failures. If you are designing a data-driven test, it is very useful to have a test summary at the end. Another useful practice is taking browser or desktop screenshots on failure. A screenshot can tell you what went wrong much faster than a complex exception info.

Example:
In the summary part of my data-driven UI tests I always print a comma-separated list of the IDs of failed tests. After such a test completes, I can easily copy-paste the ids into the sql query that retrieves my test data and re-run only the failed tests.
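
For instance, with hypothetical table and column names, the re-run query could look like this:
-- Re-run only the iterations that failed in the last run
SELECT * FROM TestData WHERE TestDataId IN (12, 47, 103);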

11. Success-oriented tests

If you are creating a test that will be executed multiple times using different data, remember that for a healthy application and test dataset it should have a high pass rate. By "pass rate" I mean the percentage of passed iterations/runs. Very often, when working with an incomplete target application or test data, my tests initially have a low pass rate and take a very long time to complete. I'm then tempted to update the test so it performs faster under the current circumstances. Rather than doing that, you should focus on correcting your test dataset or making the appropriate developers improve the target app (e.g. by fixing bugs). Of course, introducing tweaks to your test is justified if they will also improve performance with the complete target app and test dataset.

12. Further Reading

If you're interested in UI Test Automation you can also see my other posts:

Saturday 22 January 2011

What is UI Test Automation about?

In my last post I reviewed a tool for UI Test Automation for Silverlight. For those of you who have never worked on UI test automation before, I decided to explain what it's actually about.

The goal is to create a set of tests that simulate user interactions with the interface. It's not only about manual actions like mouse clicks, but also about visual verification of the expected results. Example: if a tester clicks a button and visually checks that it caused a message to appear, you'd need to create at least 2 steps for that (covering the click and the message check).

The tool I described earlier this week offers an intuitive test recorder that integrates with Internet Explorer. It simply follows user interactions with the webpage and records each manual action as a graphical step. The visual verification steps need to be added manually. This part requires more caution, as it may be crucial for the test results. A human would immediately spot an error message appearing on the screen; an automated test will only mark the test as failed if it contains an appropriate verification step. Obviously, the richer the app is, the trickier it gets. A lot of visual effects (animations, popups, drag&drops) can cause you a serious headache ;)

Having a recorder available makes the creation of basic tests much easier. Recorded actions are presented as graphical steps in your test project. They can be reordered, reused or combined with other elements (e.g. with if/else logic). More advanced tools offer data binding of the steps without writing any code. Example: let's say your test fills a form and submits it. While recording you provided some data, but you would like to retest the scenario using different data sets. If you data-bind the test, you'll be able to re-run it automatically for each dataset available in the source. Available data sources differ between tools, but the most popular ones are databases, excel spreadsheets and xml files.

Although the recorder and graphical steps make it easy to start with basic tests, I found myself creating most of the tests in code. It is possible to convert each graphical step into a coded step as well. This gives you more control over the test. Some functionality is usually only available via code, because it is almost impossible to implement all the possibilities offered by a programming language in a graphical tool. I can't imagine creating a complex test relying only on the options offered by graphical steps. This may be a serious problem for testers with no programming experience.

In my next post I'll try to provide some tips & tricks that I learned while working on UI Test Automation.

PS. The provided examples are usually related to web applications. This is because I mainly work on such apps. However, all the rules mentioned above also apply to desktop applications.

Wednesday 19 January 2011

UI Tests Automation for Silverlight

I'm currently working on a UI test automation task for a Silverlight interface. There are not many tools available for that, so we decided to evaluate the most popular one, i.e. Telerik's WebUI Test Studio. I chose the Developer Edition, as it easily integrates with Visual Studio. For testers, managers etc. there is also a standalone version available.

After a few days of playing with it, I can already say it's quite powerful. Once you get familiar with it and learn a few tricks, you can easily develop complex UI tests that will save your team a lot of time. Below is a short summary of its pros & cons:

Pros:
  • Supports Silverlight testing

  • Intuitive test recorder

  • For basic tests no coding required (also including logic: if/else, loops)

  • Integrated data access (spreadsheets, csv, database, ...) for data driven tests

  • Multiple video tutorials available

  • Full integration with Telerik's RAD controls

  • Strong community

  • Fast support

Cons:
  • Documentation doesn't cover the entire functionality

  • More complex tests require more coding than recording

  • Support for Silverlight is available but still limited (some additional coding is required, e.g. when verifying the content of a ComboBox)

  • Dev & QA Editions differ slightly in available functionality (although I'm not sure that's really a con)

All in all, I would recommend it. I don't think there is really an alternative on the market. Is there?