Wednesday, 24 October 2012

Using task based async WCF client methods in synchronous scenarios

If you are already working with Visual Studio 2012 you may have noticed that the dialog box for generating Service References has changed a bit. It offers an additional option to generate task-based asynchronous client methods (see the picture below). In VS2010 you could generate async operations for the service client as well, but in VS2012 you have an additional choice:
  • Generate asynchronous operations
    This one gives you the exact same result as choosing to generate async operations in VS2010, i.e. for each service method you will get 2 additional client methods: *Async & *Completed. One of my previous posts partially explains how to use them and provides more links on that topic.
  • Generate task-based operations
    When writing your async code this option allows you to take advantage of the Task Parallel Library (TPL) that was introduced with .Net 4. This post is not meant to be a TPL tutorial, but there are many sites explaining the concept. My personal favourite is "Introduction to Async and Parallel Programming in .NET 4" by Dr. Joe Hummel. For now it is enough to say that using tasks (i.e. the TPL) can make handling your async scenarios easier and your async code more readable/maintainable.

The interesting thing is that since .Net 4.5 and its async/await feature you can easily benefit from using task-based async operations even in fully synchronous scenarios. You may wonder what the advantages of using async techniques & libraries in fully synchronous scenarios could be. The answer is: performance! Regular synchronous methods block the current thread until the response is received. With task-based operations combined with the async/await feature, the thread is released to the pool while waiting for the service response.

Obviously this advantage only applies in some scenarios, e.g. in web applications running on IIS, where requests are handled in parallel by threads from the pool. If you are working on a single-threaded client app you will not benefit from this approach.

Sample code

In this post I'll re-use the sample code from my last post. I'll convert the synchronous calls to my sample web service so they use task-based async operations. So, let me remind you of the interface of the web service that we will use:

[ServiceContract]
public interface IStringService
{
    [OperationContract]
    string ReverseString(string input);
}
Now, let's update the service client. There is actually not much conversion that needs to be done. First, you need to ensure that the generated code includes task-based async operations (right click on the service reference -> "Configure service reference"). Once you have async operations generated, you need to transform the existing code to work with tasks and use the async/await feature:
public async Task<string> ReverseAsync(string input)
{
    return await _client.ReverseStringAsync(input);
}
Compared to the original, purely synchronous method we have the following changes:
  • Async keyword added to the method signature
  • Method return type changed to Task<string> (i.e. original return type 'wrapped' with Task)
  • Different service client method used: ReverseStringAsync instead of ReverseString
  • Await keyword added before the client method call
  • "*Async" suffix added to the method name (recommended convention)
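For comparison, this is the original, purely synchronous helper method that the changes above were applied to (as shown in my previous post):

```csharp
public string Reverse(string input)
{
    // Blocks the calling thread until the service responds
    return _client.ReverseString(input);
}
```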
These changes are enough to start taking advantage of the async features of .Net 4.5. We only need to update our tests, but there are only a few changes here as well.

Updated integration test:

[TestMethod]
public void TestStringHelper_Reverse()
{
    StringHelper sh = new StringHelper();
    string result = sh.ReverseAsync("abc").Result;
    Assert.AreEqual("cba", result);
}
Notice that I only added a call to the .Result property, as this time the method returns a Task<string>. Reading .Result blocks until the task completes.
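If your test framework supports async test methods (MSTest in VS2012 does), you could also await the task directly instead of blocking on .Result - a variant of the same test:

```csharp
[TestMethod]
public async Task TestStringHelper_Reverse_Async()
{
    StringHelper sh = new StringHelper();
    string result = await sh.ReverseAsync("abc");
    Assert.AreEqual("cba", result);
}
```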

And the unit test:

[TestMethod]
public void TestStringHelper_Reverse()
{
    // Create channel mock
    Mock<IStringServiceChannel> channelMock = new Mock<IStringServiceChannel>(MockBehavior.Strict);

    // setup the mock to expect the ReverseStringAsync method
    channelMock.Setup(c => c.ReverseStringAsync("abc")).Returns(Task.FromResult("cba"));

    // create string helper and invoke the ReverseAsync method
    StringHelper sh = new StringHelper(channelMock.Object);
    string result = sh.ReverseAsync("abc").Result;
    Assert.AreEqual("cba", result);

    //verify that the method was called on the mock
    channelMock.Verify(c => c.ReverseStringAsync("abc"), Times.Once());
 }
The main thing to notice here is that we use Task.FromResult to create a task wrapping our sample result when mocking the client method.
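Task.FromResult is new in .NET 4.5. If for some reason you were stuck on the .NET 4 TPL, a TaskCompletionSource could produce the same pre-completed task - a sketch:

```csharp
// Equivalent of Task.FromResult("cba") without the .NET 4.5 helper
var tcs = new TaskCompletionSource<string>();
tcs.SetResult("cba");
channelMock.Setup(c => c.ReverseStringAsync("abc")).Returns(tcs.Task);
```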

Asp.Net MVC

I already mentioned that the described approach will only be beneficial in some types of apps, e.g. web apps running on IIS. A sample Asp.Net MVC4 controller action using our async client could look as follows:
public async Task<ActionResult> Index()
{
    ViewBag.Message = "Reversed 'abc' string: "
                      + await _stringHelper.ReverseAsync("abc");
    return View();
}
Again, notice the async and await keywords added to the action method, supported in MVC4.
A detailed tutorial for using async/await with MVC4 can be found here.

Sample code for this post on my github:
https://github.com/filipczaja/BlogSamples/tree/master/MockingWCFClientWithMoqAsync

Monday, 15 October 2012

Mocking WCF client with Moq

Performing basic web service calls from your code using WCF is relatively easy. All you have to do is add a new service reference to your project, pointing to the service url. WCF will automatically generate a client class for you that you can use to call service methods.

The web service

Let's say we have a web service that performs various string transformations, e.g. reversing specified strings (obviously not a real-life scenario, as you wouldn't normally call a web service to do that). The service implements the following interface:

[ServiceContract]
public interface IStringService
{
    [OperationContract]
    string ReverseString(string input);
}
In the sample code for this post I created a basic WCF implementation of that service. However, the service itself doesn't need to be created using WCF, as long as it uses SOAP.

The client

The simplest class that consumes this service could look as follows:
public class StringHelper
{
    StringServiceClient _client;

    public StringHelper()
    {
        _client = new StringServiceClient();
    }

    public string Reverse(string input)
    {
        return _client.ReverseString(input);
    }
}
The StringServiceClient is a class generated automatically when adding a service reference. All you have to do is instantiate it and call the chosen method.

There is one issue with that approach though: you cannot unit test your StringHelper.Reverse method without actually calling the web service (because the classes are tightly coupled). When writing proper unit tests you should mock all the class dependencies, so you can focus on a single unit of code. Otherwise it becomes an integration test.

When using Moq you can only mock interfaces or virtual methods. The generated StringServiceClient doesn't implement any interface that would expose the service contract. Also, methods generated in that class are not virtual.

Luckily, the code generated when adding the service reference contains a channel interface that we can use. The channel interface extends the service contract interface, so you can invoke all service methods through its implementation. This means we can update the client to remove the tight coupling:

public class StringHelper
{
    IStringServiceChannel _client;

    public StringHelper()
    {
        var factory = new ChannelFactory<IStringServiceChannel>("BasicHttpBinding_IStringService");
        _client = factory.CreateChannel(); 
    }

    public StringHelper(IStringServiceChannel client)
    {
        _client = client;
    }

    public string Reverse(string input)
    {
        return _client.ReverseString(input);
    }
}
As you can see, instead of working with the generated client we create a channel instance using ChannelFactory and the binding name "BasicHttpBinding_IStringService". The binding name can be found in the app.config file, which is automatically updated with the WCF endpoint configuration when adding the service reference.
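For reference, the relevant app.config fragment could look roughly like this (the address and contract name below are placeholders - your generated configuration will differ):

```xml
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding name="BasicHttpBinding_IStringService" />
    </basicHttpBinding>
  </bindings>
  <client>
    <!-- The endpoint name passed to the ChannelFactory constructor -->
    <endpoint address="http://localhost:8732/StringService/"
              binding="basicHttpBinding"
              bindingConfiguration="BasicHttpBinding_IStringService"
              contract="StringService.IStringService"
              name="BasicHttpBinding_IStringService" />
  </client>
</system.serviceModel>
```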

Testing

A simple integration test for our client code:
[TestMethod]
public void TestStringHelper_Reverse()
{
    StringHelper sh = new StringHelper();
    string result = sh.Reverse("abc");
    Assert.AreEqual("cba", result);
}
This test would work with both versions of the client presented above.

Now for the actual unit test, which mocks the service channel object using Moq:

[TestMethod]
public void TestStringHelper_Reverse()
{
    // Create channel mock
    Mock<IStringServiceChannel> channelMock = new Mock<IStringServiceChannel>(MockBehavior.Strict);

    // setup the mock to expect the Reverse method to be called
    channelMock.Setup(c => c.ReverseString("abc")).Returns("cba");

    // create string helper and invoke the Reverse method
    StringHelper sh = new StringHelper(channelMock.Object);
    string result = sh.Reverse("abc");
    Assert.AreEqual("cba", result);

    //verify that the method was called on the mock
    channelMock.Verify(c => c.ReverseString("abc"), Times.Once());
}

Sample code for the service, client & test on github:
https://github.com/filipczaja/BlogSamples/tree/master/MockingWCFClientWithMoq

Wednesday, 19 September 2012

The simplest chat ever... just got simpler

In my last post I described a simple chat implementation using the SignalR framework. At the time of writing it I thought it was really as simple and clean as it could get. However, my work colleague Lukasz Budnik proved me wrong by suggesting a simpler solution that they used while creating the hackathon.pl portal. He suggested replacing my server-side components with a cloud-hosted push notifications provider.

I admit I was thinking in a more traditional way, where all application components are hosted in-house, on servers owned by our company or our clients. Nowadays, it’s often a better idea to use existing cloud-hosted solutions than to implement some system components ourselves. I definitely recommend considering this option if you have no important blockers preventing you from using SaaS solutions (e.g. legal issues related to sending some data to third parties). The advantages that come with the cloud are a topic for another discussion though.

Let’s have a look at how our chat application changes with this new approach. First of all, we need to choose a push provider. And yet again - Lukasz to the rescue with his latest post, describing various SaaS solutions. I decided to go with PubNub, as its documentation already contains a chat example. So it’s a no-brainer, really.

The entire solution is now enclosed within a single html file. As mentioned at the beginning you don’t need to write any server code. The additional benefit is easier development and testing, as you don’t even need a web server, just your regular web browser.

The html file looks as follows. Please note that the html components are exactly the same as in my previous solution. In addition, PubNub allows you to bind events to DOM elements, so we don’t need jQuery in this example.

<label for="nick">Your chat nickname:</label>
<input id="nick" name="nick" type="text" />
<label for="message">Message:</label>
<input id="message" maxlength="100" name="message" type="text" />
<div id="chatWindow"></div>

<div pub-key="MY_PUBLISHER_KEY" sub-key="MY_SUBSCRIBER_KEY" ssl="off" origin="pubsub.pubnub.com" id="pubnub"></div>    
<script src="http://cdn.pubnub.com/pubnub-3.1.min.js"></script>

<script type="text/javascript">
(function () {
    // declare some vars first
    var chatWin = PUBNUB.$('chatWindow'),
        nick = PUBNUB.$('nick'),
        input = PUBNUB.$('message'),
        channel = 'chat';

    // subscribe to chat channel and define a method for handling incoming messages
    PUBNUB.subscribe({
        channel: channel,
        callback: function (message) {
            // append a line break so messages don't run together
            chatWin.innerHTML = chatWin.innerHTML + message + '<br />';
        }
    });

    // submit message to channel when Enter key is pressed
    PUBNUB.bind('keyup', input, function (e) {
        if ((e.keyCode || e.charCode) === 13) {
            PUBNUB.publish({
                channel: channel,
                message: nick.value + ': ' + input.value
            });
            // clear the input after sending, as in the SignalR version
            input.value = '';
        }
    });
})();
</script>

Thursday, 13 September 2012

The simplest chat ever with SignalR (and Asp.Net MVC)

Lately I was investigating some technologies for my current project and at some point I was advised to check out the SignalR library. I admit I’ve never heard about it before (shame on me). It was mentioned to me in the context of asynchronous web applications, with the main example being a web chat. Hmm, an interesting exercise I thought! Plus it should give me the general framework understanding. Let’s do it!

It was only later that I realised the chat implementation is the most common example of SignalR usage that you can find on the Web. One more can’t hurt though, so here is my version :)

Overview

In short, the app works in the following way: people can visit a single chat url and they join our chat automatically. This means they will see all the messages posted to the chat since they joined it. They can also post messages that will be seen by other chat members.

I’ve built it as part of a sample MVC4 application, although this could probably be any Asp.Net page, as I’m not using any MVC functionality at all. The two most important parts of this app are the view and the hub. The view displays the chat components and handles user actions (i.e. sending messages to the hub). The hub listens for messages and publishes them to all connected clients.

The code

You will need to download the SignalR library for the following code to work. The easiest way to do that is by searching for 'SignalR' in your NuGet package manager.

So I have my Chat controller, which is only used to display the view. As you can see it’s doing nothing else, so the view could probably be a static html page instead:

public class ChatController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
}

It’s a web chat, so we need some input fields for nickname & message. We also need an area for displaying our conversation. All these belong naturally to our view:

<label for="nick">Your chat nickname:</label>
<input id="nick" name="nick" type="text" />
<label for="message">Message:</label>
<input id="message" maxlength="100" name="message" type="text" />
<div id="chatWindow"></div>

Now that we have basic visual components, let’s make them work. Firstly, we need to reference jQuery & SignalR libraries. I have those defined as bundles in my MVC app, but you can reference all files directly:

@Scripts.Render("~/bundles/jquery")
@Scripts.Render("~/bundles/SignalR")
<script src="/signalr/hubs" type="text/javascript"></script>
Notice the third reference - we reference a script that doesn’t physically exist. SignalR will handle that script request, generating in response the javascript code that allows us to communicate with the chat hub. The /signalr part of the uri is configurable.

Now it’s time for the javascript code (see comments for explanation):

$(function () {
    // get the reference to chat hub, generated by SignalR
    var chatHub = $.connection.chat;
 
    // add a method that can be called by the hub to update the chat
    chatHub.publishMessage = function (nick, msg) {
        var chatWin = $("#chatWindow");
        chatWin.html(chatWin.html() + nick + ": " + msg + "<br />"); // line break between messages
    };
    
    // start the connection with the hub
    $.connection.hub.start();
    
    $(document).keypress(function (e) {
        if (e.which == 13) {
            // when the 'Enter' key is pressed, send the message to the hub
            chatHub.sendMessage($("#nick").val(), $("#message").val());
            $("#message").val("");
        }
    });
});
The hub code could not be any simpler. We need to implement a method that can be called by a client wishing to send a message. The method broadcasts that message to all connected clients:
public class Chat : Hub
{
    public void SendMessage(string nick, string message)
    {
        Clients.PublishMessage(nick, message);
    }
 }
In case you haven't noticed yet: we execute a hub server method from our client javascript code (see line 17 of the js code) and the other way around, i.e. a client-side js function from our C# hub code (see line 5 of the hub code). How cool is that?!?

I must say I'm impressed by how easy it is to use for programmers. If you are wondering how it works under the hood I recommend reading the SignalR documentation on their GitHub page. And finally, here's a screenshot I've taken while testing the chat using 2 different browsers:

You can download my VC2012 solution from here: DOWNLOAD.

Monday, 10 September 2012

AdMob ads in PhoneGap apps

Adding ads to your free mobile apps is one of the most common ways of earning money in this business. There are multiple ad engine providers, but Google is still the leader. Google's ads provider for mobile devices is called AdMob. It offers easy integration with Android, iPhone/iPad and WP7.

While integration with native apps is quite straightforward and well documented, things get more complicated if you create your apps using PhoneGap and HTML5. In the past AdMob offered a "Smartphone Web" option that could be presented within your HTML code for smartphone devices. Since May 2012 it's no longer available, as Google wants to make a clear division: use AdMob for purely mobile apps and AdSense for web pages.

Since mobile apps created using PhoneGap are actually web pages displayed within the native app and rendered using default browser engine it seems that AdSense is the way to go. It doesn't work though, with one of the reasons being that Google wants to crawl the sites it displays AdSense ads on.

Solution

So how do we display Google ads in HTML code that belongs to our PhoneGap app?
The answer is: WE DON'T! :)

Instead, we modify the native app container that displays the HTML. Default PhoneGap templates for all systems (Android, WP7, iOS, ...) create a native, full-screen container for displaying your web app. We can shrink that main container and then display ads in the remaining free space using the native AdMob SDK.

The only limitation of this method is that you cannot place your ad inside your actual HTML application. However, you can still place ads in the most commonly used spaces, i.e. the header or footer.

Android example

The sample code below presents the MainActivity of an Android app that uses PhoneGap. The code adds an AdMob banner at the bottom of the mobile app:
import android.os.Bundle;
import org.apache.cordova.*;
import android.widget.LinearLayout; 
import com.google.ads.*;

public class MainActivity extends DroidGap {
    private static final String MY_AD_UNIT_ID = "YOUR_AD_UNIT_ID";
    private AdView adView;
    
    @Override
    public void onCreate(Bundle savedInstanceState) {
        // Loading your HTML as per PhoneGap tutorial
        super.onCreate(savedInstanceState);
        super.loadUrl("file:///android_asset/www/index.html");
        
        // Adding AdMob banner
        adView = new AdView(this, AdSize.BANNER, MY_AD_UNIT_ID); 
        LinearLayout layout = super.root;  
        layout.addView(adView); 
        adView.loadAd(new AdRequest());
   }    
}

WP7 example

The sample code below presents the default XAML created by the PhoneGap template for WP7, modified to display an AdMob banner at the bottom of a mobile app.
Thursday, 6 September 2012

jQuery Mobile apps flicker on page transitions

A flickering screen on page transitions seems to be a common issue in mobile applications created using jQuery Mobile (and most likely packed with PhoneGap). The most common solution you will find on the Web is to add the following piece of CSS to your html:
.ui-page {
    -webkit-backface-visibility: hidden;
}
The problem with that solution is that it breaks select lists on your forms (and apparently any other form input fields as well) on the Android system. This makes your forms unusable.

Some people create an additional workaround for that issue, i.e. they change this style property directly before the page transition occurs and disable it right after it completes. A bit messy, don't you think?

Luckily, there is a much simpler solution. I realised that this flickering is caused by the default transition effects used, i.e. 'fade' for page transitions and 'pop' for dialogs. The simplest way to fix it seems to be to disable any effects for page/dialog transitions. Here is how you can do that with a little javascript code:

$(document).bind("mobileinit", function(){
    $.mobile.defaultDialogTransition = "none";
    $.mobile.defaultPageTransition = "none";
});
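Note that the mobileinit handler must be registered after jQuery is loaded but before jQuery Mobile itself, so the script order on the page matters - a sketch (file names are placeholders):

```html
<script src="jquery.min.js"></script>
<script>
    // Must be bound before jquery.mobile loads, or mobileinit fires without us
    $(document).bind("mobileinit", function () {
        $.mobile.defaultDialogTransition = "none";
        $.mobile.defaultPageTransition = "none";
    });
</script>
<script src="jquery.mobile.min.js"></script>
```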

Monday, 20 August 2012

What to remember about when adding other languages to your website

So you want to go international with your website, huh? In this post I'll try to summarize all the possible changes that you will have to go through. The complexity of this upgrade will obviously depend on the current design, the framework you use etc. The sooner you start thinking about it the better, as this will save you some mundane work in the future.

Unfortunately, translating your website is not always as easy as creating an alternative language file. I've learned that recently while translating my Polish project www.DziennikLotow.pl into an English version, www.MySkyMap.com.

The checklist below may help you remember some important tasks that need to be completed as part of the translation process:

Domain name

Before you introduce a new language you have to think about whether you need a new domain as well. You will probably get away with a single domain if it's one of the common, worldwide-recognized domains, like .com, .net etc. In that case you still need to define your strategy for serving translated content to users accessing your website for the first time. There are several options here:
  • Default language
    Until a visitor chooses to change their language, you serve the content using a default one, e.g. English
  • Browser settings
    You can check the default browser language set by the user and serve translated content based on that (if available)
  • Language specific url
    When advertising your website you can use links that have the language defined as part of the url e.g. http://www.yourdomain.com/en/ or using subdomains e.g. http://www.en.yourdomain.com
Once the user selects the language manually or logs in you have more options e.g. storing preferred language in user session, passing additional url params etc.
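The "browser settings" option can be sketched in a few lines - a hypothetical helper (the supported-language list and the fallback to English are assumptions; on ASP.NET the input would come from Request.UserLanguages):

```csharp
using System;
using System.Linq;

public static class LanguagePicker
{
    private static readonly string[] Supported = { "en", "pl" };

    // userLanguages: entries like "pl-PL" or "en;q=0.8" from the Accept-Language header
    public static string Pick(string[] userLanguages)
    {
        return (userLanguages ?? new string[0])
            .Select(l => l.Split(';')[0].Split('-')[0].ToLowerInvariant())
            .FirstOrDefault(l => Supported.Contains(l)) ?? "en"; // default language
    }
}
```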

If you have a country-specific domain (like .pl) and you would like to expand, it is probably better to register a domain with a common ending (like .com) for the new language version. You can then determine the language to use depending on the domain name. The small disadvantage of this approach is that it may not be possible to change the website language without losing session data, unless you have some CDSSO (Cross-Domain Single Sign-On) implemented.

Alternatively, after registering a global domain you can abandon the local one (or make it redirect to the global one) and then use the mechanisms described above.

Static texts

Translating the text content is usually the first part you think about when starting to add support for a new language. This covers not only text blocks, but also things like navigation menus, alerts, image alternative texts, page meta information (keywords, description, etc.) and many more. Everything that is part of an HTML response and is language-specific should get translated.

Most modern web application frameworks support creating different language versions out-of-the-box (OOTB). However, it is the developer's responsibility to make use of the internationalization (I18N) functionality offered by the engine they use. This has to be thought about from the very beginning, so you don't end up with strings that need to be translated hard-coded in your application.

The most common way of achieving I18N is to create separate files for each language, containing all the static text content that can be displayed by your website. Each file usually contains text messages identified by a unique key. For example, this is how simple language files for English & Polish could look:

English file:

ThankYou = Thank you
Goodbye = Goodbye
Please = Please

Polish file:

ThankYou = Dziękuję
Goodbye = Do widzenia
Please = Proszę

Note that the same keys are used to identify the messages in both files.

As a popular alternative you could store all messages that require translation in a database. See my other post on that. I believe this approach is more complex than language files, and therefore I only use it if it's really required.

Warning: Javascript - if you reference any static javascript files that include messages that need to be translated, you will have trouble using the default I18N mechanism in most frameworks. One possible solution is to serve those files dynamically and inject the translated messages before returning the server response with the javascript content. You will find more alternatives on the Web.

Graphics

In general, it is not good practice to present text content using graphics. This is because graphics take more time to load than regular text and require additional HTTP requests (they can be cached, but still). Good graphic designers remember that when creating their designs. However, it may still happen that graphics on your website present some text, e.g. a logo, a fancy menu etc.

If your website contains any graphics presenting language-specific texts, you will need to create alternative versions for each language. You will also need a mechanism for displaying them depending on the current language. That part can be easily achieved with the language files used for static text content.

Data formats

When creating a localized version of website content you should also take care of the data formats used in the country you are preparing the content for. Common elements that can be presented using different formats depending on the country are numbers, date & time, monetary amounts etc.
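In .NET, for instance, most of this is handled by formatting values with an explicit culture - a small illustration (the cultures shown are just examples):

```csharp
using System;
using System.Globalization;

class FormatDemo
{
    static void Main()
    {
        decimal amount = 1234.56m;
        DateTime date = new DateTime(2012, 8, 20);

        // The same values rendered with different culture conventions:
        // en-US uses "1,234.56"-style numbers and M/d/yyyy dates,
        // pl-PL uses a space as the group separator and day-first dates
        Console.WriteLine(amount.ToString("N2", new CultureInfo("en-US")));
        Console.WriteLine(amount.ToString("N2", new CultureInfo("pl-PL")));
        Console.WriteLine(date.ToString("d", new CultureInfo("en-US")));
        Console.WriteLine(date.ToString("d", new CultureInfo("pl-PL")));
    }
}
```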

Updates

Nowadays most popular websites share updates with their users on a regular basis. Most of them have their "News" or "Blog" sections, and so probably does yours. When you add support for a new language to your website, it is important to choose your strategy. Basically you have 2 main options:
  • Default language only
    If you assume that a vast majority of your users either uses the default language or at least understands it, you can write all the updates in it, e.g. in English
  • Language specific updates
    If you can afford the time required to translate all your updates, your users will always appreciate different language versions of each update. However, don't use automatic translation, because this may have the exact opposite result. Again, most frameworks used for posting content (e.g. Wordpress) support serving multiple language versions.

Changing language

When adding support for multiple languages you will obviously need a widget for changing the language. There are plenty of types (with flags, with language names etc). The widget is usually placed in the top right corner of the page. Although some types look 'cooler', you have to remember about accessibility. In most cases simple solutions are the best ones.

Remember that besides the UI part, the widget will need to work with your language selection mechanism on the server side.

Other stuff...

The aspects of i18n described above are the most basic ones and apply in most cases. Besides those, there are also challenges more specific to your website. In my case these were:
  • Rewriting Urls
    Since I have a different domain for the new language I needed to add url rewriting rules to the .htaccess file, so the website works ok for the new domain.
  • Emails
    I had to translate existing email templates.
  • Facebook Connect
    When configuring Facebook Connect you provide an app Id. That app is specific to a single domain, so I needed to create a separate Facebook app for my new domain.
  • Facebook Fanpage
    The same situation as described in the "Updates" section, i.e. do you want to have 2 separate fan pages, or create a single one with content in the default language?
  • Analytics
    I've set up a new Google Analytics site to separate stats coming from the different domains.
As you can see, the process of adding support for a new language may require much more work than it seems at the beginning. I admit that I still have some things to do, as I implemented only the most important changes.

SCRUM or SCRUMish?

Last week a bunch of us attended an external SCRUM training here in Gdansk. The main reason for me to participate was to systematize my Scrum knowledge. I already knew some basics but I don’t have any practical experience with a real Scrum project. I’ve never worked on one at Kainos, although some projects “borrowed” some Scrum rules.

Besides the basics, I was hoping to hear about real-life examples and best practices. Also, I was keen to learn about the biggest challenges when adopting Scrum.

Our trainer was a certified Scrum expert and practitioner. He claimed he had introduced SCRUM to his current company and that they have been successfully using it for all of their software development projects ever since. So he seemed to be the right person for the job.

Now to the actual training - it was a bit too theoretical for my liking. It consisted of a lot of SCRUM theory, charts and stats (over 200 slides if I recall correctly). Probably nothing that you wouldn’t find on the web though. In my opinion it lacked some practical exercises, workshops, detailed case studies, etc.

However, if something was unclear our trainer tried to explain it by providing examples from his personal experience as a Scrum Master. Multiple times he started explaining a concept with the words "In my company we are doing it like this...". And this is actually what got me wondering if pure Scrum is even possible.

Although the trainer was obviously a great Scrum enthusiast he admitted multiple times that very often they need to adjust the process, so it is actually not strictly following Scrum guidelines. As the main reason for that he named customer expectations and the market/economy factors. Some examples of this inconsistency would be:

  • not using unit tests because their client doesn’t want to pay for them
  • having some scrum team members only partially involved in the Sprint e.g. testers, graphic designers etc
Some of their adjustments I found justified (graphic designers involved only part-time) and some not (lack of unit tests). However, I have no Scrum experience and only know some theory. If I were responsible for adopting Scrum in one of the projects, how could I know which rules I can safely change? This is obviously a question of common sense, plus I could use the experience of my colleagues, but it still leaves me with some doubts. Is it still Scrum, or only Scrumish? I appreciate the flexibility coming from the "whatever works best" approach, but those rules were invented for a reason and there is a danger of me not seeing some hidden pitfalls.

Is there anybody out there who can say they participated in a pure SCRUM project and followed all the rules? I’m really interested if this is possible.

Or maybe you just treat SCRUM as a loose set of guidelines and you pick only the most valuable ones for you?

PS. The highlight of the training was when one of our Technical Architects commented on the lack of unit testing in the projects led by our trainer with words that can be translated as: “That’s SO LAME!!!”. The comment was later repeated multiple times by other attendees on other occasions. We all agreed it should belong to official SCRUM terminology ;)

Tuesday, 3 July 2012

Continuous Integration with CRM Dynamics 2011

Lately my work colleague Thomas Swann described a bunch of tools that make the life of a CRM Dynamics 2011 developer much easier. In this blog post I will show a practical example of how we can use them to automate build & deployment and enable continuous integration in a Dynamics project.

We will create MSBuild tasks that package a CRM solution source into a single package (using the Solution Packager) and then deploy it to the CRM server using the CRM SDK. The example will also allow you to reverse the entire process, i.e. export a solution package from CRM and unpack it.

CRM access layer

Let's start by creating a layer for communicating with CRM using the SDK. You will need to reference the microsoft.xrm.sdk.dll and microsoft.xrm.sdk.proxy.dll assemblies to make the code compile.
/// <summary>
/// Class used to establish a connection to CRM
/// </summary>
public class CrmConnection
{
    private Uri _organizationUrl;
    private ClientCredentials _credentials;
    private OrganizationServiceProxy _service;

    public CrmConnection(string organizationUrl, string username, string password)
    {
        _credentials = new ClientCredentials();
        _credentials.UserName.UserName = username;
        _credentials.UserName.Password = password;

        this._organizationUrl = new Uri(organizationUrl);
    }

    public IOrganizationService Service
    {
        get
        {
            if (_service == null)
            {
                _service = new OrganizationServiceProxy(
                                     _organizationUrl, null, _credentials, null);
                _service.ServiceConfiguration.CurrentServiceEndpoint
                                    .Behaviors.Add(new ProxyTypesBehavior());
                _service.Authenticate();
            }
            return _service;
        }
    }
}

/// <summary>
/// CRM Solution Manager for performing solution operations
/// </summary>
public class SolutionManager
{
    IOrganizationService _service;

    public SolutionManager(IOrganizationService service)
    {
        _service = service;
    }

    /// <summary>
    /// Imports a solution to CRM server
    /// </summary>
    /// <param name="zipPath">Path to solution package</param>
    public void ImportSolution(string zipPath)
    {
        byte[] data = File.ReadAllBytes(zipPath);
 
        ImportSolutionRequest request = 
                  new ImportSolutionRequest() { CustomizationFile = data };

        Console.WriteLine("Solution deploy started...");
        _service.Execute(request);
        Console.WriteLine("Solution deployed");
    } 

    /// <summary>
    /// Exports a solution package from CRM and saves it at the specified location
    /// </summary>
    /// <param name="solutionName">Name of the solution to be exported</param>
    /// <param name="zipPath">Path to save the exported package at</param>
    public void ExportSolution(string solutionName, string zipPath)
    {
        ExportSolutionRequest request = new ExportSolutionRequest()
        {
            SolutionName = solutionName,
            Managed = false
        };

        Console.WriteLine("Solution export started...");

        ExportSolutionResponse response = 
                               (ExportSolutionResponse)_service.Execute(request);
        File.WriteAllBytes(zipPath, response.ExportSolutionFile);

        Console.WriteLine("Solution successfully exported");
    }
}
This gives us a CRM access layer that we can use in our code (not only in the MSBuild task code). It allows us to import solution packages into CRM, and to export them and save them to disk at a specified location.

Custom MsBuild task

Now it's time to create the custom MSBuild tasks that utilize the SolutionManager described above. Let's start by introducing a common base class for CRM tasks:
/// <summary>
/// Base class for CRM tasks, including all details required to connect to CRM
/// </summary>
public abstract class CrmSolutionTask : Microsoft.Build.Utilities.Task
{
    [Required]
    public string OrganisationUrl { get; set; }

    [Required]
    public string Username { get; set; }

    [Required]
    public string Password { get; set; }

    [Required]
    public string ZipPath { get; set; }

    protected SolutionManager SolutionManager
    {
        get 
        {
            CrmConnection connection = 
                          new CrmConnection(OrganisationUrl, Username, Password); 
            return new SolutionManager(connection.Service);
        }
    }
}
All the public properties of that class will be available as task parameters and are common to both tasks. Now let's create the Import task:
public class ImportSolutionTask : CrmSolutionTask
{
    public override bool Execute()
    {
        try
        {
             this.SolutionManager.ImportSolution(ZipPath);
        }
        catch (Exception e)
        {
            Log.LogError("Exception while importing CRM solution: " + e);
            return false;
        }
        return true;
    }
}
The ExportSolutionTask is very similar. Note that it defines an additional public property, which will also be used as a task parameter, specific to that task only.
public class ExportSolutionTask : CrmSolutionTask
{
    [Required]    
    public string SolutionName { get; set; }

    public override bool Execute()
    {
        try
        {
             this.SolutionManager.ExportSolution(SolutionName, ZipPath); 
        }
        catch (Exception e)
        {
            Log.LogError("Exception while exporting CRM solution: " + e);
            return false;
        }
        return true;
    }
}

MsBuild script

Now that we have our custom build tasks coded, let's make use of them in an MSBuild script. The following build script is stored in a CRM.build file and assumes we keep our custom tasks in a "BuildTasks" project. Values in UPPERCASE are placeholders to be replaced with your environment details:

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         DefaultTargets="DeploySolution">

  <PropertyGroup>
    <!-- CRM server & credentials -->
    <CrmServerName>SERVER_NAME</CrmServerName>
    <OrganisationUrl>ORGANIZATION_URL</OrganisationUrl>
    <Username>CRM_ADMIN_USERNAME</Username>
    <Password>CRM_ADMIN_PASSWORD</Password>

    <!-- Solution details -->
    <CrmSolutionName>CRM_SOLUTION_NAME</CrmSolutionName>
    <CrmSolutionsFolder>$(MSBuildProjectDirectory)\CrmSolutions</CrmSolutionsFolder>
    <ZipPath>$(CrmSolutionsFolder)\$(CrmSolutionName).zip</ZipPath>

    <!-- SolutionPackager location -->
    <ToolsFolder>$(MSBuildProjectDirectory)\Tools</ToolsFolder>
    <SolutionPackager>$(ToolsFolder)\SolutionPackager.exe</SolutionPackager>

    <Configuration>Debug</Configuration>
  </PropertyGroup>

  <!-- Custom CRM tasks compiled in the BuildTasks project -->
  <UsingTask TaskName="ImportSolutionTask"
             AssemblyFile="BuildTasks\bin\$(Configuration)\BuildTasks.dll" />
  <UsingTask TaskName="ExportSolutionTask"
             AssemblyFile="BuildTasks\bin\$(Configuration)\BuildTasks.dll" />

  <!-- Packs the extracted solution source into a single zip package -->
  <Target Name="PackSolution">
    <Exec Command="&quot;$(SolutionPackager)&quot; /action:Pack /zipfile:&quot;$(ZipPath)&quot; /folder:&quot;$(CrmSolutionsFolder)\$(CrmSolutionName)&quot;" />
  </Target>

  <!-- Extracts a downloaded zip package back into the solution source folder -->
  <Target Name="ExtractSolution">
    <Exec Command="&quot;$(SolutionPackager)&quot; /action:Extract /zipfile:&quot;$(ZipPath)&quot; /folder:&quot;$(CrmSolutionsFolder)\$(CrmSolutionName)&quot;" />
  </Target>

  <!-- Packs the solution and imports it to the CRM server -->
  <Target Name="DeploySolution" DependsOnTargets="PackSolution">
    <ImportSolutionTask OrganisationUrl="$(OrganisationUrl)"
                        Username="$(Username)" Password="$(Password)"
                        ZipPath="$(ZipPath)" />
  </Target>

  <!-- Exports the solution from the CRM server and extracts it -->
  <Target Name="DownloadSolution">
    <ExportSolutionTask OrganisationUrl="$(OrganisationUrl)"
                        Username="$(Username)" Password="$(Password)"
                        ZipPath="$(ZipPath)" SolutionName="$(CrmSolutionName)" />
    <CallTarget Targets="ExtractSolution" />
  </Target>

</Project>
To pack and deploy a solution to CRM run the "DeploySolution" target:
msbuild CRM.build /t:DeploySolution
To download and extract CRM solution run the "DownloadSolution" target:
msbuild CRM.build /t:DownloadSolution
Note that you can override any MsBuild property from the command line when running the script, e.g. to deploy against a different server:
msbuild CRM.build /t:DeploySolution /p:OrganisationUrl=OTHER_ORGANIZATION_URL

Continuous Integration

We are nearing the end of this post and you have probably noticed that, despite the post title, I haven't mentioned the CI part yet. Well, how you use the described MSBuild tasks in your CI process is really up to you and your project needs. In my current project we store the extracted solution in our code repository. Our CI server is configured to run the "DeploySolution" target after each change to that source, i.e. the code is packed and imported to our CRM test server. This ensures that the CRM test server always uses the latest version of the solution.

Developers who work on the solution on their own CRM instances can use the "DownloadSolution" target to automatically obtain the updated solution package and extract it, so they don't have to do that manually.

Our Import and Export tasks can also be used to automate the process of moving solutions between environments.

Sunday, 1 July 2012

IdP initiated SSO and Identity Federation with OpenAM and SAML - part IV

This is the last part of the tutorial describing how to configure IdP initiated SSO and Identity Federation with OpenAM and SAML. The tutorial consists of 4 parts:
  1. Basic concepts & use case overview
  2. Sample environment configuration with OpenAM
  3. Using OpenAM SAML services
  4. Detailed look at SAML interactions
If you don't understand any terms or abbreviations that I use here please read the first part of the tutorial together with the Security Assertion Markup Language (SAML) V2.0 Technical Overview.

Detailed look at SAML interactions

At this stage you should have working IdP and SP test environments, both configured using OpenAM. You should also have a sample ProviderDashboard web application that uses SAML functionality exposed by OpenAM. All SAML operations are triggered by hyperlinks placed within that web application that point to specific OpenAM services.

At the end of the previous chapter we described steps to verify whether our Identity Federation and SSO processes are working correctly. You have probably noticed some browser redirections or page refreshes when performing the verification tests. However, you would probably like to know what exactly is happening behind the scenes.

In this chapter I will explain how a browser communicates with the IdP (Identity Provider, i.e. ProviderDashboard) and SP (Service Provider, i.e. IssueReporter) during the SSO and identity federation process. Please note that in my examples I'm using the HTTP POST binding for sending assertions. Remember that the Artifact binding is also available in OpenAM. For more details about SAML bindings please refer to the SAML technical overview linked above.

Identity federation and SSO

The following sequence diagram presents the communication (happy path) between browser, IdP and SP while performing SSO with initial identity federation, i.e. the user requests SP access from the IdP for the first time (identities have not been previously linked). All request names begin with the HTTP method used (GET or POST) and all response names begin with the HTTP status code returned.

IdP initial authentication
This part of the flow covers the regular authentication procedure for ProviderDashboard that comes out-of-the-box after we configure an OpenAM agent to protect it. It consists of 4 HTTP requests:

  1. An unidentified user tries to access a protected resource (ProviderDashboard). The agent doesn’t recognize the user so it sends a redirection response.
  2. The user is redirected to the login screen.
  3. The user provides their ProviderDashboard credentials (user '12345') and submits the login form. The IdP OpenAM validates the provided credentials, creates the authentication cookie and redirects the user to the protected resource.
  4. The browser requests the protected dashboard; the agent recognizes the user and lets them through.

SSO initiation
At this stage we have a user authenticated with ProviderDashboard (IdP). Somewhere within the dashboard there is a hyperlink named “Report an issue”, initiating the identity federation and SSO with IssueReporter (SP). The hyperlink points to the idpssoinit endpoint exposed by the OpenAM installation used for ProviderDashboard, described in detail in the previous chapter.

When the user clicks the described hyperlink OpenAM generates the SAML assertion that will be sent to the configured SP. If you would like to see the content of the generated assertion you can check the OpenAM logs at:

\openam\debug\Federation
You need to ensure you have set the OpenAM debug level to 'Message'. All the assertion's elements are explained in the SAML technical overview.

When the SAML assertion is created the idpssoinit endpoint includes it in the HTTP response that is sent back to the browser as a result of clicking the hyperlink. The response contains an HTML form with a small piece of JavaScript that causes the form to be automatically submitted (by an HTTP POST request) as soon as the browser receives the response. A sample response body can look as follows:

<HTML>
<HEAD>
   <TITLE>Access rights validated</TITLE>
</HEAD>
   <BODY onLoad="document.forms[0].submit()">
      <FORM METHOD="POST" ACTION="<SP_Assertion_Receiver>">
         <INPUT TYPE="HIDDEN" NAME="SAMLResponse" VALUE="<SAML_response>" />
         <INPUT TYPE="HIDDEN" NAME="RelayState" VALUE="<final_destination>" />
      </FORM>
   </BODY>
</HTML>
The following table describes response segments that depend on IdP and SP configuration:
  • SP_Assertion_Receiver — Url of the SP endpoint that receives and processes assertions from the IdP (Assertion Consumer Service).
  • SAML_response — Base64 encoded SAML response containing the generated assertion.
  • final_destination — Final SP destination as specified in the hyperlink.
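Since the HTTP-POST binding transports the SAML message as plain Base64-encoded XML, you can make a captured SAMLResponse value readable with a few lines of code. A minimal Java sketch (the class name and the sample payload are illustrative, not part of OpenAM):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PostBindingDecoder {

    // Decodes the value of the hidden SAMLResponse form field into readable XML
    public static String decode(String samlResponse) {
        byte[] xml = Base64.getDecoder().decode(samlResponse);
        return new String(xml, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Stand-in payload; a real value would contain a full <samlp:Response> document
        String encoded = Base64.getEncoder().encodeToString(
                "<samlp:Response ID=\"s2ab1\"/>".getBytes(StandardCharsets.UTF_8));
        System.out.println(decode(encoded)); // prints <samlp:Response ID="s2ab1"/>
    }
}
```

The same decoding works for any value you copy out of the auto-submitted form while tracing the browser traffic.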

Identity federation and SSO
Once the Assertion Consumer Service exposed by the SP receives an assertion it should establish the federation between identities and perform SSO. The default way of doing this when using OpenAM as the SP consists of the following steps:

  1. Validate the received assertion, i.e. check the digital signature and the specified assertion conditions.
  2. Extract the persistent identifier from the assertion and search for an SP identity that has this token associated. In our scenario there should be no user found, as the federation has not been established yet.
  3. Remember the persistent identifier and the RelayState parameter value and redirect the user to the SP login page.
  4. When the user provides valid credentials, save the remembered persistent identifier against that user in the SP data store.
  5. Log the user in using the SP's default authentication mechanism.
  6. Retrieve the remembered RelayState value and redirect the user to that url.

SSO only

The following sequence diagram presents the communication between browser, IdP and SP while performing SSO for a user whose identity federation between IdP and SP is already established, i.e. it is NOT the first time the user requests SP access from the IdP.
SSO initiation
From the IdP perspective the scenario is almost the same as in the previous case. After the user has authenticated and clicked the hyperlink, the IdP generates an assertion and posts it to the SP. The only difference is that this time the IdP doesn’t need to generate the persistent identifier, because it has been generated previously.

The assertion will have the same format as the one sent in the previous scenario. The content of the assertion will also be the same, except for the attribute values that depend on the current date & time, e.g. the assertion expiration condition.

Assertion processing
In this scenario the Assertion Consumer Service exposed by the SP that receives the assertion should ensure that the federation between identities has been previously established and then perform SSO. The default way of doing this when using OpenAM as the SP consists of the following steps:

  1. Validate the received assertion, i.e. check the digital signature and the specified assertion conditions.
  2. Extract the persistent identifier from the assertion and search for an SP identity that has this token associated. In this use case there will be 1 result returned, i.e. the user that has been previously linked to the IdP user.
  3. Log in the found user using the SP's default authentication mechanism.
  4. Retrieve the RelayState value (url) from the request and redirect the user to that url.

IdP initiated logout

The following sequence diagram presents communication required to perform user logout on IdP and SP.
SLO initiation
This use case assumes that identity federation has been previously established between the IdP and SP, and that the user is authenticated with ProviderDashboard (IdP) and IssueReporter (via SSO). On the ProviderDashboard site there is a logout hyperlink pointing to the IDPSloInit service that initiates the Single Logout process.

When the user clicks that logout hyperlink OpenAM generates a SAML logout request that is then sent to IssueReporter via an HTTP redirect. The HTTP response returned for the request generated by clicking the logout link will look as follows:

HTTP/1.1 302 Moved Temporarily
Location: http://<SP_logout_service>?SAMLRequest=<SAML_request>&RelayState=<final_destination>
Content-Type: text/html
(...)
The following table describes response segments that depend on IdP and SP configuration:
  • SP_logout_service — Url of the SP endpoint that receives and processes logout requests.
  • SAML_request — Base64 encoded and deflate compressed SAML logout request.
  • final_destination — Final destination as specified in the logout hyperlink.

SP logout
When the browser receives that response it will redirect the user to the SP logout service and pass the logout request along. The default request processing on the SP side looks as follows:

  1. Validate the SAML request, i.e. check the issuer.
  2. Extract the persistent identifier from the logout request and search for an SP identity that has this token associated. In this use case there will be 1 result returned.
  3. Ensure that the found user is currently logged in using the SP authentication mechanism, e.g. by checking the cookies attached to the request by the browser.
  4. Log out the user using the SP authentication mechanism, e.g. delete the session and destroy the authentication cookies.
  5. Generate a logout confirmation response.

Logout confirmation
When the logout confirmation is generated the SP logout service sends it back to the IdP, again using an HTTP redirect:

HTTP/1.1 302 Moved Temporarily
Location: http://<openam_deployment_url>/IDPSloRedirect/metaAlias/idp?SAMLResponse=<logout_response>&RelayState=<final_destination>
Content-Type: text/html
(...)
# Possibly followed by headers destroying the SP authentication cookies
The following table describes response segments that depend on IdP and SP configuration:
  • openam_deployment_url — Url of the OpenAM deployment used by ProviderDashboard.
  • logout_response — Base64 encoded and deflate compressed SAML logout response.
  • final_destination — Final destination as specified in the logout hyperlink.

Once the IdP OpenAM receives the logout response generated by the SP it performs the following steps:
  1. Check if the request Id included in the response is correct
  2. Ensure the status code is “Success”
  3. Log out the currently logged-in IdP user
  4. Extract the RelayState value from the response and redirect the user to the final destination

IdP initiated federation termination

The following sequence diagram presents communication between browser, IDP and SP required to terminate the identity federation established previously.
This use case assumes that identity federation has been previously established between the IdP and SP and that the user is authenticated with ProviderDashboard (IdP). The communication between browser, IdP & SP is exactly the same as in the SLO use case described above, with the main difference being the service used, i.e. IDPMniInit.

SP identity federation termination
On the SP side termination consists of the following steps:

  1. Validate the request, i.e. check the issuer
  2. Extract the persistent identifier from the request and search for an SP identity that has this token associated. In this use case there will be 1 result returned
  3. Delete the association between that user and the persistent identifier
  4. Generate a confirmation response

IdP identity federation termination
Once the IdP receives the termination confirmation generated by the SP it performs the following steps:

  1. Check if the request Id included in the response is correct
  2. Ensure the status code is “Success”
  3. Retrieve the request data and extract the persistent identifier used
  4. Terminate the association between the IdP user and that identifier
  5. Extract the RelayState value from the response and redirect the user to the final destination

Troubleshooting

All use cases described in this chapter rely on the HTTP protocol and communication between the user's browser and the SAML services. If you run into trouble while configuring SAML with OpenAM, or are simply interested in what the SAML messages generated by OpenAM look like, I strongly recommend starting with this tutorial. It describes developer tools that can be used to trace browser communication, extract messages etc.
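For messages sent over the HTTP-Redirect binding (the logout and termination requests above), the SAMLRequest query parameter is DEFLATE-compressed before being Base64-encoded, so after URL-decoding it a plain Base64 decode is not enough. A small Java utility can make such values readable; the class name and the payload below are stand-ins, since real values depend on your deployment:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class RedirectBindingDecoder {

    // The redirect binding uses raw DEFLATE (RFC 1951), hence nowrap = true
    public static String decode(String samlParam) throws Exception {
        byte[] compressed = Base64.getDecoder().decode(samlParam);
        Inflater inflater = new Inflater(true);
        // Extra dummy byte as recommended by the Inflater javadoc for nowrap mode
        inflater.setInput(Arrays.copyOf(compressed, compressed.length + 1));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!inflater.finished()) {
            int n = inflater.inflate(buf);
            if (n == 0) break; // avoid looping if the stream is truncated
            out.write(buf, 0, n);
        }
        return out.toString("UTF-8");
    }

    // Demo helper: compresses and encodes a message the way an IdP would
    static String encode(String xml) {
        Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
        deflater.setInput(xml.getBytes(StandardCharsets.UTF_8));
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        return Base64.getEncoder().encodeToString(out.toByteArray());
    }

    public static void main(String[] args) throws Exception {
        String xml = "<samlp:LogoutRequest ID=\"s123\"/>"; // stand-in payload
        System.out.println(decode(encode(xml))); // prints the stand-in payload
    }
}
```

The round-trip in main mirrors what the IdP does on one side and what you do when inspecting a captured parameter on the other.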

OpenAM Support & consultancy

I did my best to make this OpenAM tutorial as complete as possible, so you can configure SSO and Identity Federation by yourself. Due to a very limited amount of free time I'm not able to respond to your questions in comments or emails. However, if you still need additional explanation or your scenario differs from what I described, you can hire me for consultancy (contact me).

Previous chapter: Using OpenAM SAML services

Friday, 22 June 2012

IdP initiated SSO and Identity Federation with OpenAM and SAML - part III

This is the third part of the tutorial describing how to configure IdP initiated SSO and Identity Federation with OpenAM and SAML. The tutorial consists of 4 parts:
  1. Basic concepts & use case overview
  2. Sample environment configuration with OpenAM
  3. Using OpenAM SAML services
  4. Detailed look at SAML interactions
If you don't understand any terms or abbreviations that I use here please read the first part of the tutorial together with the Security Assertion Markup Language (SAML) V2.0 Technical Overview.

Using OpenAM SAML services

Having the IdP and SP environments configured, it’s time to make use of the SAML functionality exposed by OpenAM. An OpenAM deployment includes several services that allow developers to easily configure the entire Identity Federation and SSO. Those services are available directly at the OpenAM base url and can be accessed by regular hyperlinks from within your web applications.

In this chapter I will describe each service and show how to make use of them.

IDPSSOInit - Identity Federation and SSO service

This service is used to initiate both Identity Federation and SSO. If the link is clicked for the first time by the current IdP user, the Identity Federation process will be invoked first, followed by SSO. Otherwise only the SSO process will be invoked.

The service takes the following parameters:

  • metaAlias — IdP MetaAlias value, by default “/idp”. To verify the correct value navigate to the hosted IdP configuration screen; the MetaAlias is defined in the Services tab. Sample value: /idp
  • spEntityID — The name given to your Service Provider, usually the SP OpenAM url. Sample value: http://www.sp.com:8090/openam
  • binding — Binding type used for sending SAML assertions. Available bindings: HTTP-Artifact & HTTP-POST. Sample value: urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST
  • RelayState — The target URL on the SP side; the user will be redirected to that url after SSO is completed. Sample value: http://www.reporter.sp.com:8020/issuereporter

A sample HREF attribute value for the SSO initiation link could look as follows:

http://www.idp.com:8080/openam/idpssoinit
?metaAlias=/idp
&spEntityID=http://www.sp.com:8090/openam
&binding=urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Artifact
&RelayState=http://www.reporter.sp.com:8020/issuereporter
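When generating such links from your own application, parameter values containing special characters (the binding URN, the RelayState url) should be URL-encoded. A hypothetical Java helper (the class and method names are mine, not part of OpenAM):

```java
import java.net.URLEncoder;

public class SsoLinkBuilder {

    // Builds an idpssoinit HREF value with URL-encoded parameter values
    public static String idpSsoInitUrl(String openamUrl, String metaAlias,
                                       String spEntityId, String binding,
                                       String relayState) throws Exception {
        return openamUrl + "/idpssoinit"
                + "?metaAlias=" + URLEncoder.encode(metaAlias, "UTF-8")
                + "&spEntityID=" + URLEncoder.encode(spEntityId, "UTF-8")
                + "&binding=" + URLEncoder.encode(binding, "UTF-8")
                + "&RelayState=" + URLEncoder.encode(relayState, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(idpSsoInitUrl(
                "http://www.idp.com:8080/openam", "/idp",
                "http://www.sp.com:8090/openam",
                "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST",
                "http://www.reporter.sp.com:8020/issuereporter"));
    }
}
```

The same pattern applies to the idpsloinit and idpmniinit links described below in this chapter.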

IDPSloInit - Single Logout service

This service is used to initiate Single Logout (SLO). It allows logging the user out of both the IdP and SP with a single click.

The service requires the following parameters:

  • binding — Binding type used for the logout request. Available bindings: HTTP-Redirect & SOAP. Sample value: urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect
  • RelayState — The target URL to be used after logout. Sample value: http://www.dashboard.idp.com:8010/providerdashboard/logout

Sample HREF attribute value for the logout link could look as follows:

http://www.idp.com:8080/openam/idpsloinit
?binding=urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect
&RelayState=http://www.dashboard.idp.com:8010/providerdashboard/logout

IDPMniInit - Federation management service

This service can be used to terminate the relation between accounts that was established during the initial Identity Federation. After it is invoked, the identities will need to be federated again during the next SSO.

The service requires the following parameters:

  • metaAlias — IdP MetaAlias value, by default “/idp”. To verify the correct value navigate to the hosted IdP configuration screen; the MetaAlias is defined in the Services tab. Sample value: /idp
  • spEntityID — The name given to your Service Provider, usually the SP OpenAM url. Sample value: http://www.sp.com:8090/openam
  • binding — Binding type used for the termination request. Available bindings: HTTP-Redirect & SOAP. Sample value: urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect
  • RelayState — The target URL to be used after termination is completed. Sample value: http://www.dashboard.idp.com:8010/providerdashboard
  • requestType — In order to terminate the relation use “Terminate”. The service also supports “NewID” but it is not used in our use case. Sample value: Terminate

A sample HREF attribute value for the federation termination link could look as follows:

http://www.idp.com:8080/openam/idpmniinit
?metaAlias=/idp
&spEntityID=http://www.sp.com:8090/openam
&binding=urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect
&RelayState=http://www.dashboard.idp.com:8010/providerdashboard
&requestType=Terminate

How to use OpenAM SAML services?

As mentioned before, all you have to do to use the OpenAM SAML services is to create hyperlinks within your web application pointing to them. In our use case the body of a sample web page for ProviderDashboard could look as follows:

<body>
   <h1>Provider Dashboard</h1>
   <a href="http://www.idp.com:8080/openam/idpssoinit?metaAlias=/idp&spEntityID=http://www.sp.com:8090/openam&binding=urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST&RelayState=http://www.reporter.sp.com:8020/issuereporter">Report an issue</a>
   <a href="http://www.idp.com:8080/openam/idpmniinit?metaAlias=/idp&spEntityID=http://www.sp.com:8090/openam&binding=urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect&RelayState=http://www.dashboard.idp.com:8010/providerdashboard&requestType=Terminate">Terminate federation</a>
   <a href="http://www.idp.com:8080/openam/idpsloinit?binding=urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect&RelayState=http://www.dashboard.idp.com:8010/providerdashboard/logout">Logout</a>
</body>
The page above contains 3 hyperlinks:
  1. Report an issue - initiates Identity Federation and SSO with IssueReporter
  2. Terminate federation - terminates the Identity Federation established with the IssueReporter app. It will only work if the federation has been previously established, otherwise it will cause an error
  3. Logout - initiates logout from both ProviderDashboard and IssueReporter
Copy the hyperlinks and place them anywhere within your sample ProviderDashboard web application. Now it's time to verify whether our solution works.

Solution verification

Identity Federation
  1. Navigate to http://www.dashboard.idp.com:8010/providerdashboard
  2. Log in as user '12345'
  3. Click on the 'Report an issue' link
  4. Because you are doing this for the first time you will be redirected to the IssueReporter login screen.
  5. Log in to IssueReporter using the 'filip' account. OpenAM will establish federation between the IdP and SP accounts (i.e. between users '12345' and 'filip').
  6. You should be redirected to your IssueReporter app
Single Logout
  1. Go back to http://www.dashboard.idp.com:8010/providerdashboard
  2. Click the 'Logout' link
  3. You should be redirected to IdP OpenAM login screen
  4. Try to access http://www.reporter.sp.com:8020/issuereporter
  5. You should be redirected to SP OpenAM login screen
Federation termination
  1. Navigate to http://www.dashboard.idp.com:8010/providerdashboard
  2. Log in as user '12345'
  3. Click on the 'Terminate Federation' link
  4. Click on the 'Report an issue' link
  5. Because you terminated the original federation you will be redirected to the IssueReporter login screen so you can establish a new federation.
  6. Log in to IssueReporter using the 'filip' account to recreate the original federation. OpenAM will establish federation between the IdP and SP accounts again (i.e. between users '12345' and 'filip').
  7. You should be redirected to your IssueReporter app
Congratulations! You have now configured a working example of IdP initiated SSO and Identity Federation with OpenAM and SAML. But are you really sure what is going on behind the scenes? In the next chapter I will explain the SAML communication and the messages exchanged between the IdP and SP in detail.

Previous chapter: Sample environment configuration with OpenAM
Next chapter: Detailed look at SAML interactions

Thursday, 21 June 2012

IdP initiated SSO and Identity Federation with OpenAM and SAML - part II

This is the second part of the tutorial describing how to configure IdP initiated SSO and Identity Federation with OpenAM and SAML. The tutorial consists of 4 parts:
  1. Basic concepts & use case overview
  2. Sample environment configuration with OpenAM
  3. Using OpenAM SAML services
  4. Detailed look at SAML interactions
If you don't understand any terms or abbreviations that I use here please read the first part of the tutorial together with the Security Assertion Markup Language (SAML) V2.0 Technical Overview.

Sample environment configuration with OpenAM

Now that you know what we will try to achieve in this tutorial let's try to configure our test environment.

Prerequisites

Web containers
Our test environment will consist of 2 instances of OpenAM, each protecting one web application. The first instance will act as an Identity Provider (IdP) and the second as a Service Provider (SP). This gives us 4 web containers (I used Tomcat 6.x) that I've installed on a single machine using different ports:
  • 8010 - tomcat hosting sample ProviderDashboard application
  • 8020 - tomcat hosting sample IssueReporter application
  • 8080 - tomcat hosting OpenAM protecting ProviderDashboard and acting as IdP
  • 8090 - tomcat hosting OpenAM protecting IssueReporter and acting as SP
OpenAM default configuration
In this tutorial I assume you already have all those Tomcats prepared. This means you have sample applications deployed that represent the ProviderDashboard and IssueReporter web applications. Those webapps should also be protected by OpenAM agents communicating with the appropriate OpenAM instance. However, at this stage there is no SSO or identity federation configured for those instances (that is what we are about to do).

If you don't know how to deploy OpenAM please refer to the following guide: How to deploy OpenAM.
For now, you can use any Hello World webapp as ProviderDashboard and IssueReporter. When configuring the OpenAM agents please create a realm called "test", as this is what we'll be using in this tutorial.

In your OpenAM instances you should also have registered the users that we use in our use case. This means in the IdP OpenAM you should have a user "12345" and in the SP there should be a user "filip".

Hosts
My hosts file has 4 different host names set, all pointing to 127.0.0.1, so I can access all of the Tomcats using different host names. The following table summarizes the urls I use in this tutorial:

  • ProviderDashboard application: http://www.dashboard.idp.com:8010/providerdashboard, protected by the OpenAM instance at http://www.idp.com:8080/openam
  • IssueReporter application: http://www.reporter.sp.com:8020/issuereporter, protected by the OpenAM instance at http://www.sp.com:8090/openam

If you are using Windows you can configure those hosts by adding the following line to the file
C:\Windows\System32\drivers\etc\hosts
127.0.0.1 www.dashboard.idp.com www.reporter.sp.com www.idp.com www.sp.com

Once you have all the prerequisites fulfilled you can start configuring your SAML communication. This requires configuration changes in both OpenAM instances.

Hosted Identity Provider

First, we will configure a hosted IdP in the ProviderDashboard OpenAM:
  1. Navigate to http://www.idp.com:8080/openam
  2. Login as amadmin to OpenAM web console
  3. On the main screen (“Common tasks” tab) choose “Create Hosted Identity Provider” link from “Create SAMLv2 Providers” section
  4. The following form appears:
    The fields to be populated are marked with red numbers
  5. Populate the form:
    1. Realm
      Select the 'test' realm. Each IdP is directly related to a Realm.
    2. Name – unique name of your IdP
      You can use the OpenAM instance url as the name, assuming you will only have 1 IdP per instance. If you want to have more IdPs per OpenAM instance use realm names.
    3. Signing key
      If you want to digitally sign all your SAML messages select a signing key. OpenAM offers a test key for testing purposes. For production needs you’ll have to generate a new one.
    4. Circle of trust
      Provide a name for your circle of trust. All SAML providers that want to communicate with each other need to belong to the same circle of trust.
    5. Attribute mapping
      If the IdP and SP identity stores have different schemas but store the same kind of information, you can define an explicit mapping between them. E.g. an email address can be stored in the IdP as ‘email’ and in the SP as ‘mailAddress’. OpenAM suggests attributes available for your IdP.
  6. Click ‘Configure’ button
  7. On the confirmation screen click ‘Finish’

Remote Identity Provider

Next, we will register IdP created in previous step as a remote IdP in IssueReporter OpenAM:
  1. Navigate to http://www.sp.com:8090/openam
  2. Login as amadmin to SP OpenAM web console
  3. On the main screen (“Common tasks” tab) choose “Register Remote Identity Provider” link from “Create SAMLv2 Providers” section
  4. Populate the form:
    • Url of metadata
      Use the following format: http://<idp-openam-url>/saml2/jsp/exportmetadata.jsp
    • Circle of Trust
      Provide the same name as used on IdP side
  5. Click ‘Configure’ button
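Before clicking ‘Configure’ it may be worth verifying that the metadata URL is reachable and actually returns SAML metadata. A quick sanity check in Python - the URL format is the one given above, and the root element name (EntityDescriptor) comes from the SAML 2.0 metadata schema:

```python
import urllib.request
import xml.etree.ElementTree as ET

def check_saml_metadata(url):
    """Fetch SAML metadata from the given URL and return its entityID.

    Raises if the document cannot be fetched or is not SAML metadata.
    """
    with urllib.request.urlopen(url) as response:
        root = ET.fromstring(response.read())
    # SAML 2.0 metadata documents are rooted at md:EntityDescriptor.
    if not root.tag.endswith("EntityDescriptor"):
        raise ValueError("Not a SAML metadata document: %s" % root.tag)
    return root.get("entityID")

# Example (against the IdP instance configured above):
# check_saml_metadata("http://www.idp.com:8080/openam/saml2/jsp/exportmetadata.jsp")
```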

Hosted Service Provider

Now, it is time to configure the SP hosted in IssueReporter OpenAM:
  1. Navigate to http://www.sp.com:8090/openam
  2. Login as amadmin to SP OpenAM web console
  3. On the main screen (“Common tasks” tab) choose “Create Hosted Service Provider” link from “Create SAMLv2 Providers” section
  4. Populate the form:
    • Name – use url of this SP OpenAM instance
    • Circle of Trust – select the same one as for IdP
    • Use default attribute mapping from Identity Provider - checked
  5. Click ‘Configure’ button

Remote Service Provider

Next, we will register the remote SP in ProviderDashboard OpenAM:
  1. Navigate to http://www.idp.com:8080/openam
  2. Login as amadmin to OpenAM web console
  3. On the main screen (“Common tasks” tab) choose “Register Remote Service Provider” link from “Create SAMLv2 Providers” section
  4. The following form appears:
  5. Populate the form:
    1. Realm
      Select the same realm as for IdP.
    2. Url of metadata
      If another instance of OpenAM is used as the SP, then the URL pointing to the service metadata has the following format: http://<sp-openam-url>/saml2/jsp/exportmetadata.jsp
      In our case it is: http://www.sp.com:8090/openam/saml2/jsp/exportmetadata.jsp
    3. Circle of trust
      Select the same as used for IdP
    4. Attribute mapping
      If required use the same mappings as in IdP
  6. Click ‘Configure’ button

General configuration

You can always edit the providers defined in the previous steps. To do that:
  1. Navigate to either SP or IdP OpenAM and login as admin to OpenAM web console
  2. Click on ‘Federation’ tab
  3. You should see a screen listing all defined circles of trust and all entities (IdPs and SPs). Sample screen for the IdP OpenAM instance:
  4. Click on the entity you’d like to update e.g. hosted IdP
  5. You will be redirected to the screen where you can update default and advanced entity configuration

Environment setup validation

At this stage our test environment should be ready to perform SAML Identity Federation and Single Sign On between our sample ProviderDashboard and IssueReporter applications. To validate the setup, perform the following steps:
  1. Navigate to http://www.idp.com:8080/openam
  2. Login as amadmin to OpenAM web console
  3. On the main screen (“Common tasks” tab) choose “Test Federation Connectivity”
  4. Select the Circle of Trust (COT) that you'd like to test
  5. The following screen should appear:
    You can select IdP and SP that you'd like to perform the test for and click 'Start Test' button
  6. A warning will be displayed that the user will be logged out - click 'OK'
  7. Now the actual test begins. It consists of the following steps:
    1. Authentication for Identity Provider: http://www.idp.com:8080/openam
    2. Authentication for Service Provider: http://www.sp.com:8090/openam
    3. Testing for the ability to link accounts
    4. Testing for single logout
    5. Testing Single Sign On
    6. Testing for account termination
    You will be guided through the entire test. While testing account linking you will be asked to provide credentials for the IdP (12345/password) and then for the SP (filip/password). Next the SSO test is performed, which requires authentication with the IdP credentials once again. At the end of the test you should see a success message.
If everything worked fine you should be able to start using the SAML functionality exposed by OpenAM in your applications. Please note that although your test environment is correctly configured, the sample web applications are not making use of SAML features yet.

In the next chapter I will show how to make use of SAML features exposed by OpenAM in your web applications.

Previous chapter: Basic concepts & use case overview
Next chapter: Using OpenAM SAML services

Wednesday, 20 June 2012

Add-PSSnapin : Cannot load Windows PowerShell snap-in (...) This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded.

Lately I've been working on custom PowerShell cmdlets. I created my custom snap-in, installed it using InstallUtil and then tried to execute Add-PSSnapin. Unfortunately, I got the following error:

Add-PSSnapin : Cannot load Windows PowerShell snap-in because of the following error: Could not load file or assembly or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded.

Solution

This may happen if you work on your cmdlets on 64-bit Windows, like I did. The PowerShell 2.0 host runs on CLR 2.0, so it cannot load a snap-in assembly built for .NET 4. To solve it, create a file called powershell.exe.config with the following content, which enables the legacy runtime activation policy and allows the .NET 4 runtime to be loaded:

<?xml version="1.0"?>
<configuration>
    <startup useLegacyV2RuntimeActivationPolicy="true">
        <supportedRuntime version="v4.0.30319"/>
        <supportedRuntime version="v2.0.50727"/>
    </startup>
</configuration>
Place the file in the PowerShell folder, then close and reopen the PS console and you are ready to go. The location of the PowerShell folder depends on whether you want the 64- or 32-bit version (note that on 64-bit Windows, System32 contains the 64-bit binaries and SysWOW64 the 32-bit ones):
64 bit - %windir%\System32\WindowsPowerShell\v1.0
32 bit - %windir%\SysWOW64\WindowsPowerShell\v1.0

Monday, 18 June 2012

IdP initiated SSO and Identity Federation with OpenAM and SAML - part I

In this tutorial I will describe how to configure IdP initiated SSO and Identity Federation with OpenAM and SAML. The tutorial consists of 4 parts:
  1. Basic concepts & use case overview
  2. Sample environment configuration with OpenAM
  3. Using OpenAM SAML services
  4. Detailed look at SAML interactions

Basic concepts

In this section I will describe the basic concepts and terms used in this tutorial. It is important that you understand them before you start configuring the end solution.

OpenAM

OpenAM is an access manager that evolved from OpenSSO after Sun abandoned it. It provides open source authentication, authorization, entitlement and federation software. This tutorial assumes you have basic knowledge about OpenAM and the functionality it offers. You should be able to deploy and initially configure an OpenAM instance on your local machine (not covered in this tutorial).

More on OpenAM on their website: http://www.forgerock.com/openam.html.

SAML

The Security Assertion Markup Language (SAML) standard defines an XML-based framework for describing and exchanging security information between online business partners. Currently OpenAM supports SAML version 2. The best way to start learning about SAML v2 is the Security Assertion Markup Language (SAML) V2.0 Technical Overview. That document covers all of the following basic concepts described in my tutorial.

Web Single Sign On

Web SSO is an approach that allows single sign on across multiple web applications that have established a common agreement on how to exchange user information. End users provide their credentials only once and are recognized by all of the webapps, even if they are deployed in different domains and use different identity stores. SSO also allows all of the webapps to share a single identity store.

Identity Federation

Identity federation is the process of linking users defined in different identity stores. Such a link enables implementation of Single Sign On. What is important from a privacy perspective is that, in order to establish federation, neither party has to know anything about the user attributes stored by the other.

Use case overview

This section describes the use case we will try to implement in this tutorial. I believe in learning by example so let's describe our use case using one:

I've recently registered a new internet domain with one of the online providers. The provider offers a web based customer dashboard - let's call it the ProviderDashboard. I can use my customer number (12345) and password to log into that dashboard to see the list of my domains, invoices etc.

My provider also has an agreement with an external web application that offers technical issue reporting (e.g. when the domain is not available) - let's call it the IssueReporter. I already have an existing account on that website, because I used it in the past for other reasons. My login for that website is "filip".

So, whenever I log into the ProviderDashboard I have a link called "Report an issue" that takes me directly to the IssueReporter app. After I click the link I'm automatically logged into the IssueReporter app using the correct username i.e. filip. However, this relation is not symmetrical - if I log in directly to IssueReporter I will not be automatically logged into ProviderDashboard.

So, let's have a look at the use case flow in detailed steps (happy path):

  1. I log into ProviderDashboard using account number 12345 and password
  2. I click on the link within the dashboard called "Report an issue"
  3. I'm redirected to IssueReporter login screen
  4. I provide valid IssueReporter credentials i.e. filip as username
  5. My ProviderDashboard account (with identifier 12345) is linked to IssueReporter account for filip
  6. I am redirected to IssueReporter application and automatically logged in using filip account
Important: the steps shown in green only happen the first time I click on the link and belong to the Identity Federation process. All subsequent redirections from ProviderDashboard to IssueReporter will not require them to be completed. All remaining steps are part of the usual SSO process.

SAML terminology

In the scenario described above the ProviderDashboard application acts as an Identity Provider (IdP), whereas the IssueReporter acts as a Service Provider (SP). IdP and SP are terms defined in SAML, and OpenAM uses them as well.

Our use case reflects the “IdP initiated SSO” scenario described in detail in the SAML Technical Overview document linked earlier. In general: the IdP produces assertions about the user identity and passes them to the SP. The IdP also initiates Identity Federation when required i.e. when the link is clicked for the first time.

The following diagram presents IdP initiated SSO. It doesn’t cover Identity Federation actions (assume identities are already federated):

Note: On the diagram SAML assertions about identity are passed from the IdP to the SP using an HTTP POST request (steps 4 & 5). It is also possible to use SAML artifacts instead. When using SAML artifacts the IdP passes only an artifact ID to the SP instead of the entire identity assertion. The SP then obtains the assertion by making a SOAP call to the IdP and passing the artifact ID.

In the next part of this tutorial I will describe how to configure sample environments using OpenAM for both IdP and SP.

Next: Sample environment configuration with OpenAM

Can't connect remotely to SQL Server 2008 R2

By default, after the installation of SQL Server 2008 R2, remote access to the database server is disabled and you can only access it locally. In order to change that, open SQL Server Configuration Manager, expand the "SQL Server Network Configuration" node, select the server instance (the default is MSSQLSERVER) and check the status of TCP/IP:



In order to allow remote access, double click the entry for the TCP/IP protocol and change the value of the "Enabled" property to "true". Save your changes and restart the service.
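Once TCP/IP is enabled and the service restarted, you can check reachability from a client machine before digging into application-level problems. A small sketch in Python - 1433 is SQL Server's default TCP port, and 'dbserver' is a placeholder host name:

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (replace 'dbserver' with your actual server name):
# print(can_connect("dbserver", 1433))
```

If this returns False while local access works, also check that the Windows firewall allows inbound traffic on the port.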

Thursday, 19 January 2012

How I didn't get a Windows Phone 7 for free

Some time ago Codeguru.pl (a Polish portal for developers, with very close ties to Microsoft) announced a competition called Geek Club. The basic rule was quite simple: write 5 apps for Windows Phone and get an actual device for free. There were 2 additional requirements:
  1. In order to be accepted by Codeguru the app needs to be published on Marketplace first
  2. The app needs to make use of at least 2 listed features (like GPS, SQL CE data storage etc)
At first I didn't think about participating, mainly for 3 reasons:
  • Unless you're a student, you have to pay $100 to be able to publish apps to the Marketplace
  • I didn't really believe in WP7 (low market share, pessimistic prognoses)
  • Writing 5 good apps seemed to be a lot of effort, especially since I didn't know Silverlight
However, after Maciej Grabek gave us a presentation on Windows Phone development and showed how easy it is, I decided to give it a try. Also, I started to hear more & more positive opinions about WP7 from my colleagues. The turning point was when I learned that the apps don't have to be good at all! :) They don't need to do anything useful or even funny, as long as they fulfill the 2 requirements presented above. This means I could treat the whole challenge as a good learning experience rather than a serious app development project. I didn't have to care much about functionality, like I normally would when working on an application. Instead, I could focus on technical details, so it was rather a technology evaluation project.

Having said all that about the good learning opportunity etc., I must admit I was still hoping to get that free phone :) But I didn't... I submitted my 5 apps (even 6, just in case) at the end of last year, but so far only 2 have been checked by the Codeguru team and since yesterday there are no more phones (the pool was limited). Apparently many more apps were submitted than the Codeguru team could test.

Anyway, I'm still glad I took part in that competition. Here are the most important benefits:
  • I've learned the fundamentals of WP7 development, Silverlight basics, the app lifecycle & the Marketplace submission process
  • I know how to make use of basic features: touch screen interaction, GPS, accelerometer, microphone, internal storage (SQL CE), playing sounds, Bing Maps, ...
  • I got convinced that the WP7 platform is actually quite nice and user-friendly
  • I can exchange the points I've earned for my apps (or will earn once Codeguru finally tests them) for other prizes like free Microsoft exams, the Office suite etc.
  • I had lots of fun :)
Now for the bad part:
  • The competition lacked transparency. Theoretically the apps to be tested by the Codeguru team were put into a FIFO queue. However, there were multiple complaints from developers saying that they had been waiting a long time for any response while others, who submitted their apps later, already knew their results. The submission process did not leave any trace of your submission (no confirmation email, just a generic message on a website) and there was no tracking system. As a result people didn't know what was happening with their apps.
  • Too few testers - since I'm still waiting for my apps to be tested, I assume they didn't have enough resources and didn't expect such high interest
  • Because of the competition rules the Marketplace was flooded with crappy, useless apps that were created just to get the phone (including some of my apps, I must admit)
To summarize: I think we should still be grateful to Codeguru for organizing this and offering us a very motivating way to learn. Many participants would never have learned how easy WP7 development is if there had been no competition like this. However, next time they organize a similar contest they should focus on transparency and provide enough resources to manage what they created. After all, they represent Microsoft.

PS. The other interesting fact I've learned is that the more stupid your app is, the more downloads you'll get :D

PS2. All my apps created for that competition are available on my Marketplace site. Guess which one is the most popular?