Wednesday, 17 January 2018

Object Recognition using Microsoft CNTK

Lately I've been working on some advanced object recognition scenarios and was looking for the right tools for the job. Obviously I thought about using some kind of deep learning solution, but I was afraid of its complexity. I'm not a data scientist and have very limited knowledge of neural networks. I needed a tool that I could use as a black box: feed some data into it and consume the result, without even understanding what's going on under the hood.

Domain experts advised me to check out the Microsoft Cognitive Toolkit (CNTK). The big advantage of CNTK is its rich documentation and the end-to-end examples available. For my scenario (object recognition) I found the following resources particularly useful:

  • Object Detection using CNTK

    This tutorial was my entry point. It gives exact, step-by-step instructions on how to build an object recognition solution, together with sample data sets. It also provides some scientific background for those who want to learn how this works.

  • End-2-end solution

    This one builds on the original tutorial from the first link, but takes it further by presenting a complete E2E solution, including:

    • managing reference pictures
    • building a repository for metadata
    • training the object recognition model
    • managing training results
    • advanced reporting based on Power BI
    • publishing the CNTK model as a web service, so recognition results can be easily consumed

  • Upgrade to Faster R-CNN

    The first two tutorials use an algorithm called Fast R-CNN. It works well, but CNTK also ships an improved version called Faster R-CNN. This tutorial provides scripts that use the improved algorithm. Because it is based on the same sample dataset, you can easily compare the results of both tutorials. From my experience, Faster R-CNN is not actually quicker, but it does provide better recognition rates.

After going through these three tutorials you should be able to build an object recognition solution on your own, without any data-science background.

Wednesday, 19 October 2016

Force Service Fabric application delete

Usually you can easily delete an application from your Service Fabric cluster using the cluster explorer. However, there are times when this doesn't work - for example when one of the services breaks and can't heal itself.
In such a situation you can use the following PowerShell script to force the delete operation:
Connect-ServiceFabricCluster -ConnectionEndpoint <your-cluster-connection-endpoint>

# Go through every node in the cluster and force-remove all deployed replicas of the application
$nodes = Get-ServiceFabricNode
foreach($node in $nodes)
{
    $replicas = Get-ServiceFabricDeployedReplica -NodeName $node.NodeName -ApplicationName "fabric:/Your-App"
    foreach ($replica in $replicas)
    {
        Remove-ServiceFabricReplica -ForceRemove -NodeName $node.NodeName -PartitionId $replica.PartitionId -ReplicaOrInstanceId $replica.ReplicaOrInstanceId
    }
}
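Once the replicas have been force-removed on every node, the stuck delete should go through - you can retry it from the cluster explorer, or use the Remove-ServiceFabricApplication cmdlet from the same PowerShell session.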

Monday, 28 September 2015

VSO Build vNext – versioning assemblies with MSBuild

In this blog post I will show you how to version assemblies during the Visual Studio Online vNext build.

Why version assemblies?

Versioning assemblies helps you detect which library version has been deployed to your environment. You could obviously do it manually for each of your projects, but that is a mundane and error-prone task. Instead, I'll show you how to easily automate the process.

As a result, each project's assembly will be automatically versioned. The version number format will follow the recommended standard:

[Major version[.Minor version[.Build Number[.Revision]]]]

How to version assemblies?

By default the assembly version is defined in the AssemblyInfo.cs file, which can be found in the project's Properties folder:
[assembly: AssemblyVersion("1.0.0.0")]
If you have many projects it's much easier to keep the common assembly information in a single file shared by all of them. This way you only need to update the property once and it will be picked up by every project.

To do that, create a separate file called CommonAssemblyInfo.cs and place it in the solution's root folder. Move the AssemblyVersion definition to that file. Then, link the file from all projects:

  1. Right-click the project
  2. Choose Add > Existing Item…
  3. Select the CommonAssemblyInfo.cs file you created
  4. Click the small arrow next to the Add button
  5. Select Add as Link
  6. Drag the linked file to the Properties folder
The result should look like this:
Obviously you can move more common properties to the CommonAssemblyInfo file, e.g. AssemblyCompany, AssemblyCopyright, etc.
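For illustration, a minimal CommonAssemblyInfo.cs could look like this (the company and copyright values are placeholders - adjust them to your needs):
using System.Reflection;

// Shared assembly metadata, linked into every project in the solution
[assembly: AssemblyCompany("Your Company")]
[assembly: AssemblyCopyright("Copyright © Your Company")]
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
Remember to remove the attributes you moved here from each project's own AssemblyInfo.cs, otherwise the compiler will complain about duplicate assembly attributes.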

Automate versioning

Now that the assembly version is stored in a single file, we can easily automate the versioning process. The idea is that before executing the actual build we run another MSBuild script that updates the version in the common file. The script uses the AssemblyInfo task from the MSBuild Community Tasks.

The script takes two parameters: BuildId and Revision:

<?xml version="1.0" encoding="utf-8" ?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- path to MSBuild community tasks --> 
    <MSBuildCommunityTasksPath>.</MSBuildCommunityTasksPath> 
  </PropertyGroup>
  <Import Project=".\tasks\MSBuild.Community.Tasks.Targets"/>
  <Target Name="UpdateAssemblyInfo">
    <Message Text="Updating Assembly versions with Build Id $(BuildId) and Revision $(Revision)"></Message>
    <!-- update assembly and file versions in C-Sharp CommonAssemblyInfo  --> 
    <AssemblyInfo OutputFile="..\CommonAssemblyInfo.cs"
                     CodeLanguage="CS"
                     AssemblyVersion="1.0.$(BuildId).$(Revision.Replace('C',''))"
                     AssemblyFileVersion="1.0.$(BuildId).$(Revision.Replace('C',''))" >                  
    </AssemblyInfo>
  </Target>
</Project>
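One thing to keep in mind: as far as I know, the AssemblyInfo task regenerates the whole output file from the attributes you pass to it, so if you keep additional shared attributes (AssemblyCompany, AssemblyCopyright, etc.) in CommonAssemblyInfo.cs, consider passing them to the task as well so they are not lost.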
The script must be added to your source control together with the MSBuild Community Tasks files, so you can reference it in your VSO build definition.

VSO Build definition

Now that we have all the components it's time to define the VSO build. The first step is the execution of our custom build script:
As you can see, we use two environment variables here:
  • $(Build.BuildId) – the id of the build
  • $(Build.SourceVersion) – the latest version control changeset included in this build. If you are using TFS version control it will have the format "C1234", so you need to remove the "C" prefix (see the build script above and the example below)
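To give a concrete (hypothetical) example: a build with Id 1234 and source version "C5678" would stamp the assemblies with version 1.0.1234.5678. How the two values reach the script depends on how you configure the build step; typically they are passed as MSBuild arguments, e.g. /t:UpdateAssemblyInfo /p:BuildId=$(Build.BuildId) /p:Revision=$(Build.SourceVersion).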
Then we can use the regular MSBuild step to build the solution. All projects that link the CommonAssemblyInfo.cs file should now produce assemblies with the correct version number. From here you can add more steps to the build definition: running unit tests, publishing artifacts, etc.

Alternative approach

You can also achieve the same thing using PowerShell instead of MSBuild. There is a good example here. Which one you choose is a matter of personal preference – I prefer my solution, as it requires less code.

Tuesday, 4 November 2014

SharePoint 2013 sticky footer

Adding a footer to a SharePoint masterpage may be a bit tricky, since SharePoint automatically recalculates the height and scrolling properties of some default div containers on page load. Today I will show how to add a so-called "sticky footer" to a SharePoint masterpage using JavaScript. The sticky footer is always displayed at the bottom of the page, even if there is little content. We will base our changes on the SharePoint 2013 "Seattle" masterpage.

Masterpage structure changes

First we need to add a footer container (div) to our masterpage that will hold the footer content. We add it at the end of the default "s4-workspace" div, right after the "s4-bodyContainer" div:
<div id="s4-workspace" class="ms-core-overlay">
    <div id="s4-bodyContainer">
    (...)
    </div>
    <div id="footer">Your footer content goes here</div>
</div>
Now you need to populate your footer with content and set its CSS properties, e.g. height.

Javascript code

Now that we have our footer container, let's position it with some JavaScript code:
// generic function for resizing an element within its container
function adjustContainerHeightToObject(container, content, offset) {
    var $container = $(container);
    var $content = $(content, $container);
    if ($container.height() > $content.height()) {
        $content.height($container.height() + offset);
    }
}

// specific function for resizing the s4-bodyContainer div
function resizeMainContent() {
    // as offset we pass the negative value of our footer's height
    adjustContainerHeightToObject('#s4-workspace', '#s4-bodyContainer', -50); // for a footer with 50px height
}

// call the resize function on page load
_spBodyOnLoadFunctionNames.push("resizeMainContent");

_spBodyOnLoadFunctionNames.push() vs. $(document).ready()

Notice that instead of using the regular jQuery ready() event we are using SharePoint's custom mechanism for calling a function after the page loads. This ensures that all of SharePoint's own resizing code has already executed before our function runs.

Wednesday, 13 August 2014

SSIS Error: The requested OLE DB provider Microsoft.Jet.OLEDB.4.0 is not registered. If the 64-bit driver is not installed, run the package in 32-bit mode.

I have an SSIS solution that reads from an Excel file. I recently deployed it to a different server, tried executing it from Visual Studio, and got the following error:

[Excel Source [2]] Error: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "Excel Connection Manager" failed with error code 0xC0209303. There may be error messages posted before this with more information on why the AcquireConnection method call failed.

[Connection manager "Excel Connection Manager"] Error: The requested OLE DB provider Microsoft.Jet.OLEDB.4.0 is not registered. If the 64-bit driver is not installed, run the package in 32-bit mode. Error code: 0x00000000. An OLE DB record is available. Source: "Microsoft OLE DB Service Components" Hresult: 0x80040154 Description: "Class not registered".

Solution

As the error message suggests, you need to run the package in 32-bit mode (the Microsoft.Jet.OLEDB.4.0 provider is only available as a 32-bit driver). To do that:
  1. Right-click your SSIS project
  2. Click 'Properties'
  3. Select the 'Debugging' node
  4. Set the 'Run64BitRuntime' property to False
  5. Save and re-run your package

Tuesday, 22 April 2014

SSIS integration with Dynamics CRM using ExecuteMultipleRequest for bulk operations

There are several tutorials on the Web explaining how to integrate SSIS with Dynamics CRM using the script component. All of them, however, show only the basic setup, where records from a data source are processed one by one when executing CRM commands (e.g. creating CRM records). In this post I would like to show you how to leverage the ExecuteMultipleRequest class from the CRM SDK to execute bulk operations for records coming from the SSIS data source.

Tutorial scenario

  1. First we will create a simple database with one table that stores user names
  2. Then we will create an SSIS project
  3. Next, we will add our database table as a data source, so SSIS can read the user information
  4. Then we will add a script component that creates a contact in CRM for each user from the table
  5. Finally, we will modify the script to import the CRM contacts in batches
  6. At the end we will compare the execution times of both scripts

Basic setup

Database
Let's create a basic database table with only two columns:
CREATE TABLE Users (
    FirstName VARCHAR(100) NOT NULL,
    LastName VARCHAR(100) NOT NULL
)
Now populate the table with some dummy data; in my case I added 1000 records.

SSIS project
  1. Open "Sql Server Data Tools" (based on Visual Studio 2010)
  2. Got to File -> New -> Project...
  3. Select "Integration Services Project", provide project name and click OK
  4. When the project is created add a Data Flow task to your main package:
Data Source
  1. Double-click your Data Flow task to open it
  2. Double-click "Source Assistant" in the toolbox
  3. On the first screen of the wizard select "SQL Server" as the source type and click "New..."
  4. On the second screen provide your SQL Server name and authentication details and select your database
  5. A new block representing your DB table will be added to your Data Flow. It shows an error icon because we haven't selected the table yet. You will also see a new connection manager representing your DB connection:
  6. Double-click the new block, select the Users table we created from the dropdown and hit OK. The error icon should disappear
Script component
  1. Drag and drop the Script Component from the toolbox onto your Data Flow area
  2. Create a connection (arrow) from your data source to the script:
  3. Double-click the script component to open it
  4. Go to the "Input Columns" tab and select all columns
  5. Go to the "Inputs and Outputs" tab and rename "Input 0" to "ContactInput"

1-by-1 import

Now that we have the basic components set up, let's write some code! In this step we will create basic code for importing contacts into CRM. I'm assuming you have basic knowledge of the CRM SDK, so the CRM-specific code will not be explained in detail.

Open the script component created in the previous steps and click "Edit Script...". A new instance of Visual Studio will open with a new, auto-generated script project. By default the main.cs file will be opened - this is the only file you need to modify. However, before modifying the code you need to add references to the following libraries:

  • Microsoft.Crm.Sdk.Proxy
  • Microsoft.Xrm.Client
  • Microsoft.Xrm.Sdk
  • System.Runtime.Serialization
Now we are ready to write the code. Let's start by creating a connection to your CRM organization. This is done in the existing PreExecute() method like this:
OrganizationService _service;

public override void PreExecute()
{
    base.PreExecute();

    // parse the connection string and create the organization service
    var crmConnection = CrmConnection.Parse(@"Url=https://******.crm4.dynamics.com; Username=******.onmicrosoft.com; Password=*********;");
    _service = new OrganizationService(crmConnection);
}
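Note that the connection is created in PreExecute() on purpose - this method runs once, before any rows are processed, so the service object is created a single time and then reused for every row.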
Now that we have the connection created, let's write the code that actually imports our contacts into CRM. This can be done by modifying the existing ContactInput_ProcessInputRow method:
public override void ContactInput_ProcessInputRow(ContactInputBuffer Row)
{
    var contact = new Entity("contact");
    contact["firstname"] = Row.FirstName;
    contact["lastname"] = Row.LastName;
    _service.Create(contact);
}
Obviously the code above needs some null-checks, error handling, etc., but in general that's all you have to do to import your contacts into CRM (a slightly more defensive variant is sketched below). If you close the VS instance with the script project, it will be automatically saved and built.
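To illustrate the null-checks mentioned above, here is roughly what a more defensive version of the row-processing method could look like (just a sketch - it assumes the generated ContactInputBuffer exposes the usual FirstName_IsNull / LastName_IsNull indicator properties):
public override void ContactInput_ProcessInputRow(ContactInputBuffer Row)
{
    var contact = new Entity("contact");

    // only map columns that are not NULL in the source row
    // (FirstName_IsNull / LastName_IsNull are the null indicators SSIS typically generates for input columns)
    if (!Row.FirstName_IsNull)
    {
        contact["firstname"] = Row.FirstName;
    }
    if (!Row.LastName_IsNull)
    {
        contact["lastname"] = Row.LastName;
    }

    _service.Create(contact);
}
Error handling (e.g. a try/catch around the Create call that logs failed rows instead of failing the whole package) is left out for brevity.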

You can now hit F5 in the original VS window to perform the actual migration.

Bulk import

In the basic setup described above there is one CRM call for each record passed to the script component. Calling web services over the network can be a very time-consuming operation. The CRM team is aware of that, which is why they introduced the ExecuteMultipleRequest class. It basically allows you to build a set of CRM requests on the client side and send them all at once in a single web service call. In response you receive an instance of the ExecuteMultipleResponse class, which lets you process the result of each individual request.

Let's modify the script code to leverage the power of the ExecuteMultipleRequest class. To do that, override the ContactInput_ProcessInput method. The default implementation can be found in the ComponentWrapper.cs file and is as simple as this:

public virtual void ContactInput_ProcessInput(ContactInputBuffer Buffer)
{
    while (Buffer.NextRow())
    {
        ContactInput_ProcessInputRow(Buffer);
    }
}
As you can see, by default it calls the ContactInput_ProcessInputRow method (the one we implemented in the previous step) for each record from the source. We need to modify it so that it builds a batch of CRM requests and then sends them to CRM in one go:
List<Entity> _contacts = new List<Entity>();

public override void ContactInput_ProcessInput(ContactInputBuffer Buffer)
{
    int index = 0;
    while (Buffer.NextRow())
    {
        _contacts.Add(GetContactFromBuffer(Buffer));
        index++;

        // Let's use buffer size 500. CRM allows up to 1000 requests per single call
        if (index == 500)
        {
            ImportBatch();
            index = 0;
        }
    }
    ImportBatch();
}

private void ImportBatch()
{
    if (_contacts.Count > 0)
    {
        // Create and configure multiple requests operation
        var multipleRequest = new ExecuteMultipleRequest()
        {
            Settings = new ExecuteMultipleSettings()
            {
                ContinueOnError = true, // Continue, if processing of a single request fails
                ReturnResponses = true // Return responses so you can get processing results
            },
            Requests = new OrganizationRequestCollection()
        };

        // Build a CreateRequest for each record
        foreach (var contact in _contacts)
        {
            CreateRequest reqCreate = new CreateRequest();
            reqCreate.Target = contact;
            reqCreate.Parameters.Add("SuppressDuplicateDetection", false); // Enable duplicate detection 
            multipleRequest.Requests.Add(reqCreate);
        }

        ExecuteMultipleResponse multipleResponses = (ExecuteMultipleResponse)_service.Execute(multipleRequest);            

        // TODO: process responses for each record if required e.g. to save record id

        _contacts.Clear();
    }
}

private Entity GetContactFromBuffer(ContactInputBuffer Row)
{
    Entity contact = new Entity("contact");
    contact["firstname"] = Row.FirstName;
    contact["lastname"] = Row.LastName;
    return contact;
}
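The TODO above is where you would inspect the individual results. A minimal sketch of what that could look like (just an illustration - adapt it to whatever you need to do with the results):
// process the response of each request in the batch
foreach (var responseItem in multipleResponses.Responses)
{
    if (responseItem.Fault != null)
    {
        // the request at this index failed; RequestIndex maps back to the _contacts list
        var failedContact = _contacts[responseItem.RequestIndex];
        // log responseItem.Fault.Message together with the contact data
    }
    else
    {
        // for a CreateRequest the response carries the id of the newly created record
        var createResponse = (CreateResponse)responseItem.Response;
        Guid newContactId = createResponse.id;
        // store the id if you need it downstream
    }
}
This snippet would go right after the _service.Execute call, before _contacts.Clear().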

Execution time comparison

As you can see, the code for sending requests in batches is a bit longer (but still quite simple, I believe), so you may be tempted to go with the simpler version. If you don't care much about performance (little data, no time constraints) that might be the way to go for you. However, it's always better to know your options and make a conscious decision. SSIS packages usually process large amounts of data, which often takes a lot of time. If you add an additional step that performs CRM operations via the CRM SDK (i.e. via CRM web services), you can be sure it will significantly affect the execution time.

I've measured the execution time of both methods. Importing 1000 contacts into CRM took:

  • 1-by-1 import - 2 min 22 s
  • Bulk import - 44 s
In my simple scenario the bulk import was roughly 3x faster than the 1-by-1 approach. The more data you send to CRM, the bigger the difference may become.
