Filip's Technical Blog – Filip Czaja

<h1>Object Recognition using Microsoft CNTK</h1>
<p><i>2018-01-17</i></p>
Lately I've been working on some advanced object recognition scenarios and was looking for appropriate tools to do the job. Obviously I thought about using some kind of deep learning solution, but I was afraid of its complexity. I'm not a data scientist and have very limited knowledge of neural networks. I needed a tool that I could use as a black box: feed some data into it and consume the result, without necessarily understanding what's going on under the hood.
<p>I was advised by domain experts to check out the <a href="https://www.microsoft.com/en-us/cognitive-toolkit/" target="_blank">Microsoft Cognitive Toolkit (CNTK)</a>. CNTK's big advantage is its rich documentation and the end-to-end examples available. For my scenario (object recognition) I found the following resources particularly useful:<ul>
<li><a href="https://github.com/Azure/ObjectDetectionUsingCntk">Object Detection using CNTK</a>
<p>This tutorial was my entry point. It gives exact, step-by-step instructions on how to build an object recognition solution, together with sample data sets. It also provides some scientific background for those who want to learn how it all works.</li>
<li><a href="https://github.com/Azure/cortana-intelligence-product-detection-from-images">End-2-end solution</a>
<p>This one builds on the original tutorial from the first link, but takes it further by presenting a complete end-to-end solution, including:<ul>
<li>managing reference pictures</li>
<li>building repository for metadata</li>
<li>training the object recognition model</li>
<li>managing training results</li>
<li>advanced reporting based on Power BI</li>
<li>publishing the CNTK model as a web service, so recognition results can be easily consumed.</li>
</ul>
</li>
<li><a href="https://docs.microsoft.com/en-us/cognitive-toolkit/Object-Detection-using-Faster-R-CNN">Upgrade to Faster R-CNN</a>
<p>The first two tutorials use an algorithm called Fast R-CNN. It works well, but CNTK also ships an improved version called Faster R-CNN, and this tutorial provides scripts that utilize it. Because it is based on the same sample dataset, you can easily compare results from both tutorials. In my experience Faster R-CNN is not actually quicker, but it does provide better recognition rates.</li>
</ul>
After going through these three tutorials you should be able to build an object recognition solution on your own, without any data-science background.

<h1>Force Service Fabric application delete</h1>
<p><i>2016-10-19</i></p>
Usually you can easily delete an application from your Service Fabric cluster using the cluster explorer. However, there are times when this doesn't work; it may happen if one of the services breaks and can't heal itself. <br />
In such a situation you can use the following PowerShell script to force the delete operation:
<br />
<pre class="brush: powershell">Connect-ServiceFabricCluster -ConnectionEndpoint <your-cluster-connection-endpoint>

$nodes = Get-ServiceFabricNode
foreach ($node in $nodes)
{
    $replicas = Get-ServiceFabricDeployedReplica -NodeName $node.NodeName -ApplicationName "fabric:/Your-App"
    foreach ($replica in $replicas)
    {
        Remove-ServiceFabricReplica -ForceRemove -NodeName $node.NodeName -PartitionId $replica.PartitionId -ReplicaOrInstanceId $replica.ReplicaOrInstanceId
    }
}</pre>
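Once the stuck replicas are removed, the application itself can usually be deleted. As a possible follow-up (a sketch, not from the original post; the cmdlets and parameters come from the standard Service Fabric PowerShell module, and "fabric:/Your-App" is the placeholder from the script above):
<pre class="brush: powershell"># Remove the now-unblocked application; -Force skips the confirmation prompt
Remove-ServiceFabricApplication -ApplicationName "fabric:/Your-App" -Force

# Optionally unregister the application type afterwards
# Unregister-ServiceFabricApplicationType -ApplicationTypeName "YourAppType" -ApplicationTypeVersion "1.0.0"</pre>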
<h1>VSO Build vNext – versioning assemblies with MSBuild</h1>
<p><i>2015-09-28</i></p>
In this blog post I will show you how to version assemblies during a Visual Studio Online vNext build.
<h2>Why version assemblies?</h2>
Versioning assemblies helps you detect which library version has been deployed to your environment. You could obviously do it manually for each of your projects, but that is a mundane and error-prone task. Instead, I'll show you how to easily automate the process.
<p>As a result, each project library will be automatically versioned. The version number format will follow the recommended standard:
<big><pre><b>[Major version[.Minor version[.Build Number[.Revision]]]]</b></pre></big>
<h2>How to version assemblies?</h2>
By default the assembly version is defined in the <big><code><b>AssemblyInfo.cs</b></code></big> file, which can be found in the Properties folder.
<pre class="brush: csharp;">[assembly: AssemblyVersion("1.0.0.0")]</pre>
If you have many projects it's much easier to have a single file with all common assembly information, shared by all projects. This way you only need to update the property once and it will be picked up by all projects.
<p>To do that, create a separate file called <big><code><b>CommonAssemblyInfo.cs</b></code></big> and place it in the solution's root folder. Move the AssemblyVersion definition to that file. Then link the file from all projects:<ol>
<li>Right-click the project</li>
<li>Select Add > Existing Item…</li>
<li>Select the created CommonAssemblyInfo.cs</li>
<li>Click the small arrow next to the Add button</li>
<li>Select Add as Link</li>
<li>Drag the linked file to the Properties folder.</li></ol>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi64uiGJMD84gvvLRk0Yikt8CD1bBJ3TELXEvZnTaTmQv26K7zHVt2LarZ46gLFmxzGRCNSZPiZKrAdhGA4G5CSZFqTz9fZs81PJp9HZlq5y3-QK_-5WOHf-fh7Aavugq387JfcnuFl5jY/s1600/alm_link.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi64uiGJMD84gvvLRk0Yikt8CD1bBJ3TELXEvZnTaTmQv26K7zHVt2LarZ46gLFmxzGRCNSZPiZKrAdhGA4G5CSZFqTz9fZs81PJp9HZlq5y3-QK_-5WOHf-fh7Aavugq387JfcnuFl5jY/s400/alm_link.png" /></a></div>
The result should look like this:
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidFUqWLYm-IUZy2AJ62Wa_wkAqxIwEtBULwe8PY7-my2KaZQk2vR6PNGrtlQgBMdTemPlp4y-_Ph4PhTdbJvxQmeFP-zqmw-2qrdMbasqMZBcccCdZcuJAoWi_MENft3IUhVCYCbRBswI/s1600/alm_linked.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidFUqWLYm-IUZy2AJ62Wa_wkAqxIwEtBULwe8PY7-my2KaZQk2vR6PNGrtlQgBMdTemPlp4y-_Ph4PhTdbJvxQmeFP-zqmw-2qrdMbasqMZBcccCdZcuJAoWi_MENft3IUhVCYCbRBswI/s400/alm_linked.png" /></a></div>
Obviously you can move more common properties to the CommonAssemblyInfo file, e.g. AssemblyCompany, AssemblyCopyright, etc.
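For illustration, a minimal CommonAssemblyInfo.cs could look like this (the attribute values are placeholders, not taken from the original post):
<pre class="brush: csharp">using System.Reflection;

// Shared by every project that links this file
[assembly: AssemblyCompany("Your Company")]
[assembly: AssemblyCopyright("Copyright (c) Your Company")]

// These two attributes are the ones the automated build rewrites on every run
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]</pre>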
<h2>Automate versioning</h2>
Now that we store the assembly version in a single file we can easily automate the versioning process. The idea is that before executing the actual build we run another MSBuild script that updates the version in the common file. The script uses the AssemblyInfo task from the <a href="https://github.com/loresoft/msbuildtasks" target="_blank">MSBuild Community Tasks</a> library.
<p>The script takes 2 parameters: BuildId and Revision:
<pre class="brush: xml"><?xml version="1.0" encoding="utf-8" ?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- path to MSBuild community tasks -->
    <MSBuildCommunityTasksPath>.</MSBuildCommunityTasksPath>
  </PropertyGroup>
  <Import Project=".\tasks\MSBuild.Community.Tasks.Targets"/>
  <Target Name="UpdateAssemblyInfo">
    <Message Text="Updating Assembly versions with Build Id $(BuildId) and Revision $(Revision)"></Message>
    <!-- update assembly and file versions in the C# CommonAssemblyInfo -->
    <AssemblyInfo OutputFile="..\CommonAssemblyInfo.cs"
                  CodeLanguage="CS"
                  AssemblyVersion="1.0.$(BuildId).$(Revision.Replace('C',''))"
                  AssemblyFileVersion="1.0.$(BuildId).$(Revision.Replace('C',''))" >
    </AssemblyInfo>
  </Target>
</Project></pre>
The script must be added to your source control together with the Community Tasks files, so you can reference it in your VSO build definition.
<h2>VSO Build definition</h2>
Now that we have all the components it's time to define the VSO build. The first step is the execution of our custom build script:
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8ElwaEUfo6UnEE0n5BhiExbt9vOuspTdjwUb81ORPuRIVI_dPGHzfoQxza3NzuoIGyM9DLPz1m9bgM1rgOJXGeUyTRfmtaqBKRHD-h1AF_bSmdrdXr63FEp-mK2Igq6ahqXVVjtQkaNs/s1600/alm_build.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8ElwaEUfo6UnEE0n5BhiExbt9vOuspTdjwUb81ORPuRIVI_dPGHzfoQxza3NzuoIGyM9DLPz1m9bgM1rgOJXGeUyTRfmtaqBKRHD-h1AF_bSmdrdXr63FEp-mK2Igq6ahqXVVjtQkaNs/s640/alm_build.png" /></a></div>
As you can see, we use 2 environment variables here:<ul>
<li><b>$(Build.BuildId)</b> – id of the build</li>
<li><b>$(Build.SourceVersion)</b> – the latest version control changeset included in this build. If you are using TFS version control it will have the format "C1234", so you need to remove the "C" prefix (see the build script above)</li></ul>
Then we can use the regular MSBuild step to build the solution. All assemblies that link the <big><code><b>CommonAssemblyInfo.cs</b></code></big> file should have the correct version number set.
Now you can add more steps to the build definition: running unit tests, publishing artifacts, etc.
<h2>Alternative approach</h2>
You can also achieve the same functionality using PowerShell instead of MSBuild. There is a good example <a href="https://msdn.microsoft.com/Library/vs/alm/Build/scripts/index">here</a>. Which one you choose is a matter of personal preference – I prefer my solution, as it requires less code.

<h1>SharePoint 2013 sticky footer</h1>
<p><i>2014-11-04</i></p>
Adding a footer to a SharePoint masterpage can be a bit tricky, since SharePoint automatically recalculates the height and scrolling properties of some default div containers on page load. Today I will show you how to add a so-called "sticky footer" to a SharePoint masterpage using JavaScript. The sticky footer is always displayed at the bottom of the page, even if there is little content. We will base our work on the SharePoint 2013 "Seattle" masterpage.
<h2>Masterpage structure changes</h2>
First we need to add a footer container (div) that will hold the footer content to our masterpage. We add it at the end of the default <strong>"s4-workspace"</strong> div, right after the <strong>"s4-bodyContainer"</strong> div:
<pre class="brush: html; highlight: [5]"><div id="s4-workspace" class="ms-core-overlay">
<div id="s4-bodyContainer">
(...)
</div>
<div id="footer">Your footer content goes here</div>
</div></pre>
Now you need to populate your footer with content and set its CSS properties, e.g. its height.
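The footer styling itself is up to you; as an illustration (assumed values, not from the original post), a fixed-height footer matching the -50 offset used by the resize script could be styled like this:
<pre class="brush: css">/* example only: the height must match the offset passed to the resize
   function (a 50px footer means an offset of -50) */
#footer {
    height: 50px;
    line-height: 50px;
    text-align: center;
}</pre>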
<h2>Javascript code</h2>
Now that we have our footer container, let's position it with some JavaScript code:
<pre class="brush: js">// generic function for resizing elements within their containers
function adjustContainerHeightToObject(container, content, offset) {
    var container = $(container);
    var content = $(content, container);
    if (container.height() > content.height()) {
        content.height(container.height() + offset);
    }
}

// specific function for resizing the s4-bodyContainer div
function resizeMainContent() {
    // as offset we pass the negative value of the height of our footer
    adjustContainerHeightToObject('#s4-workspace', '#s4-bodyContainer', -50); // for a footer with 50px height
}

// call the resize function on page load
_spBodyOnLoadFunctionNames.push("resizeMainContent");</pre>
<h2>_spBodyOnLoadFunctionNames.push() vs. $(document).ready()</h2>
Notice that instead of the regular jQuery ready() event we use SharePoint's custom mechanism for calling functions after the page loads. This ensures that all of SharePoint's own resizing code has already executed before our function is called.
<h1>SSIS Error: The requested OLE DB provider Microsoft.Jet.OLEDB.4.0 is not registered. If the 64-bit driver is not installed, run the package in 32-bit mode.</h1>
<p><i>2014-08-13</i></p>
I have an SSIS solution that reads from an Excel file. I recently deployed it to a different server, and when I tried executing it from Visual Studio I got the following error:
<strong><i><p>[Excel Source [2]] Error: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "Excel Connection Manager" failed with error code 0xC0209303. There may be error messages posted before this with more information on why the AcquireConnection method call failed.
<p>[Connection manager "Excel Connection Manager"] Error: The requested OLE DB provider Microsoft.Jet.OLEDB.4.0 is not registered. If the 64-bit driver is not installed, run the package in 32-bit mode. Error code: 0x00000000.
An OLE DB record is available. Source: "Microsoft OLE DB Service Components" Hresult: 0x80040154 Description: "Class not registered".</i></strong>
<h2>Solution</h2>
As the error message suggests, you need to run the package in 32-bit mode. To do that:<ol>
<li>Right-click your SSIS project</li>
<li>Click 'Properties'</li>
<li>Select 'Debugging' node</li>
<li>Set '<strong>Run64BitRuntime</strong>' property to False</li>
<li>Save and re-run your solution</li></ol>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj__7jnR2g3Ll3kNQrOKQcWEWloUKilErTX7zFF_R1VBLTgYxICqvBK-K6pIAGU1WpnCONRUu3NLPbf0Olf_eI5x2r4CSyhxUhd6o7Z3r47sQYaiDzsb212_JujVDpx6kGX85M7cIVdv3g/s1600/ssis32.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj__7jnR2g3Ll3kNQrOKQcWEWloUKilErTX7zFF_R1VBLTgYxICqvBK-K6pIAGU1WpnCONRUu3NLPbf0Olf_eI5x2r4CSyhxUhd6o7Z3r47sQYaiDzsb212_JujVDpx6kGX85M7cIVdv3g/s320/ssis32.png" /></a></div>
<h1>SSIS integration with Dynamics CRM using ExecuteMultipleRequest for bulk operations</h1>
<p><i>2014-04-22</i></p>
There are several tutorials on the Web explaining how to integrate SSIS with Dynamics CRM using the script component. All of them, however, show only the basic setup, where records from a data source are processed one by one when executing CRM commands (e.g. creating CRM records). In this post I will show you how to leverage the ExecuteMultipleRequest class from the CRM SDK to perform bulk operations on records from an SSIS data source.
<h2>Tutorial scenario</h2>
<ol>
<li>First we will create a simple database with a single table that stores user names</li>
<li>Then we will create an SSIS project</li>
<li>Next, we will add our DB table as a data source, so SSIS can read information about users</li>
<li>Then, we will add a script component that creates a contact in CRM for each user from the table</li>
<li>Next, we will modify the script to import the CRM contacts in batches</li>
<li>Finally, we will compare the execution times of both scripts</li>
</ol>
<h2>Basic setup</h2>
<u><b>Database</b></u><br>
Let's create a basic db table with only 2 columns:
<pre class="brush: sql">CREATE TABLE Users (
FirstName VARCHAR(100) NOT NULL,
LastName VARCHAR(100) NOT NULL
)</pre>
Now populate the table with some dummy data; in my case I added 1000 records.
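For example, 1000 dummy rows can be generated with a short T-SQL batch like this (a sketch assuming SQL Server; the generated names are arbitrary):
<pre class="brush: sql">-- Generate 1000 numbered dummy users
;WITH Numbers AS (
    SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
)
INSERT INTO Users (FirstName, LastName)
SELECT 'First' + CAST(n AS VARCHAR(10)), 'Last' + CAST(n AS VARCHAR(10))
FROM Numbers;</pre>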
<br><br><u><b>SSIS project</b></u>
<ol>
<li>Open "SQL Server Data Tools" (based on Visual Studio 2010)</li>
<li>Go to File -> New -> Project...</li>
<li> Select "Integration Services Project", provide project name and click OK</li>
<li>When the project is created add a Data Flow task to your main package:
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYska-Zv0R1N6bfNLjE-mED4Fgft21KL-TB4IonDl7jqls8v3mYUf68RTafhidl8LxTTmsYJoKfGoE7Y_1U_bmH7RaORJKKP4f6EseLw5iU2Nu-M-E_6rK-tc1yxiQYjIkvK_NqvksOSo/s1600/SSISproj.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYska-Zv0R1N6bfNLjE-mED4Fgft21KL-TB4IonDl7jqls8v3mYUf68RTafhidl8LxTTmsYJoKfGoE7Y_1U_bmH7RaORJKKP4f6EseLw5iU2Nu-M-E_6rK-tc1yxiQYjIkvK_NqvksOSo/s320/SSISproj.png" /></a></div>
</li></ol>
<u><b>Data Source</b></u>
<ol>
<li>Double-click your Data Flow task to open it</li>
<li>Double-click "Source Assistant" in the toolbox</li>
<li>On the first screen of the wizard select "SQL Server" as the source type and select "New..."</li>
<li>On the second screen provide your SQL Server name and authentication details and select your database</li>
<li>A new block representing your DB table will be added to your Data Flow. It has an error icon on it, because we haven't selected the table yet. You will also see a new connection manager representing your DB connection:<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimpB5SI5J6wEyj6cqdC8jL5r6AmMjSnyh3VK8X0OOJJNUlARqqcUZY9zWAe-ibS7nmQbP-XbTEHUZrd4LbsSJuQ1Mw8DCsXd-4I86GIAlCLZKFRxZ2yLqHaluHPaTUrGtEwZdIWJJlBNo/s1600/SSISsource.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimpB5SI5J6wEyj6cqdC8jL5r6AmMjSnyh3VK8X0OOJJNUlARqqcUZY9zWAe-ibS7nmQbP-XbTEHUZrd4LbsSJuQ1Mw8DCsXd-4I86GIAlCLZKFRxZ2yLqHaluHPaTUrGtEwZdIWJJlBNo/s320/SSISsource.png" /></a></div>
</li>
<li>Double-click the new block, select the Users table we created from the dropdown, and hit OK. The error icon should disappear</li></ol>
<u><b>Script component</b></u>
<ol>
<li>Drag and drop the Script Component from the toolbox to your Data Flow area</li>
<li>Create a connection (arrow) from your data source to your script:
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLmaS_VNjayPDEdmPjKyE2rpCY1sg4MBz6d4aLztKbUjnWnDbM3ss1EK9NaooO_RzuxyYQL9jEfK2cNoJ2AHO1dBDdzdOxMJ8GqS94oZgRPN_La508bJisf2PtegESNJo5ccMaBw-Kkwo/s1600/SISscript.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLmaS_VNjayPDEdmPjKyE2rpCY1sg4MBz6d4aLztKbUjnWnDbM3ss1EK9NaooO_RzuxyYQL9jEfK2cNoJ2AHO1dBDdzdOxMJ8GqS94oZgRPN_La508bJisf2PtegESNJo5ccMaBw-Kkwo/s320/SISscript.png" /></a></div></li>
<li>Double-click your script component to open it</li>
<li>Go to the "Input Columns" tab and select all columns</li>
<li>Go to the "Inputs and Outputs" tab and rename "Input 0" to "ContactInput"</li>
</ol>
<h2>1-by-1 import</h2>
Now that we have the basic components set up, let's write some code! In this step we will create basic code for importing contacts into CRM. I'm assuming you have basic knowledge of the CRM SDK, so the CRM-specific code will not be explained in detail.
<p>Open the script component created in the previous steps and click "Edit Script...". A new instance of Visual Studio will open with a new, auto-generated script project. By default the main.cs file will be opened - this is the only file you need to modify. However, before modifying the code you need to add references to the following assemblies:<ul><li>Microsoft.Crm.Sdk.Proxy</li>
<li>Microsoft.Xrm.Client</li>
<li>Microsoft.Xrm.Sdk</li>
<li>System.Runtime.Serialization</li></ul>
Now we are ready to write the code. Let's start by creating a connection to your CRM organization. This is done in the existing PreExecute() method like this:<pre class="brush: csharp">OrganizationService _service;

public override void PreExecute()
{
    base.PreExecute();
    var crmConnection = CrmConnection.Parse(@"Url=https://******.crm4.dynamics.com; Username=******.onmicrosoft.com; Password=*********;");
    _service = new OrganizationService(crmConnection);
}</pre>
Now that we have the connection, let's write the code that actually imports our contacts into CRM. This can be done by modifying the existing ContactInput_ProcessInputRow method:
<pre class="brush: csharp">public override void ContactInput_ProcessInputRow(ContactInputBuffer Row)
{
    var contact = new Entity("contact");
    contact["firstname"] = Row.FirstName;
    contact["lastname"] = Row.LastName;
    _service.Create(contact);
}</pre>
Obviously the code above requires some null checks, error handling etc., but in general that's all you need to do to import your contacts into CRM. If you close the VS instance with the script project, the script is automatically saved and built.
<p>You can now hit F5 in the original VS window to perform the actual migration.
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiILh6EEU3O2mZwoq_kg6LIEtiQOqHlbJ_368j5nflT5Hb8yU6ZyvA5hskOyyOVuF0I5-_Rt33VLfDnR1C58MStmYaZlu3KvuhNm-D-cX-ZiUMnUKQzRjEu2AfadqdPWYa6Zr28xchIyA4/s1600/SISresult.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiILh6EEU3O2mZwoq_kg6LIEtiQOqHlbJ_368j5nflT5Hb8yU6ZyvA5hskOyyOVuF0I5-_Rt33VLfDnR1C58MStmYaZlu3KvuhNm-D-cX-ZiUMnUKQzRjEu2AfadqdPWYa6Zr28xchIyA4/s320/SISresult.png" /></a></div>
<h2>Bulk import</h2>
In the basic setup described above there is one CRM call for each record passed to the script component. Calling web services over the network can be a very time-consuming operation. The CRM team is aware of that, which is why <a href="http://msdn.microsoft.com/en-us/library/jj863631.aspx" title="Use ExecuteMultiple to Improve Performance for Bulk Data Load" target="_blank">they introduced the ExecuteMultipleRequest class</a>, which allows you to build a set of CRM requests on the client side and send them all at once in a single web service call. In response you receive an instance of the ExecuteMultipleResponse class, which lets you process the result of each individual request.
<p>Let's modify the script code to leverage the power of the ExecuteMultipleRequest class. To do that, override the ContactInput_ProcessInput method. The default implementation can be found in the ComponentWrapper.cs file and is as simple as this:
<pre class="brush: csharp">public virtual void ContactInput_ProcessInput(ContactInputBuffer Buffer)
{
    while (Buffer.NextRow())
    {
        ContactInput_ProcessInputRow(Buffer);
    }
}</pre>
As you can see, by default it calls the ContactInput_ProcessInputRow method we implemented in the previous step once for each record from the source. We need to modify it so that it collects a batch of CRM requests and then sends them to CRM at once:
<pre class="brush: csharp">List<Entity> _contacts = new List<Entity>();

public override void ContactInput_ProcessInput(ContactInputBuffer Buffer)
{
    int index = 0;
    while (Buffer.NextRow())
    {
        _contacts.Add(GetContactFromBuffer(Buffer));
        index++;
        // Let's use batch size 500. CRM allows up to 1000 requests per single call
        if (index == 500)
        {
            ImportBatch();
            index = 0;
        }
    }
    // import the remaining records
    ImportBatch();
}

private void ImportBatch()
{
    if (_contacts.Count > 0)
    {
        // Create and configure the multiple requests operation
        var multipleRequest = new ExecuteMultipleRequest()
        {
            Settings = new ExecuteMultipleSettings()
            {
                ContinueOnError = true, // Continue, if processing of a single request fails
                ReturnResponses = true  // Return responses so you can get processing results
            },
            Requests = new OrganizationRequestCollection()
        };

        // Build a CreateRequest for each record
        foreach (var contact in _contacts)
        {
            CreateRequest reqCreate = new CreateRequest();
            reqCreate.Target = contact;
            reqCreate.Parameters.Add("SuppressDuplicateDetection", false); // Enable duplicate detection
            multipleRequest.Requests.Add(reqCreate);
        }

        ExecuteMultipleResponse multipleResponses = (ExecuteMultipleResponse)_service.Execute(multipleRequest);
        // TODO: process responses for each record if required e.g. to save record id

        _contacts.Clear();
    }
}

private Entity GetContactFromBuffer(ContactInputBuffer Row)
{
    Entity contact = new Entity("contact");
    contact["firstname"] = Row.FirstName;
    contact["lastname"] = Row.LastName;
    return contact;
}</pre>
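The TODO above can be filled in along these lines (a sketch based on the standard ExecuteMultipleResponse members from the CRM SDK; what you do with failures is up to you):
<pre class="brush: csharp">// Sketch: inspect the result of each request in the batch
foreach (var responseItem in multipleResponses.Responses)
{
    if (responseItem.Fault != null)
    {
        // This request failed: responseItem.RequestIndex points back to the
        // position in the batch and responseItem.Fault.Message explains why
    }
    else
    {
        // For a CreateRequest the response carries the id of the new record
        var createResponse = responseItem.Response as CreateResponse;
        if (createResponse != null)
        {
            Guid newContactId = createResponse.id;
        }
    }
}</pre>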
<h2>Execution time comparison</h2>
As you can see, the code for sending requests in batches is a bit longer (but still quite simple, I believe), so you may be tempted to go with the simpler version. If you don't care much about performance (little data, no time limitations), that might be the way to go. However, it's always better to know your options and make a conscious decision. SSIS packages usually process large amounts of data, which often takes a lot of time. If you add an additional step performing CRM operations via the CRM SDK (i.e. via CRM web services), you may be sure it will significantly affect the execution time.
<p>I've measured the execution time for both methods. Importing 1000 contacts into CRM took:
<ul>
<li><b>1-by-1 import – 2 min 22 s</b></li>
<li><b>Bulk import – 44 s</b></li></ul>
In my simple scenario the bulk import was over 3x faster than the 1-by-1 import. The more data you send to CRM, the bigger the difference may be.

<h1>C#: Retrieve user data from Active Directory</h1>
<p><i>2014-04-17</i></p>
The code snippet below shows how to retrieve user information from Active Directory using the PrincipalSearcher class:
<pre class="brush: csharp">// requires a reference to System.DirectoryServices.AccountManagement
var context = new PrincipalContext(ContextType.Domain, "yourdomain.com");
var user = new UserPrincipal(context);
// search by alias
user.SamAccountName = "useralias";
// You can also search by other properties e.g. Display Name
//user.DisplayName = "John Doe";
// perform the search
var search = new PrincipalSearcher(user);
user = (UserPrincipal)search.FindOne();
search.Dispose();
if (user != null) {
Console.WriteLine(user.DistinguishedName);
} else {
Console.WriteLine("No user found");
}
</pre>
<h2>Searching across multiple domains</h2>
The code above searches for users in the specified domain only. However, you will often want to search across multiple domains. In that case you need to provide the parent domain name together with the Global Catalog port (3268). Let's say you have a hierarchy like this:
<pre class="brush: bash">corp.xxx.com
- domain1.corp.xxx.com
- domain2.corp.xxx.com
- ...</pre>
To search across all children of the corp.xxx.com domain construct your PrincipalContext like this:
<pre class="brush: csharp">var context = new PrincipalContext(
ContextType.Domain,
"corp.xxx.com:3268",
"DC=corp,DC=xxx,DC=com");</pre>

<h1>WPF DataGrid - Custom template for generic columns</h1>
<p><i>2013-12-24</i></p>
Recently I had to bind a WPF DataGrid to a System.Data.DataSet. This is quite straightforward and there are many tutorials on how to achieve it.
<p>By default all table columns are auto-generated using 4 predefined templates (Text, Hyperlink, CheckBox, and ComboBox) that support read-only and edit modes. If you wish to customize the way some columns are rendered, you can also define a custom template and assign it to chosen columns by hooking into the <code>AutoGeneratingColumn</code> event of the DataGrid, as described <a href="http://msdn.microsoft.com/en-us/library/cc903950(v=vs.95).aspx" title="How to: Customize Auto-Generated Columns in the DataGrid Control" target="_blank">here</a>.
<h2>Problem with generic columns</h2>
As you can see, creating custom templates for columns is pretty straightforward as long as the column names are fixed. If your WPF app uses a table that doesn't change dynamically, you are all good. The problem starts when you use your DataGrid to display tables whose column names change, e.g. tables loaded from a file at runtime. This is because you can't hard-code the column name in your custom template.
<h2>Solution 1 - Create template programmatically</h2>
In this solution you build the custom template in code and assign it to the chosen column at runtime, in the <code>AutoGeneratingColumn</code> event handler.
<pre class="brush: csharp">private void DataGrid_AutoGeneratingColumn(object sender, DataGridAutoGeneratingColumnEventArgs e)
{
    // First get the corresponding DataColumn
    var colName = e.PropertyName;
    var table = this.DataContext as DataTable;
    var tableColumn = table.Columns[colName];

    // choose the columns to customize, e.g. by column type
    if (tableColumn.DataType == typeof(string)) // replace with your own condition
    {
        var templateColumn = new DataGridTemplateColumn();
        templateColumn.Header = colName;
        templateColumn.CellTemplate = this.BuildCustomCellTemplate(colName);
        templateColumn.SortMemberPath = colName;
        e.Column = templateColumn;
    }
}

// builds the custom template
private DataTemplate BuildCustomCellTemplate(string columnName)
{
    var template = new DataTemplate();
    var button = new FrameworkElementFactory(typeof(Button));
    template.VisualTree = button;
    var binding = new Binding();
    binding.Path = new PropertyPath(columnName);
    // bind the button's Content to the column value
    button.SetBinding(Button.ContentProperty, binding);
    return template;
}</pre>
The code above would create the following template for selected columns:
<pre class="brush: xml"><DataTemplate>
<Button Content="{Binding Path=<COLUMN_NAME>}" ></Button>
</DataTemplate></pre>
Obviously this is just an example; in real life you would need more than a button that does nothing. In your code you can define full templates, use binding converters, assign commands, etc. However, the code gets pretty complex, so this solution is suitable mainly for simple templates.
<h2>Solution 2 - Create template skeleton</h2>
Alternatively, you can create the template skeleton in XAML and replace all bindings in your event handler: <pre class="brush: xml"><!-- our custom template defined in XAML: -->
<DataTemplate x:Key="customCellTemplate">
<Button Content="{Binding}" ></Button>
</DataTemplate></pre>
And the event handler:<pre class="brush: csharp">private void DataGrid_AutoGeneratingColumn(object sender, DataGridAutoGeneratingColumnEventArgs e)
{
    (...)
    if (tableColumn.DataType == typeof(string)) // replace with your own condition
    {
        // Create the wrapping template in code and populate the bindings accordingly
        string xaml = @"<DataTemplate xmlns=""http://schemas.microsoft.com/winfx/2006/xaml/presentation""><ContentControl Content=""{0}"" ContentTemplate=""{1}"" /></DataTemplate>";
        // Note: XamlReader.Parse takes a string; XamlReader.Load expects a stream or reader
        var template = (DataTemplate)XamlReader.Parse(string.Format(xaml, "{Binding " + colName + "}", "{StaticResource customCellTemplate}"));
        templateColumn.CellTemplate = template;
        (...)
    }
}</pre>The advantage of this approach is that you can create more complex templates in XAML, and in your event handler only populate the required bindings. The limitation is that the custom template needs to be defined at the application level. I found this solution <a href="http://stackoverflow.com/questions/20657755/custom-column-template-for-datagrid/20670485?noredirect=1#20670485" target="_blank">here</a>.
<h1>Claims based authorization in MVC4</h1>
<p><i>2013-12-02</i></p>
Recently I worked on a sample MVC4 application that used claims-based authentication. I used the <a href="http://visualstudiogallery.msdn.microsoft.com/e21bf653-dfe1-4d81-b3d3-795cb104066e" target="_blank">Identity and Access Visual Studio extension</a> to help me configure Windows Identity Foundation (WIF) in my app. In short, the tool updates your web.config by adding the system.identityModel and system.identityModel.services sections to enable WIF. As a result, my application redirects all unauthenticated users to my Identity Provider, which generates a security token that is returned to my app.
<p>Once I had the authentication part done I started working on the authorization. I wanted it to be role-based i.e. very similar to what you use by default in the default MVC model:
<pre class="brush: csharp">[Authorize(Roles = "Administrator")]
public class AdminController : Controller
{
// Controller code here
}
</pre>
In theory, if your Identity Provider issues a token containing the Identity Role claim (http://schemas.microsoft.com/ws/2008/06/identity/claims/role) with the value of the user's current role, the above default authorization code should work. And it actually does! This is because some basic claims from the token are automatically used to populate the user's identity object, including roles. So when your app's authorization code checks the user's role, it will use the values provided in the token (if any).
<h2>Membership database issue</h2>
The above solution worked fine for me at the beginning. What I was not aware of is that, by default, the <code>Authorize</code> attribute also connects to your Membership database, regardless of the token content. By default, MVC uses the local ASPNETDB.mdf file as the membership database. I realized that when I moved the application to a different server without moving the mdf file. Suddenly I started getting the following SQL exception when calling the <code>Authorize</code> attribute:
<blockquote>A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) </blockquote>
I suppose there is an easy way to configure ASP.NET not to connect to the database when roles are provided in the token. However, I decided to take a different approach to have more control over the code.
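<p>For those preferring the configuration route: one candidate I have not verified is disabling the SQL-backed role provider in web.config, so that role checks rely solely on the claims-populated principal:<pre class="brush: xml"><!-- Untested assumption: prevent role checks from hitting the membership database -->
<system.web>
  <roleManager enabled="false" />
</system.web>
</pre>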
<h2>Custom authorization attribute</h2>
I decided to write a custom Authorization attribute, that would search for user's role directly in claims provided in the token:
<pre class="brush: csharp">public class ClaimsAuthorizeAttribute : AuthorizeAttribute
{
private string claimType;
private string claimValue;
public ClaimsAuthorizeAttribute(string type, string value)
{
this.claimType = type;
this.claimValue = value;
}
public override void OnAuthorization(AuthorizationContext filterContext)
{
var identity = (ClaimsIdentity)Thread.CurrentPrincipal.Identity;
var claim = identity.Claims.FirstOrDefault(c => c.Type == claimType && c.Value == claimValue);
if (claim != null)
{
base.OnAuthorization(filterContext);
}
else
{
base.HandleUnauthorizedRequest(filterContext);
}
}
}
</pre>
This approach is more flexible as it allows me to use different types of claims for authorization in future, not only role. The usage of the attribute is still very simple:
<pre class="brush: csharp">[ClaimsAuthorize(ClaimTypes.Role, "Administrator")]
public class AdminController : Controller
{
// Controller code here
}
</pre>
<h2>Additional notes</h2>
When using claims based authorization it is often advised to use the existing ClaimsPrincipalPermission attribute together with a configured ClaimsAuthorizationManager. In my case this seemed like overkill, especially since I wanted to keep the code similar to the default authorization model.Filip Czajahttp://www.blogger.com/profile/12289949072596625867noreply@blogger.com1tag:blogger.com,1999:blog-7041745156199549192.post-4600022641783608722013-09-14T07:58:00.000-07:002013-09-14T08:00:45.683-07:00OMPM - SQL-DMO Install RequiredWhen trying to run the <a href="http://technet.microsoft.com/en-us/library/cc179179.aspx" target="_blank" title="Office Migration Planning Manager for Office 2010 overview">Office Migration Planning Manager for Office 2010</a> on a machine that has SQL Server 2008 or later installed you will often get the following error message:
<p><strong><i>SQL-DMO Install Required: This operation requires the SQL Server ODBC Driver, version 3.80 or later, which comes with SQL Server 2000, SQL Server 2005 and SQL Server Express.</i></strong>
<p>This is caused by a missing dependency - SQL-DMO has been deprecated and is no longer part of SQL Server. When you search the Web for a solution you will be advised to install the "Backward Compatibility Components", which are part of the "Feature Pack for Microsoft SQL Server 2005". So I did. However, this caused another error:
<p><strong><i>Runtime Error!<br>
(...)<br>
R6034<br>
An Application has made an attempt to load the C runtime library incorrectly.<br>
Please contact the application's support team for more information.</i></strong>
<p>Investigating this took me some time. It turned out that when you search for "Feature Pack for Microsoft SQL Server 2005" in the <a href="http://www.microsoft.com/en-us/download/default.aspx" target="_blank" title="Microsoft Download Center Homepage">Microsoft Download Center</a>, the first result you get is actually outdated. There is another download link at the bottom of the search results list that points to the latest version:
<p><strong>Correct version: <a href="http://www.microsoft.com/en-us/download/details.aspx?id=20101" target="_blank" title="Feature Pack for Microsoft SQL Server 2005 SP4 download">Feature Pack for Microsoft SQL Server 2005 SP4</a>.</strong>
<p>Installing this one fixed all issues and allowed me to run the OMPM. I had an x64 version of SQL Server 2008 R2.
Filip Czajahttp://www.blogger.com/profile/12289949072596625867noreply@blogger.com1tag:blogger.com,1999:blog-7041745156199549192.post-37759594958980644752013-05-31T12:01:00.000-07:002013-05-31T14:54:25.090-07:00How to read and write Excel cells with OpenXML and C#Recently I needed to update an Excel spreadsheet and then retrieve some recalculated values. The tricky part was that some cells that I needed to retrieve information from were formula cells (e.g. =A1+5). I needed to update some other cells first and then get the value of recalculated formula.
<p>Unfortunately, this is not directly possible with OpenXML. If you simply update some cells and then retrieve the dependent ones, you will get the original values for those cells, not the recalculated ones. This is because formula cells don't store any values, just... formulas. You can only force recalculation by opening your document in the Excel application.
<p>Knowing this, I implemented a Refresh method that opens the Excel app in the background and then immediately closes it, saving changes. Below I present my sample code.
<p><strong>Prerequisites</strong><br>
In order to compile the following code you will need the <a href="http://www.microsoft.com/en-us/download/details.aspx?id=5124" title="Open XML SDK 2.0 for Microsoft Office" target="_blank">Microsoft.OpenXML SDK 2.0</a> (DocumentFormat.OpenXML NuGet package) and a reference to Microsoft.Office.Interop.Excel (used for opening the Excel app to recalculate formulas).
<p><strong>Solution</strong><br>
Let's start with an interface for my ExcelDocument class:<pre class="brush: csharp">/// <summary>
/// Interface defining Excel Document methods
/// </summary>
public interface IExcelDocument
{
/// <summary>
/// Reads a value of a spreadsheet cell
/// </summary>
/// <param name="sheetName">Name of the spreadsheet</param>
/// <param name="cellCoordinates">Cell coordinates e.g. A1</param>
/// <returns>Value of the specified cell</returns>
CellValue ReadCell(string sheetName, string cellCoordinates);
/// <summary>
/// Updates a value of a spreadsheet cell
/// </summary>
/// <param name="sheetName">Name of the spreadsheet</param>
/// <param name="cellCoordinates">Cell coordinates e.g. A1</param>
/// <param name="cellValue">New cell value</param>
void UpdateCell(string sheetName, string cellCoordinates, object cellValue);
/// <summary>
/// Refreshes the workbook to recalculate all formula cell values
/// </summary>
void Refresh();
}</pre>
Once we have the interface we need its implementation:<pre class="brush: csharp">public class ExcelDocument : IExcelDocument
{
private readonly string _filePath;
public ExcelDocument(string filePath)
{
_filePath = filePath;
}
/// <see cref="IExcelDocument.ReadCell" />
public CellValue ReadCell(string sheetName, string cellCoordinates)
{
using (SpreadsheetDocument excelDoc = SpreadsheetDocument.Open(_filePath, false))
{
Cell cell = GetCell(excelDoc, sheetName, cellCoordinates);
return cell.CellValue;
}
}
/// <see cref="IExcelDocument.UpdateCell" />
public void UpdateCell(string sheetName, string cellCoordinates, object cellValue)
{
using (SpreadsheetDocument excelDoc = SpreadsheetDocument.Open(_filePath, true))
{
// tell Excel to recalculate formulas next time it opens the doc
excelDoc.WorkbookPart.Workbook.CalculationProperties.ForceFullCalculation = true;
excelDoc.WorkbookPart.Workbook.CalculationProperties.FullCalculationOnLoad = true;
WorksheetPart worksheetPart = GetWorksheetPart(excelDoc, sheetName);
Cell cell = GetCell(worksheetPart, cellCoordinates);
cell.CellValue = new CellValue(cellValue.ToString());
worksheetPart.Worksheet.Save();
}
}
/// <summary>Refreshes an Excel document by opening it and closing it in the background via the Excel Application</summary>
/// <see cref="IExcelDocument.Refresh" />
public void Refresh()
{
var excelApp = new Application();
Workbook workbook = excelApp.Workbooks.Open(Path.GetFullPath(_filePath));
workbook.Close(true);
excelApp.Quit();
}
private WorksheetPart GetWorksheetPart(SpreadsheetDocument excelDoc, string sheetName)
{
Sheet sheet = excelDoc.WorkbookPart.Workbook.Descendants<Sheet>().SingleOrDefault(s => s.Name == sheetName);
if (sheet == null)
{
throw new ArgumentException(
String.Format("No sheet named {0} found in spreadsheet {1}", sheetName, _filePath), "sheetName");
}
return (WorksheetPart) excelDoc.WorkbookPart.GetPartById(sheet.Id);
}
private Cell GetCell(SpreadsheetDocument excelDoc, string sheetName, string cellCoordinates)
{
WorksheetPart worksheetPart = GetWorksheetPart(excelDoc, sheetName);
return GetCell(worksheetPart, cellCoordinates);
}
private Cell GetCell(WorksheetPart worksheetPart, string cellCoordinates)
{
// parse the row index by skipping the column letters (handles multi-letter columns like AA10)
int rowIndex = int.Parse(new string(cellCoordinates.SkipWhile(char.IsLetter).ToArray()));
Row row = GetRow(worksheetPart, rowIndex);
Cell cell = row.Elements<Cell>().FirstOrDefault(c => cellCoordinates.Equals(c.CellReference.Value));
if (cell == null)
{
throw new ArgumentException(String.Format("Cell {0} not found in spreadsheet", cellCoordinates));
}
return cell;
}
private Row GetRow(WorksheetPart worksheetPart, int rowIndex)
{
Row row = worksheetPart.Worksheet.GetFirstChild<SheetData>().
Elements<Row>().FirstOrDefault(r => r.RowIndex == rowIndex);
if (row == null)
{
throw new ArgumentException(String.Format("No row with index {0} found in spreadsheet", rowIndex));
}
return row;
}
}</pre>
I hope the code is self-explanatory and doesn't require more comments. You can optimize it for your needs, e.g. when updating/reading multiple cells at once you may want to open the document only once. Currently my code opens and closes it for each read/update request.
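<p>A typical round-trip with the class above could look like this (the file path, sheet name and cell addresses are of course illustrative):<pre class="brush: csharp">IExcelDocument doc = new ExcelDocument(@"C:\Temp\sample.xlsx");
// update an input cell, then let Excel recalculate all formulas
doc.UpdateCell("Sheet1", "A1", 10);
doc.Refresh();
// read the recalculated formula cell (e.g. B1 = A1 + 5)
CellValue result = doc.ReadCell("Sheet1", "B1");
Console.WriteLine(result.Text);
</pre>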
Filip Czajahttp://www.blogger.com/profile/12289949072596625867noreply@blogger.com2tag:blogger.com,1999:blog-7041745156199549192.post-82646567118983688262013-05-22T14:19:00.002-07:002013-05-22T14:25:11.214-07:00Preparation materials for MCSD Web Applications certificationI'm happy to announce that today I've become a Microsoft Certified Solutions Developer (MCSD) in Web Applications area. To achieve that I needed to pass 3 exams. Here is my short summary of each of them together with some useful preparation materials:
<h2>70-480 - Programming in HTML5 with JavaScript and CSS3</h2>
This exam begins your journey to the MCSD certificate. In general it covers exactly what its title says: HTML5 & CSS3. Some jQuery knowledge also comes in handy. If you are a web developer with multiple years of experience this should be a piece of cake for you. In case you need to refresh your memory on some topics, I recommend watching the free video tutorial at the Microsoft Virtual Academy: <ul><li><a href="http://www.microsoftvirtualacademy.com/training-courses/learn-html5-with-javascript-css3-jumpstart-training" target="_blank" title="Developing in HTML5 with JavaScript and CSS3 Jump Start">Developing in HTML5 with JavaScript and CSS3 Jump Start</a></li></ul>
<br>
<h2>70-486 - Developing ASP.NET MVC 4 Web Applications</h2>
This exam tests your knowledge of the ASP.NET MVC4 framework. To be honest I can't really remember whether it includes WebApi questions, but it's worth learning anyway, as it's required for the last exam. Again, if you have worked on several MVC4 projects there is nothing to be afraid of.
<p>Before I took this exam I browsed the following book to make sure I'm not missing anything:<br>
<center><img border="0" src="http://ecx.images-amazon.com/images/I/518irXv0xNL._SL500_AA300_.jpg" /></center>
<br>
<h2>70-487 - Developing Windows Azure and Web Services</h2>
For me this was the hardest exam. It is because it covers a wide range of different topics. All required technologies are somehow related, but at the same time they are independent frameworks:<ul>
<li>Windows Azure</li>
<li>WCF</li>
<li>MVC4 WebApi</li>
<li>Entity Framework</li>
<li>Other Data Access</li>
</ul>
Despite the exam's title, I was under the impression that there were not that many questions related to Windows Azure. A basic overview of Azure features would suffice to answer most of them. There were a lot of questions related to WCF & data access though. Luckily there are excellent study guides available. Here are the 2 that I liked most:<ul>
<li><a href="http://alertcoding.wordpress.com/2013/01/09/microsoft-exam-70-487-study-guide/" target="_blank" title="Microsoft exam 70-487 study guide">Study Guide #1</a>
<br>My personal favourite; it relies heavily on Pluralsight video trainings, which are usually very good.</li>
<li><a href="http://www.bloggedbychris.com/2013/01/09/microsoft-exam-70-487-study-guide/" target="_blank" title="Microsoft exam 70-487 study guide">Study Guide #2</a>
<br>A nice alternative for those of you who don't have access to pluralsight. Most links are referencing free online materials.</li></ul>
If you read/watch all linked materials you will be good to go ;) Good luck future MCSDs!Filip Czajahttp://www.blogger.com/profile/12289949072596625867noreply@blogger.com2tag:blogger.com,1999:blog-7041745156199549192.post-43241255005023682592013-05-15T13:58:00.000-07:002013-05-15T13:58:56.518-07:00XAML ListView - scroll SelectedItem to the middleIn the <a href="http://apps.microsoft.com/windows/pl-PL/app/skm-trojmiasto-rozk-ad/bb7e686f-3322-4b04-bc71-accc58398492" title="SKM Trójmiasto Timetable for Windows 8" target="_blank">Windows Store app that I recently worked on</a> I used several ListViews to present some data. I had 2 requirements for those lists:<ul>
<li>The selected item should be always visible</li>
<li>In addition it should be displayed in the middle of the list when possible (it's not possible for the first element).</li></ul>
This is what I wanted to achieve:<br> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-HJZ54sah0LRdpv8IZSydgm9dceGA5oblnZ7aw37rCaDiF_p55FDagmQVe4GEr3fUJ76qqvqgRZIgbp_FXpXJYCBj8rpTIdlWLeN6SrrgjdPuenhhSj5NENvdNK47tN4iHu43qGLXsAc/s1600/ListView.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-HJZ54sah0LRdpv8IZSydgm9dceGA5oblnZ7aw37rCaDiF_p55FDagmQVe4GEr3fUJ76qqvqgRZIgbp_FXpXJYCBj8rpTIdlWLeN6SrrgjdPuenhhSj5NENvdNK47tN4iHu43qGLXsAc/s320/ListView.png" /></a>
<p>The first requirement is quite easy to fulfill by using the <nobr><code>ListView.ScrollIntoView()</code></nobr> method. However, this doesn't satisfy the second requirement, as you don't have any control over where exactly the item will appear.
<h2>Solution</h2>
Instead of working with the ListView directly I worked with its internal ScrollViewer control. Here is my method for scrolling, with some comments:<pre class="brush: csharp">public void ScrollToSelected(ListView list, Object selected)
{
var scrollViewer = list.GetFirstDescendantOfType<ScrollViewer>();
if (scrollViewer == null) return;
// Calculate the offset to be used for scrolling.
// In my case I use ViewPort height, index of the selected item and a fixed value 3 to adjust the result
double halfList = scrollViewer.ViewportHeight/2;
int itemIndex = list.Items.IndexOf(selected);
double scrollOffset = itemIndex - halfList + 3;
// If offset happens to be bigger than scrollable height use the scrollable height
// Possible for items from the end of the list
if (scrollOffset > scrollViewer.ScrollableHeight)
{
scrollOffset = scrollViewer.ScrollableHeight;
}
// scroll to calculated offset
scrollViewer.ScrollToVerticalOffset(scrollOffset);
}</pre>Filip Czajahttp://www.blogger.com/profile/12289949072596625867noreply@blogger.com0tag:blogger.com,1999:blog-7041745156199549192.post-67893739132441567302013-05-14T07:10:00.000-07:002013-05-14T07:10:10.889-07:00Windows Store - Your app doesn’t meet requirement 4.1Recently I uploaded my first Windows 8 app to the Windows Store. At first, my app didn't pass the certification process because of the following issue (Notes from Testers):
<blockquote class="tr_bq">
The app has declared access to network capabilities and no privacy statement was provided in the Description page. The app has declared access to network capabilities and no privacy statement was provided in the Windows Settings Charm.</blockquote>
<p>Luckily for me the issue is widely described <a href="http://blogs.msdn.com/b/jennifer/archive/2012/11/15/common-windows-store-certification-errors-4-1-your-app-must-comply-with-privacy-requirements.aspx" target="_blank">here</a>. In short, if your app uses the internet you need to define a privacy policy. The privacy policy needs to be linked from the app description page and also be available from the Settings Charm.
<h2>Solution</h2>
This is what you need to do to satisfy this requirement:
<ol>
<li>Create a webpage describing your privacy policy (how you use user data etc.).</li>
<li>Deploy that webpage to any server, so it's available to everybody who wants to read it.</li>
<li>Add a link to your privacy page on the app's description page in the Windows Store Developer Center:
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhctxwBUw027c9JyeSsZWsrXYC3Q3u25Qx7tAHoiN5xtyjVBQx9nZZO7qsYzHbmpinlDWf9icyt5vs8HL6-aMtpf4eQFVh3lGjck74Fuu_r0ArDLoslKVfPcPU-CM4iI1IpsRr2I0AdslE/s1600/privacyPolicy.png" />
</li>
<li>Add a link to the privacy policy page to your Settings Charm.<br>
Here is how I do it (based on <a href="http://jimiz.net/2012/12/privacy-policy-windows-store-apps/#axzz2TCfDVZMz" target="_blank">this post</a>):
<ol>
<li>Open the App.xaml.cs file</li>
<li>Reference the following namespace:<pre class="brush: csharp">using Windows.UI.ApplicationSettings;</pre></li>
<li>Create the following methods for adding a new command called "Privacy Policy" to your Settings pane:
<pre class="brush: csharp">private void AddPrivacyPolicy(SettingsPane sender, SettingsPaneCommandsRequestedEventArgs args)
{
var privacyPolicyCommand =
new SettingsCommand("privacyPolicy", "Privacy Policy", (uiCommand) => ShowPolicyPage());
args.Request.ApplicationCommands.Add(privacyPolicyCommand);
}
private async void ShowPolicyPage()
{
var uri = new Uri("http://YOUR-SERVER/privacy-policy.html");
await Launcher.LaunchUriAsync(uri);
}</pre></li>
<li>In the OnLaunched method register the newly created method:
<pre class="brush: csharp">SettingsPane.GetForCurrentView().CommandsRequested += AddPrivacyPolicy;</pre></li>
</ol>
</li>
<li>Rebuild your package and resubmit it to the Windows Store. This time the issue should be resolved.</li>
</ol>
When adding a Privacy Policy item to the Settings Charm you may consider using the Callisto project and its <a href="https://github.com/timheuer/callisto/wiki/SettingsFlyout" target="_blank" title="Callisto SettingsFlyout">Settings Flyout support</a>. It makes complex customization of the Settings pane easier. In my case, however, it was quicker to add that single command the regular way.
<p>BTW I was really surprised how quick the certification process was - it took only 8h from the moment of submission! It's a huge improvement compared to my experience with WP7 apps. I hope it stays that way. Filip Czajahttp://www.blogger.com/profile/12289949072596625867noreply@blogger.com0tag:blogger.com,1999:blog-7041745156199549192.post-7537150391655450072013-04-25T07:54:00.000-07:002013-05-15T03:36:13.200-07:00XAML - SelectedItem in ComboBox doesn't workI'm currently working on a small Windows Store app. The app uses mainly default controls, with ComboBox being one of them. Recently I had a problem with selecting a default value in my ComboBox.
<p>My XAML code looked like this:<pre class="brush: xml"><ComboBox x:Name="ComboBox"
SelectedItem="{Binding DefaultItem}"
ItemsSource="{Binding Items, Mode=OneTime}"
Style="{StaticResource CustomComboxStyle}"
... >
</ComboBox>
</pre>In the design view this actually showed the correct selected item; however, it didn't work when the app was run.
<h2>Solution</h2>
The solution was actually very trivial. If you take a look at my code you'll see that the <code>SelectedItem</code> property is defined before the <code>ItemsSource</code> property. It turned out that the order matters: <code>SelectedItem</code> should be defined after <code>ItemsSource</code>.
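<p>For illustration, here is the same ComboBox with the two properties swapped, which made the default selection work:<pre class="brush: xml"><!-- ItemsSource must be set before SelectedItem -->
<ComboBox x:Name="ComboBox"
          ItemsSource="{Binding Items, Mode=OneTime}"
          SelectedItem="{Binding DefaultItem}"
          Style="{StaticResource CustomComboxStyle}"
          ... >
</ComboBox>
</pre>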
Filip Czajahttp://www.blogger.com/profile/12289949072596625867noreply@blogger.com0tag:blogger.com,1999:blog-7041745156199549192.post-14945894429438278072012-10-24T14:58:00.000-07:002012-10-24T14:58:01.969-07:00Using task based async WCF client methods in synchronous scenariosIf you are already working with the Visual Studio 2012 you may have noticed that the dialog box for generating Service References has changed a bit. It offers an additional option to generate task based asynchronous client methods (see picture below). In VS2010 you could generate async operations for the service client as well, but in VS2012 you have the additional choice:<ul>
<li><b>Generate asynchronous operations</b><br>This one gives you the exact same result as choosing to generate async operations in VS2010, i.e. for each service method you will have 2 additional client methods: *Async & *Completed. <a href="http://fczaja.blogspot.com/2011/06/asynchronous-calls-to-wcf-service-from.html" title="Asynchronous calls to WCF service from Asp.Net">One of my previous posts</a> partially explains how to use them and provides more links on that topic.</li>
<li><b>Generate task-based operations</b><br>When writing your async code this option allows you to take advantage of the <a href="http://msdn.microsoft.com/en-us/library/dd460717.aspx" title="Task Parallel Library (TPL)" target="_blank">Task Parallel Library (TPL)</a> that was introduced with .Net 4. This post is not meant to be a TPL tutorial, but there are many sites explaining the concept. My personal favourite is <a href="http://www.pluralsight.com/training/Courses/TableOfContents/intro-async-parallel-dotnet4" title="Introduction to Async and Parallel Programming in .NET 4" target="_blank">"Introduction to Async and Parallel Programming in .NET 4"</a> by Dr. Joe Hummel. For now it is enough to say that using tasks (i.e. TPL library) can make handling your async scenarios easier and the async code more readable/maintainable.</li></ul><div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGno-PUIOGHV8DO7tCO_HSWsqsgOFPP8pyIYD-7zm-OU7TvhqHjk3ikjyjLUNj7OdPUKt5eZmuX_i8-xNgAYJ06_KlXiXV6zB2NDNWW3NoVrkLKEkhv39cOzLnEYta68uVW7BZdsAifr8/s1600/vs20102012.png" imageanchor="1" style="margin-left:1em; margin-right:1em; border-style: none;"><img border="0" height="115" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGno-PUIOGHV8DO7tCO_HSWsqsgOFPP8pyIYD-7zm-OU7TvhqHjk3ikjyjLUNj7OdPUKt5eZmuX_i8-xNgAYJ06_KlXiXV6zB2NDNWW3NoVrkLKEkhv39cOzLnEYta68uVW7BZdsAifr8/s400/vs20102012.png" /></a></div>
<p>The interesting thing is that since .NET 4.5 and its async/await feature you can easily benefit from task-based async operations even in fully synchronous scenarios. You may wonder what the advantages of using async techniques & libraries in fully synchronous scenarios are. The answer is: performance! Regular synchronous methods block the current thread until the response arrives. With task-based operations combined with the async/await feature, the thread is released to the pool while waiting for the service response.
<p>Obviously this advantage only applies in some scenarios, e.g. in web applications running on IIS, where requests are handled in parallel by threads from the pool. If you are working on a single-threaded client app you will not benefit from this approach.
<h2>Sample code</h2>
<p>In this post I'll re-use the sample code from my <a href="http://fczaja.blogspot.com/2012/10/mocking-wcf-client-with-moq.html" title="Mocking WCF client with Moq">last post</a>. I'll convert synchronous calls to my sample web service so they use task-based async operations. So, let me remind you the interface of the web service that we will use:<pre class="brush: csharp">[ServiceContract]
public interface IStringService
{
[OperationContract]
string ReverseString(string input);
}</pre>Now, let's update the service client. There is actually not much conversion that needs to be done. First, you need to ensure that the generated code includes task-based async operations (right click on service reference -> "Configure service reference"). Once you have async operations generated, you need to transform the existing code to work with tasks and use async/await feature:<pre class="brush: csharp">public async Task<string> ReverseAsync(string input)
{
return await _client.ReverseStringAsync(input);
}</pre>Comparing to the original, purely synchronous method we have following changes:<ul>
<li><strong>Async</strong> keyword added to the method signature</li>
<li>Method return type changed to Task<string> (i.e. original return type 'wrapped' with Task)</li>
<li>Different service client method used: ReverseStringAsync instead of ReverseString</li>
<li><strong>Await</strong> keyword added before the client method call</li>
<li>"*Async" suffix added to the method name (recommended convention)</li>
</ul>These changes are enough to start taking advantage of the async features of .NET 4.5. We only need to update our tests, and there are only a few changes here as well.
<p>Updated integration test:<pre class="brush: csharp">[TestMethod]
public void TestStringHelper_Reverse()
{
StringHelper sh = new StringHelper();
string result = sh.ReverseAsync("abc").Result;
Assert.AreEqual("cba", result);
}</pre>Notice that I only added a call to the .Result property, as this time the method returns a Task.
<p>And the unit test:<pre class="brush: csharp">[TestMethod]
public void TestStringHelper_Reverse()
{
// Create channel mock
Mock<IStringServiceChannel> channelMock = new Mock<IStringServiceChannel>(MockBehavior.Strict);
// setup the mock to expect the ReverseStringAsync method
channelMock.Setup(c => c.ReverseStringAsync("abc")).Returns(Task.FromResult("cba"));
// create string helper and invoke the ReverseAsync method
StringHelper sh = new StringHelper(channelMock.Object);
string result = sh.ReverseAsync("abc").Result;
Assert.AreEqual("cba", result);
//verify that the method was called on the mock
channelMock.Verify(c => c.ReverseStringAsync("abc"), Times.Once());
}</pre>The main thing to notice here is that we use Task.FromResult to create a task wrapping our sample result when mocking the client method.
<h2>Asp.Net MVC</h2>
I already mentioned that described approach will only be beneficial in some types of apps e.g. webapps running on IIS. A sample Asp.Net MVC4 controller action using our async client could look as follows:<pre class="brush: csharp">public async Task<ActionResult> Index()
{
ViewBag.Message = "Reversed 'abc' string: "
+ await _stringHelper.ReverseAsync("abc");
return View();
}</pre>Again, notice async and await keywords added to the action method, supported in MVC4.<br>A detailed tutorial for using aync/await with MVC4 can be found <a href="http://www.asp.net/mvc/tutorials/mvc-4/using-asynchronous-methods-in-aspnet-mvc-4" title="Using Asynchronous Methods in ASP.NET MVC 4" target="_blank">here</a>.
<p>Sample code for this post on my github:<br>
<a href="https://github.com/filipczaja/BlogSamples/tree/master/MockingWCFClientWithMoqAsync" target="_blank">https://github.com/filipczaja/BlogSamples/tree/master/MockingWCFClientWithMoqAsync</a>
Filip Czajahttp://www.blogger.com/profile/12289949072596625867noreply@blogger.com0tag:blogger.com,1999:blog-7041745156199549192.post-81042257575919795032012-10-15T05:57:00.000-07:002012-10-15T12:26:53.295-07:00Mocking WCF client with MoqPerforming basic web service calls from your code using WCF is relatively easy. All you have to do is add a new service reference to your project, pointing to the service url. WCF will automatically generate a client class for you that you can use to call service methods.
<h2>The web service</h2>
<p>Let's say we have a web service that performs various string transformations, e.g. reversing specified strings (obviously not a real-life scenario, as you wouldn't normally call a web service to do that). The service implements the following interface:<pre class="brush: csharp">[ServiceContract]
public interface IStringService
{
[OperationContract]
string ReverseString(string input);
}</pre> In <a href="https://github.com/filipczaja/BlogSamples/tree/master/MockingWCFClientWithMoq" title="Code samples from my blog on gitHub" target="_blank">the sample code</a> for this post I created a basic WCF implementation of that service. However, the service itself doesn't need to be created using WCF, as long as it's using SOAP.
<h2>The client</h2>
The simplest class that consumes this service could look as follows:
<pre class="brush: csharp">public class StringHelper
{
StringServiceClient _client;
public StringHelper()
{
_client = new StringServiceClient();
}
public string Reverse(string input)
{
return _client.ReverseString(input);
}
}</pre>The <code>StringServiceClient</code> is a class generated automatically when adding a service reference. All you have to do is instantiate it and call the chosen method.
<p>There is one issue with that approach though: you cannot unit test your <code>StringHelper.Reverse</code> method without actually calling the web service (because the classes are tightly coupled). When writing proper unit tests you should mock all the class dependencies, so you can focus on a single unit of code. Otherwise it becomes an integration test.
<p>When using <a href="http://code.google.com/p/moq/" title="Moq homepage" target="_blank">Moq</a> you can only mock interfaces or virtual methods. The generated StringServiceClient doesn't implement any interface that would expose the service contract. Also, methods generated in that class are not virtual.
<p>Luckily, the code generated when adding the service reference contains the Channel interface that we can use. The channel interface extends the service contract interface, so you can invoke all service methods using its implementation. This means we can update the client app to remove the tight coupling:<pre class="brush: csharp">
public class StringHelper
{
IStringServiceChannel _client;
public StringHelper()
{
var factory = new ChannelFactory<IStringServiceChannel>("BasicHttpBinding_IStringService");
_client = factory.CreateChannel();
}
public StringHelper(IStringServiceChannel client)
{
_client = client;
}
public string Reverse(string input)
{
return _client.ReverseString(input);
}
}</pre>
As you can see, instead of working with the generated client we create a channel instance using ChannelFactory and the binding name "BasicHttpBinding_IStringService". The binding name can be found in the app.config file, which is automatically updated with the WCF endpoint configuration when the service reference is added.
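<p>For reference, the relevant app.config fragment might look roughly like this (the address and contract namespace are illustrative and will differ in your project):<pre class="brush: xml"><system.serviceModel>
  <client>
    <!-- illustrative values; generated when adding the service reference -->
    <endpoint name="BasicHttpBinding_IStringService"
              address="http://localhost:8080/StringService.svc"
              binding="basicHttpBinding"
              contract="YourServiceReference.IStringService" />
  </client>
</system.serviceModel>
</pre>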
<h2>Testing</h2>
A simple integration test for our client code:<pre class="brush: csharp">[TestMethod]
public void TestStringHelper_Reverse()
{
StringHelper sh = new StringHelper();
string result = sh.Reverse("abc");
Assert.AreEqual("cba", result);
}</pre>This test would work with both versions of the client presented above.
<p>Now for the actual unit test that mocks the service channel object using Moq:<pre class="brush: csharp">[TestMethod]
public void TestStringHelper_Reverse()
{
// Create channel mock
Mock<IStringServiceChannel> channelMock = new Mock<IStringServiceChannel>(MockBehavior.Strict);
// setup the mock to expect the Reverse method to be called
channelMock.Setup(c => c.ReverseString("abc")).Returns("cba");
// create string helper and invoke the Reverse method
StringHelper sh = new StringHelper(channelMock.Object);
string result = sh.Reverse("abc");
Assert.AreEqual("cba", result);
//verify that the method was called on the mock
channelMock.Verify(c => c.ReverseString("abc"), Times.Once());
}</pre>
<p>Sample code for the service, client & test on GitHub:<br> <a href="https://github.com/filipczaja/BlogSamples/tree/master/MockingWCFClientWithMoq" title="Code samples from my blog on gitHub" target="_blank">https://github.com/filipczaja/BlogSamples/tree/master/MockingWCFClientWithMoq</a>
Filip Czajahttp://www.blogger.com/profile/12289949072596625867noreply@blogger.com3tag:blogger.com,1999:blog-7041745156199549192.post-74278577767562640772012-09-19T11:27:00.000-07:002012-09-19T11:28:51.468-07:00The simplest chat ever... just got simpler<p>In my <a title="The simplest chat ever with SignalR" href="http://fczaja.blogspot.com/2012/09/the-simplest-chat-ever-with-signalr-and.html" target="_blank">last post</a> I described a simple chat implementation using the SignalR framework. At the time of writing it I thought it was really as simple and clean as it could get. However, my work colleague Lukasz Budnik proved me wrong by suggesting a simpler solution they had used while creating the <a title="Hackaton homepage" href="http://hackaton.pl/" target="_blank">hackathon.pl</a> portal. He suggested replacing my server-side components with a cloud-hosted push notifications provider.</p>
<p class="MsoNormal">I admit I was thinking in a more traditional way, where all application components are hosted in-house, on servers owned by our company or our clients. Nowadays, it’s often a better idea to use existing cloud-hosted solutions than to implement some system components ourselves. I definitely recommend considering this option if you have no important blockers preventing you from using SaaS solutions (e.g. legal issues related to sending data to third parties). Advantages that come with the cloud are a topic for another discussion though.</p>
<p class="MsoNormal">Let’s have a look at how our chat application changes with this new approach. First of all, we need to choose a push provider. And yet again - Lukasz to the rescue with <a title="Bespoke applications with a flavour of a cloud" href="http://jee-bpel-soa.blogspot.com/2012/09/bespoke-applications-with-flavour-of.html" target="_blank">his latest post</a>, describing various SaaS solutions. I decided to go with <a title="PubNub homepage" href="http://www.pubnub.com/" target="_blank">PubNub</a>, as its documentation already contains a chat example. So it’s a no-brainer, really.</p>
<p class="MsoNormal">The entire solution is now enclosed within a single html file. As mentioned at the beginning you don’t need to write any server code. The additional benefit is easier development and testing, as you don’t even need a web server, just your regular web browser.</p>
<p class="MsoNormal">The html file looks as follows. Please note that the html components are exactly the same as in my previous solution. In addition, PubNub allows you to bind events to DOM elements, so we don’t need jQuery in this example.</p>
<pre class="brush: html"><label for="nick">Your chat nickname:</label>
<input id="nick" name="nick" type="text" />
<label for="message">Message:</label>
<input id="message" maxlength="100" name="message" type="text" />
<div id="chatWindow"></div>
<div pub-key="MY_PUBLISHER_KEY" sub-key="MY_SUBSCRIBER_KEY" ssl="off" origin="pubsub.pubnub.com" id="pubnub"></div>
<script src="http://cdn.pubnub.com/pubnub-3.1.min.js"></script>
<script type="text/javascript">
(function () {
// declare some vars first
var chatWin = PUBNUB.$('chatWindow'),
nick = PUBNUB.$('nick'),
input = PUBNUB.$('message'),
channel = 'chat';
// subscribe to chat channel and define a method for handling incoming messages
PUBNUB.subscribe({
channel: channel,
callback: function (message) {
chatWin.innerHTML = chatWin.innerHTML + message;
}
});
// submit message to channel when Enter key is pressed
PUBNUB.bind('keyup', input, function (e) {
if ((e.keyCode || e.charCode) === 13) {
PUBNUB.publish({
channel: channel,
message: nick.value + ': ' + input.value
});
}
});
})();
</script></pre>Filip Czajahttp://www.blogger.com/profile/12289949072596625867noreply@blogger.com0tag:blogger.com,1999:blog-7041745156199549192.post-29185344613245536172012-09-13T14:17:00.000-07:002012-09-15T11:38:10.883-07:00The simplest chat ever with SignalR (and Asp.Net MVC)Lately I was investigating some technologies for my current project and at some point I was advised to check out the <a href="http://signalr.net/" title="SignalR homepage" target="_blank">SignalR</a> library. I admit I’d never heard about it before (shame on me). It was mentioned to me in the context of asynchronous web applications, with the main example being a web chat. Hmm, an interesting exercise I thought! Plus it should give me a general understanding of the framework. Let’s do it!
<p>It was only later that I realised the chat implementation is the most common example of SignalR usage that you can find on the Web. One more can't hurt though, so here is my version :)
<h2>Overview</h2>
In short, the app works in the following way: people can visit a single chat URL and join our chat automatically. This means they will see all messages posted to the chat after they join it. They can also post messages that will be seen by other chat members.
<p>I’ve built it as part of a sample MVC4 application, although this could probably be any Asp.Net page, as I’m not using any MVC functionality at all. The two most important parts of this app are the view and the hub. The view displays chat components and handles user actions (i.e. sending messages to the hub). The hub listens for messages and publishes them to all connected clients.
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicAKAGv4EBdLNiasuWu0Fe5v9k12kRAVd1Za0QQV1yXScmBHiHPFeRW1AzBgq02LNc7r9iqjQw7gqXYyARr7zrQH5MnoJGl7e6hTTf3OMqnihK7WAhbJKTcTYO_XhIG7t7mpiRDNqdeYA/s1600/chat_hub.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="200" width="185" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicAKAGv4EBdLNiasuWu0Fe5v9k12kRAVd1Za0QQV1yXScmBHiHPFeRW1AzBgq02LNc7r9iqjQw7gqXYyARr7zrQH5MnoJGl7e6hTTf3OMqnihK7WAhbJKTcTYO_XhIG7t7mpiRDNqdeYA/s200/chat_hub.png" /></a></div>
<h2>The code</h2>
You will need to download the SignalR library for the following code to work. The easiest way to do that is by searching for 'SignalR' in your NuGet package manager.
<p>So I have my Chat controller, which is only used to display the view. As you can see it’s doing nothing else, so the view could probably be a static html page instead:
<pre class="brush: csharp">public class ChatController : Controller
{
public ActionResult Index()
{
return View();
}
}</pre>
<p>It’s a web chat, so we need some input fields for nickname & message. We also need an area for displaying our conversation. All these belong naturally to our view:
<pre class="brush: html; wraplines: true"><label for="nick" >Your chat nickname:</label>
<input type="text" name="nick" id="nick" /><br />
<label for="message" >Message:</label>
<input type="text" name="message" id="message" maxlength="100" /><br />
<div id="chatWindow"></div></pre>
<p>Now that we have basic visual components, let’s make them work. Firstly, we need to reference jQuery & SignalR libraries. I have those defined as bundles in my MVC app, but you can reference all files directly:
<pre class="brush: html">@Scripts.Render("~/bundles/jquery")
@Scripts.Render("~/bundles/SignalR")
<script src="/signalr/hubs" type="text/javascript"></script></pre>
Notice the third reference - we reference a script that doesn’t physically exist. SignalR handles that script request itself, generating in response the JavaScript code that allows us to communicate with the chat hub. The /signalr part of the URI is configurable.
<p>Now it’s time for the javascript code (see comments for explanation):<pre class="brush: js">$(function () {
// get the reference to chat hub, generated by SignalR
var chatHub = $.connection.chat;
// add a method that can be called by the hub to update the chat
chatHub.publishMessage = function (nick, msg) {
var chatWin = $("#chatWindow");
chatWin.html(chatWin.html() + nick + ": " + msg );
};
// start the connection with the hub
$.connection.hub.start();
$(document).keypress(function (e) {
if (e.which == 13) {
// when the 'Enter' key is pressed, send the message to the hub
chatHub.sendMessage($("#nick").val(), $("#message").val());
$("#message").val("");
}
});
});</pre>
The hub code could not be any simpler. We need to implement a method that can be called by a client wishing to send a message. The method broadcasts that message to all connected clients:<pre class="brush: csharp">public class Chat : Hub
{
public void SendMessage(string nick, string message)
{
Clients.PublishMessage(nick, message);
}
}</pre>
In case you haven't noticed yet: we execute a hub server method from our client javascript code (see line 17 of js code) and the other way around i.e. client side js function from our C# hub code (see line 5 of hub code). How cool is that?!?
<p>I must say I'm impressed by how easy it is to use for programmers. If you are wondering how it works under the hood, I recommend reading the SignalR documentation on <a href="https://github.com/SignalR/SignalR" title="SignalR on GitHub" target="_blank">their GitHub page</a>.
And finally, here's the screenshot I've taken testing that chat using 2 different browsers:
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhx9BzGDjO17VX9JLw0IFlIsj-WruvbINzkfFB767hgLxKNFhrzhs_Ypu4ZiVmjmaf4pS9KTOzARZ9G0XcX34189lmEgXCkrtF6uh6ubjKXcknUOJ1zFn2wMpknRMaOYvsoCR_REb5VFk/s1600/chat.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="327" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhx9BzGDjO17VX9JLw0IFlIsj-WruvbINzkfFB767hgLxKNFhrzhs_Ypu4ZiVmjmaf4pS9KTOzARZ9G0XcX34189lmEgXCkrtF6uh6ubjKXcknUOJ1zFn2wMpknRMaOYvsoCR_REb5VFk/s400/chat.png" /></a></div>
You can download my VC2012 solution from here: <a href="http://www.myskymap.com/download/blog/BlogSamples_SignalR_Chat.zip" title="Solution files" target="_blank">DOWNLOAD</a>.
Filip Czajahttp://www.blogger.com/profile/12289949072596625867noreply@blogger.com1tag:blogger.com,1999:blog-7041745156199549192.post-26383780173568490952012-09-10T11:37:00.002-07:002012-09-10T14:19:34.649-07:00AdMob ads in PhoneGap appsAdding ads to your free mobile apps is one of the most common ways of earning money in this business. There are multiple ad engine providers, but Google is still the leader. Google's ad provider for mobile devices is called <a href="http://www.admob.com/" title="AdMob homepage" target="_blank">AdMob</a>. It offers easy integration with Android, iPhone/iPad and WP7.
<p>While integration with native apps is quite straightforward and well documented, things get more complicated if you create your apps using <a href="http://phonegap.com/" title="PhoneGap homepage" target="_blank">PhoneGap</a> and HTML5. In the past AdMob offered a "Smartphone Web" option that could be presented within your HTML code for smartphone devices. Since May 2012 it's no longer available, as Google wants to make a clear division: use AdMob for purely mobile apps and AdSense for web pages.
<p>Since mobile apps created using PhoneGap are actually web pages displayed within the native app and rendered using the default browser engine, it seems that AdSense is the way to go. It doesn't work though, with one of the reasons being that Google wants to crawl the sites it displays AdSense ads on.
<h2>Solution</h2>
So how do we display Google ads in HTML code that belongs to our PhoneGap app?<br>
The answer is: WE DON'T! :)
<p>Instead, we modify our native app container that displays the HTML. Default PhoneGap templates for all systems (Android, WP7, iOS, ...) create a native, full-screen container for displaying your web app. We can shrink that main container and then display ads in the remaining free space using the native AdMob SDK.<div class="separator" style="clear: both; text-align: center;">
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_Jim_fbWqV1PIK9GpEt52s6KL5zFWna0vZ0ytzunyIuwYUROJT-8ogVAt6W1OBr9woZIC3Q6i6GTe8w86cGPYUrq7X2aOlXKEX7gxT_wzUV-jE-PvB1giidZO9MNRyPwXHU5gnRtU9BU/s1600/AdMob.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="229" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_Jim_fbWqV1PIK9GpEt52s6KL5zFWna0vZ0ytzunyIuwYUROJT-8ogVAt6W1OBr9woZIC3Q6i6GTe8w86cGPYUrq7X2aOlXKEX7gxT_wzUV-jE-PvB1giidZO9MNRyPwXHU5gnRtU9BU/s400/AdMob.png" /></a></div></div>
<p>The only limitation of this method is that you cannot place your ad inside your actual HTML application. However, you can still place your ads in the most commonly used spaces, i.e. the header or footer.
<h2>Android example</h2>
The sample code below presents the MainActivity of an Android app that uses PhoneGap. The code adds AdMob banner at the bottom of a mobile app:<pre class="brush: java">import android.os.Bundle;
import org.apache.cordova.*;
import android.widget.LinearLayout;
import com.google.ads.*;
public class MainActivity extends DroidGap {
private static final String MY_AD_UNIT_ID = "YOUR_AD_UNIT_IT";
private AdView adView;
@Override
public void onCreate(Bundle savedInstanceState) {
// Loading your HTML as per PhoneGap tutorial
super.onCreate(savedInstanceState);
super.loadUrl("file:///android_asset/www/index.html");
// Adding AdMob banner
adView = new AdView(this, AdSize.BANNER, MY_AD_UNIT_ID);
LinearLayout layout = super.root;
layout.addView(adView);
adView.loadAd(new AdRequest());
}
}</pre><h2>WP7 example</h2>
The sample code below presents default XAML created by PhoneGap template for WP7, modified to display AdMob banner at the bottom of a mobile app.
<pre class="brush: xml; ruler: true;"><Grid x:Name="LayoutRoot" Background="Transparent" HorizontalAlignment="Stretch">
<Grid.RowDefinitions>
<RowDefinition Height="*"></RowDefinition>
<RowDefinition Height="75"></RowDefinition> <!-- Extra grid row for AdMob -->
</Grid.RowDefinitions>
<!-- Original container displaying your HTML -->
<my:CordovaView Grid.Row="0"
HorizontalAlignment="Stretch"
Margin="0,0,0,0"
x:Name="CordovaView"
VerticalAlignment="Stretch">
</my:CordovaView>
<!-- Our AdMob banner: -->
<google:BannerAd Grid.Row="1"
AdUnitID="YOUR_AD_UNIT_ID"
xmlns:google="clr-namespace:Google.AdMob.Ads.WindowsPhone7.WPF;assembly=Google.AdMob.Ads.WindowsPhone7">
</google:BannerAd>
</Grid></pre>
Filip Czajahttp://www.blogger.com/profile/12289949072596625867noreply@blogger.com8tag:blogger.com,1999:blog-7041745156199549192.post-28076326092040521312012-09-06T05:01:00.001-07:002012-09-06T05:07:49.985-07:00jQuery Mobile apps flicker on page transitionsA flickering screen on page transitions seems to be a common issue in mobile applications created using <a href="http://jquerymobile.com" title="jQuery Mobile homepage" target="_blank">jQuery Mobile</a> (and most likely packed with <a href="http://phonegap.com" title="PhoneGap homepage" target="_blank">PhoneGap</a>). The most common solution you will find on the Web is to add the following piece of CSS to your html:<pre class="brush: css">
.ui-page {
-webkit-backface-visibility: hidden;
}</pre>The problem with that solution is that it breaks select lists on your forms (and apparently other form input fields as well) on Android. This makes your forms unusable.
<p>Some people create an additional workaround for that issue, i.e. they change this style property directly before the page transition occurs and disable it right after it completes. A bit messy, don't you think?
<p>Luckily, there is a much simpler solution. I realised that this flickering is caused by the default transition effects used, i.e. 'fade' for page transitions and 'pop' for dialogs. The simplest fix seems to be disabling transition effects for pages and dialogs altogether. Here is how you can do that with a little JavaScript code:<pre class="brush: js">
$(document).bind("mobileinit", function(){
$.mobile.defaultDialogTransition = "none";
$.mobile.defaultPageTransition = "none";
});</pre>Filip Czajahttp://www.blogger.com/profile/12289949072596625867noreply@blogger.com0tag:blogger.com,1999:blog-7041745156199549192.post-49497637064065677102012-08-20T05:58:00.001-07:002012-08-20T06:02:12.516-07:00What to remember about when adding other languages to your websiteSo you want to go international with your website, huh? In this post I'll try to summarize all the possible changes that you will have to go through. The complexity of this upgrade will obviously depend on the current design, framework you use etc. The sooner you start thinking about it the better, as this will save you some mundune work in future.
<p>Unfortunately, translating your website is not always as easy as creating an alternative language file. I've learned that recently while translating my Polish project <a href="http://www.dzienniklotow.pl" title="Dziennik Lotów homepage">www.DziennikLotow.pl</a> into an English version <a href="http://www.myskymap.com" title="My Sky Map homepage">www.MySkyMap.com</a>.
<p>The checklist below may help you to remember about some important tasks that need to be completed as part of translation process:
<h2>Domain name</h2>
Before you introduce a new language you have to think if you need a new domain as well. You will probably get away with a single domain in case it's one of the common, worldwide recognized domains, like .com, .net etc. In that case you still need to define your strategy for serving translated content for users accessing your website for the first time. There are several options here:
<ul>
<li><strong>Default language</strong><br>
Until visitors choose to change their language, you serve the content using a default one, e.g. English</li>
<li><strong>Browser settings</strong><br>
You can check the default browser language set by the user and serve translated content based on that (if available)</li>
<li><strong>Language specific url</strong><br>
When advertising your website you can use links that have the language defined as part of the url e.g. http://www.yourdomain.com/en/ or using subdomains e.g. http://www.en.yourdomain.com</li>
</ul>Once the user selects the language manually or logs in you have more options, e.g. storing the preferred language in the user session, passing additional URL params, etc.
<p>In the situation when you have a country-specific domain (like .pl) and you would like to expand, it is probably better to register a domain with a common ending (like .com) for the new language version. You can then determine the language to use depending on the domain name. The small disadvantage of this approach is that it may not be possible to change the website language without losing session data, unless you have some CDSSO (Cross-Domain Single Sign On) implemented.
<p>Alternatively, after registering a global domain you can abandon the local one (or make it redirect to the global one) and then use the mechanisms described above.
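<p>The "Browser settings" strategy above boils down to matching the visitor's preferred languages against the ones you support. A minimal sketch in JavaScript (the <code>pickLanguage</code> helper and all names in it are illustrative, not part of any framework):

```javascript
// Sketch of the "browser settings" strategy: pick the first preferred
// language that the site supports, or fall back to a default.
// All names here are illustrative, not part of any framework.
function pickLanguage(acceptLanguage, supported, fallback) {
  // acceptLanguage, e.g. an Accept-Language header: "pl,en-US;q=0.8,en;q=0.5"
  var candidates = (acceptLanguage || '').split(',');
  for (var i = 0; i < candidates.length; i++) {
    // strip quality values and region subtags: "en-US;q=0.8" -> "en"
    var lang = candidates[i].split(';')[0].trim().split('-')[0].toLowerCase();
    if (supported.indexOf(lang) !== -1) {
      return lang;
    }
  }
  return fallback;
}
```

The same matching can of course run on the server side against the Accept-Language request header instead of in the browser.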
<h2>Static texts</h2>
Translating the text content is usually the first part that you think about when starting to add support for a new language. This covers not only text blocks, but also things like navigation menus, alerts, image alternative texts, page meta information (keywords, desc, etc) and many more. Everything that is part of an HTML response and is language-specific should be translated.
<p>Most modern web application frameworks support creating different language versions out-of-the-box (OOTB). However, it is the developer's responsibility to make use of the internationalization (I18N) functionality offered by the engine they use. This has to be thought about from the very beginning, so you don't end up with strings that need translation hard-coded in your application.
<p>The most common way of achieving I18N is to create separate files for each language, that contain all possible static text content that can be displayed by your website. Each file usually contains text messages that can be identified by a uniqe key. For example, this is how simple language files for English & Polish languages could look like: <pre class="brush: bash">ThankYou = Thank you
Goodbye = Goodbye
Please = Please</pre>
<pre class="brush: bash">ThankYou = Dziękuję
Goodbye = Do widzenia
Please = Proszę</pre>
Note that the same keys are used to identify the messages in both files.
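<p>On the client side such a key-based lookup can be sketched in a few lines of JavaScript (the <code>messages</code> object and the <code>t</code> helper below are illustrative only):

```javascript
// Illustrative key-based message lookup with a default-language fallback;
// the structure mirrors the language files shown above.
var messages = {
  en: { ThankYou: 'Thank you', Goodbye: 'Goodbye', Please: 'Please' },
  pl: { ThankYou: 'Dziękuję', Goodbye: 'Do widzenia', Please: 'Proszę' }
};

function t(lang, key) {
  var dict = messages[lang] || messages.en;     // unknown language -> default language
  return dict[key] || messages.en[key] || key;  // missing key -> default text, then the key itself
}
```

Falling back to the default language (and finally to the key itself) means a missing translation degrades gracefully instead of breaking the page.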
<p>As a popular alternative you could store all messages that require translation in a database. See <a href="http://fczaja.blogspot.com/2010/08/multilanguage-database-design.html" title="Multilingual database design approaches">my other post</a> on that. I believe this approach is more complex than language files, and therefore I only use it if it's really required.
<p><strong>Warning: Javascript</strong> - if you reference any static javascript files that include messages that need to be translated, you will have trouble using the default I18N mechanism of most frameworks. One possible solution is to serve those files dynamically, injecting the translated messages before returning the server response with the javascript content. You will find more alternatives on the Web.
<h2>Graphics</h2>
In general, it is not a good practice to present any text content using graphics. That is because graphics take more time to load than regular text and require additional HTTP requests (they can be cached, but still). Good graphic designers remember about that when creating their designs. However, it still may happen that graphics on your website present some text, e.g. a logo, fancy menu etc.
<p>If your website contains any graphics presenting language-specific texts you will need to create alternative versions for each language. You will also need to create a mechanism for displaying them depending on the current language. That part can be easily achieved with the language files used for static text content.
<h2>Data formats</h2>
When creating a localized version of a website content you should also care about data formats used in the country you are preparing content for. Common elements that can be presented using different formats depending on the country are numbers, date & time, money amounts etc.
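<p>As an illustration, in JavaScript the standard <code>Intl</code> API available in modern browsers handles most of these locale-specific formats, so you don't have to hand-roll them:

```javascript
// Locale-aware formatting using the standard ECMAScript Intl API.
function formatAmount(value, locale, currency) {
  return new Intl.NumberFormat(locale, { style: 'currency', currency: currency }).format(value);
}

function formatDate(date, locale) {
  return new Intl.DateTimeFormat(locale, { year: 'numeric', month: '2-digit', day: '2-digit' }).format(date);
}

formatAmount(1234.5, 'en-US', 'USD'); // e.g. "$1,234.50"
formatAmount(1234.5, 'de-DE', 'EUR'); // e.g. "1.234,50 €"
```

On the server side, most frameworks expose equivalent culture-aware formatting, so the same rule applies: never concatenate numbers and dates into strings by hand.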
<h2>Updates</h2>
Nowadays most of the popular websites share some updates with their users on a regular basis. Most of them have a "News" or "Blog" section, and so probably does yours. When you add new language support to your website, it is important to choose your strategy. Basically you have 2 main options:<ul><li><strong>Default language only</strong><br>
If you assume that a vast majority of your users either uses the default language or at least understands it you can write all the updates in it e.g. in English</li>
<li><strong>Language specific updates</strong><br>
If you can afford the time required to translate all your updates, it will always be appreciated by your users if you create different language versions for each update. However, don't use automatic translation because this may have the exact opposite result. Again, most of the frameworks used for posting content (e.g. WordPress) support serving multiple language versions.</li></ul>
<h2>Changing language</h2>
When adding support for multiple languages you will obviously need a widget for changing the language. There are plenty of types (with flags, with language names etc). The widget is usually placed in the top right corner of your page. Although some types look 'cooler', you have to remember about accessibility. In most cases simple solutions are the best ones.
<p>Remember that apart from the UI part, the widget will need to work with your language selection mechanism on the server side.
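<p>For example, if the language is part of the URL (the "Language specific url" option described earlier), the widget only needs to rewrite the current path. A sketch assuming a hypothetical /en/, /pl/ prefix scheme:

```javascript
// Rewrites a path to point at another language version, assuming URLs
// carry the language as the first segment (e.g. /en/about, /pl/about).
// The prefix scheme and function name here are hypothetical.
function switchLanguageUrl(path, newLang, supported) {
  var parts = path.split('/');              // "/en/about" -> ['', 'en', 'about']
  if (supported.indexOf(parts[1]) !== -1) {
    parts[1] = newLang;                     // replace the existing language segment
    return parts.join('/');
  }
  return '/' + newLang + path;              // no language segment yet: prepend one
}
```

The widget then just links to <code>switchLanguageUrl(currentPath, chosenLang, supported)</code>, and the server picks the language from the first path segment.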
<h2>Other stuff...</h2>
The aspects of i18n described above are the most basic ones and apply in most cases. Apart from those, there are also challenges more specific to your website. In my case these were:<ul>
<li><strong>Rewriting Urls</strong><br>Since I have a different domain for the new language I needed to add URL rewriting rules to the .htaccess file, so the website works correctly for the new domain.</li>
<li><strong>Emails</strong><br>I had to translate existing email templates.</li>
<li><strong>Facebook Connect</strong><br>When configuring Facebook Connect you provide an app Id. That app is specific to a single domain, so I needed to create a separate Facebook app for my new domain.</li>
<li><strong>Facebook Fanpage</strong><br>The same situation as described in the "Updates" section, i.e. do you want to have 2 separate fan pages or a single one with content in the default language?</li>
<li><strong>Analytics</strong><br>I've set up a new Google Analytics site to separate stats coming from different domains.</li></ul>
As you can see the process of adding support for a new language may require much more work than it seems at the beginning. I admit that I still have some things to do, as I implemented only the most important changes.Filip Czajahttp://www.blogger.com/profile/12289949072596625867noreply@blogger.com0tag:blogger.com,1999:blog-7041745156199549192.post-22599921831700192182012-08-20T05:33:00.000-07:002012-08-20T05:33:22.635-07:00SCRUM or SCRUMish?Last week a bunch of us attended an external SCRUM training here in Gdansk. My main reason for participating was to systematize my Scrum knowledge. I already knew some basics but I don’t have any practical experience with a real Scrum project. I’ve never worked on such in Kainos, although some of our projects “borrowed” some Scrum rules.
<p>Except the basics, I was hoping to hear about real life examples and best practices. Also, I was keen to learn about biggest challenges when adopting Scrum.
<p>Our trainer was a certified Scrum expert and practitioner. He claimed he has introduced SCRUM to his current company and they are successfully using it for all of their software development projects ever since. So he seemed to be the right person for the job.
<p>Now to the actual training - it was a bit too theoretical for my liking. It consisted of a lot of SCRUM theory, charts and stats (over 200 slides if I recall correctly). Probably nothing that you wouldn’t find on the web though. In my opinion it lacked some practical exercises, workshops, detailed case studies, etc.
<p>However, if something was unclear our trainer tried to explain it by providing examples from his personal experience as a Scrum Master. Multiple times he started explaining a concept with the words "In my company we are doing it like this...". And this is actually what got me wondering if pure Scrum is even possible.
<p>Although the trainer was obviously a great Scrum enthusiast he admitted multiple times that very often they need to adjust the process, so it is actually not strictly following Scrum guidelines. As the main reason for that he named customer expectations and the market/economy factors. Some examples of this inconsistency would be:<ul>
<li>not using unit tests because their client doesn’t want to pay for them</li>
<li>having some Scrum team members only partially involved in the Sprint e.g. testers, graphic designers etc</li></ul>Some of their adjustments I found justified (graphics involved only part time) and some not (lack of unit tests). However, I have no Scrum experience and only know some theory. If I were responsible for adopting Scrum in one of my projects, how could I know which rules I can safely change? This is obviously a question of common sense, plus I could use the experience of my colleagues, but this still leaves me with some doubts. Is it still Scrum, or only Scrumish? I appreciate the flexibility coming from the "whatever works best" approach, but those rules were invented for a reason and there is a danger of me not seeing some hidden pitfalls.
<p>Is there anybody out there who can say they participated in a pure SCRUM project and followed all the rules? I’m really interested if this is possible.
<p>Or maybe you just treat SCRUM as a loose set of guidelines and you pick only the most valuable ones for you?
<p>PS. The highlight of the training was when one of our Technical Architects commented on the lack of unit testing in the projects led by our trainer with words that can be translated as: “That’s SO LAME!!!”. The comment was repeated multiple times by other attendees on later occasions. We all agreed it should belong to official SCRUM terminology ;)Filip Czajahttp://www.blogger.com/profile/12289949072596625867noreply@blogger.com0tag:blogger.com,1999:blog-7041745156199549192.post-65057068980999740462012-07-03T01:53:00.000-07:002012-07-03T01:53:12.667-07:00Continuous Integration with CRM Dynamics 2011Lately my work colleague Thomas Swann <a href="http://tswann.posterous.com/dynamics-crm-2011-summer-update" title="Dynamics CRM 2011 Summer Update">described a bunch of tools</a> that would make the life of a CRM Dynamics 2011 developer much easier. In this blog post I will show a practical example of how we can use them to automate the build & deployment and enable continuous integration in our Dynamics project.
<p>We will create an MSBuild task that packages a CRM solution source into a single package (using Solution Packager) and then deploys it to the CRM server using the <a href="http://www.microsoft.com/en-us/download/details.aspx?id=24004" title="CRM Dynamics SDK download">CRM SDK</a>. The example will also allow reverting the entire process, i.e. exporting a solution package from CRM and unpacking it.
<h2>CRM access layer</h2>
Let's start by creating a layer for communicating with CRM using SDK. You will need to reference microsoft.xrm.sdk.dll and microsoft.xrm.sdk.proxy.dll assemblies to make the code compile.<pre class="brush: csharp">/// <summary>
/// Class used to establish a connection to CRM
/// </summary>
public class CrmConnection
{
private Uri _organizationUrl;
private ClientCredentials _credentials;
private OrganizationServiceProxy _service;
public CrmConnection(string organizationUrl, string username, string password)
{
_credentials = new ClientCredentials();
_credentials.UserName.UserName = username;
_credentials.UserName.Password = password;
this._organizationUrl = new Uri(organizationUrl);
}
public IOrganizationService Service
{
get
{
if (_service == null)
{
_service = new OrganizationServiceProxy(
_organizationUrl, null, _credentials, null);
_service.ServiceConfiguration.CurrentServiceEndpoint
.Behaviors.Add(new ProxyTypesBehavior());
_service.Authenticate();
}
return _service;
}
}
}
/// <summary>
/// CRM Solution Manager for performing solution operations
/// </summary>
public class SolutionManager
{
IOrganizationService _service;
public SolutionManager(IOrganizationService service)
{
_service = service;
}
/// <summary>
/// Imports a solution to CRM server
/// </summary>
/// <param name="zipPath">Path to solution package</param>
public void ImportSolution(string zipPath)
{
byte[] data = File.ReadAllBytes(zipPath);
ImportSolutionRequest request =
new ImportSolutionRequest() { CustomizationFile = data };
Console.WriteLine("Solution deploy started...");
_service.Execute(request);
Console.WriteLine("Solution deployed");
}
/// <summary>
/// Exports a solution package from CRM and saves it at specified location
/// </summary>
/// <param name="solutionName">Name of the solution to be exported</param>
/// <param name="zipPath">Path to save the exported package at</param>
public void ExportSolution(string solutionName, string zipPath)
{
ExportSolutionRequest request = new ExportSolutionRequest()
{
SolutionName = solutionName,
Managed = false
};
Console.WriteLine("Solution export started...");
ExportSolutionResponse response =
(ExportSolutionResponse)_service.Execute(request);
File.WriteAllBytes(zipPath, response.ExportSolutionFile);
Console.WriteLine("Solution successfully exported");
}
}</pre>This gives us a CRM access layer that we can use anywhere in our code (not only in MSBuild task code). It allows us to import and export solution packages from CRM and save them to disk at a specified location.
<h2>Custom MSBuild tasks</h2>
Now it's time to create custom MSBuild tasks that would utilize the SolutionManager described above. Let's start by introducing a common base for CRM tasks:<pre class="brush: csharp">/// <summary>
/// Base class for CRM tasks, including all details required to connect to CRM
/// </summary>
public abstract class CrmSolutionTask : Microsoft.Build.Utilities.Task
{
[Required]
public string OrganisationUrl { get; set; }
[Required]
public string Username { get; set; }
[Required]
public string Password { get; set; }
[Required]
public string ZipPath { get; set; }
protected SolutionManager SolutionManager
{
get
{
CrmConnection connection =
new CrmConnection(OrganisationUrl, Username, Password);
return new SolutionManager(connection.Service);
}
}
}</pre>All the public properties of that class will be exposed as task parameters and are common to both tasks. Now let's create the import task:<pre class="brush: csharp">public class ImportSolutionTask : CrmSolutionTask
{
public override bool Execute()
{
try
{
this.SolutionManager.ImportSolution(ZipPath);
}
catch (Exception e)
{
Log.LogError("Exception while importing CRM solution: " + e);
return false;
}
return true;
}
}</pre>
The ExportSolutionTask is very similar. Note that it defines an additional public property, which will also be exposed as a task parameter, specific to that task only.
<pre class="brush: csharp">public class ExportSolutionTask : CrmSolutionTask
{
[Required]
public string SolutionName { get; set; }
public override bool Execute()
{
try
{
this.SolutionManager.ExportSolution(SolutionName, ZipPath);
}
catch (Exception e)
{
Log.LogError("Exception while exporting CRM solution: " + e);
return false;
}
return true;
}
}</pre>
<h2>MSBuild script</h2>
Now that our custom build tasks are coded, let's use them in an MSBuild script. The following build script is stored in a CRM.build file and assumes the custom tasks live in a "BuildTasks" project.<pre class="brush: xml"><Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<!-- Make use of custom tasks -->
<UsingTask TaskName="ImportSolutionTask"
AssemblyFile="BuildTasks\bin\$(Configuration)\BuildTasks.dll" />
<UsingTask TaskName="ExportSolutionTask"
AssemblyFile="BuildTasks\bin\$(Configuration)\BuildTasks.dll" />
<PropertyGroup>
<!-- Server to use when deploying solution using msbuild tasks -->
<CRMDeploymentServer>SERVER_NAME</CRMDeploymentServer>
<!-- Server to use when downloading solution from a CRM server -->
<CRMDownloadServer>SERVER_NAME</CRMDownloadServer>
<CrmOrganisationUrl>ORGANIZATION_URL</CrmOrganisationUrl>
<CrmAdministrator>CRM_ADMIN_USERNAME</CrmAdministrator>
<CrmPassword>CRM_ADMIN_PASSWORD</CrmPassword>
<CrmSolutionName>CRM_SOLUTION_NAME</CrmSolutionName>
<CrmSolutionsFolder>$(MSBuildProjectDirectory)\CrmSolutions</CrmSolutionsFolder>
<CrmSolutionZip>$(CrmSolutionsFolder)\$(CrmSolutionName).zip</CrmSolutionZip>
<!-- Tools folder where SolutionPackager is kept -->
<ToolsFolder>$(MSBuildProjectDirectory)\Tools</ToolsFolder>
<SolutionPackager>$(ToolsFolder)\SolutionPackager.exe</SolutionPackager>
<Configuration>Debug</Configuration>
</PropertyGroup>
<!-- Build custom MsBuild tasks project-->
<Target Name="BuildTaskDll">
<ItemGroup>
<BuildTasksProjects Include="BuildTasks\BuildTasks.csproj;" />
</ItemGroup>
<Message Text="Building custom build tasks dll"/>
<MSBuild Projects="@(BuildTasksProjects)" Targets="Clean" />
<MSBuild Projects="@(BuildTasksProjects)" Targets="Rebuild"
Properties="Configuration=$(Configuration)" />
</Target>
<!-- Pack/Extract CRM solution -->
<!-- These targets use the SolutionPackager tool -->
<Target Name="PackSolution">
<Message Text="Packaging solution '$(CrmSolutionName)'"/>
<Exec Command="$(SolutionPackager) /action:Pack /zipfile:$(CrmSolutionZip)
/folder:$(CrmSolutionsFolder)\$(CrmSolutionName)" />
</Target>
<Target Name="ExtractSolution">
<Message Text="Extracting solution '$(CrmSolutionName)'"/>
<Exec Command="$(SolutionPackager) /action:Extract /zipfile:$(CrmSolutionZip)
/folder:$(CrmSolutionsFolder)\$(CrmSolutionName)" />
</Target>
<!-- Import/Export solution zip file -->
<!-- These targets use CRM SDK via our custom tasks -->
<Target Name="ImportSolution" DependsOnTargets="BuildTaskDll">
<Message Text="Importing solution '$(CrmSolutionName)'"/>
<ImportSolution
OrganisationUrl="$(CRMDeploymentServer)$(CrmOrganisationUrl)"
Username="$(CrmAdministrator)" Password="$(CrmPassword)"
ZipPath="$(CrmSolutionZip)" />
</Target>
<Target Name="ExportSolution" DependsOnTargets="BuildTaskDll">
<Message Text="Exporting solution '$(CrmSolutionName)'"/>
<ExportSolution
OrganisationUrl="$(CRMDownloadServer)$(CrmOrganisationUrl)"
Username="$(CrmAdministrator)" Password="$(CrmPassword)"
ZipPath="$(CrmSolutionZip)" SolutionName="$(CrmSolutionName)" />
</Target>
<!-- Download/Deploy solution -->
<Target Name="DownloadSolution">
<CallTarget Targets="ExportSolution; ExtractSolution" />
</Target>
<Target Name="DeploySolution">
<CallTarget Targets="PackSolution; ImportSolution" />
</Target>
</Project></pre>To pack and deploy a solution to CRM run the "DeploySolution" target:<pre class="brush: bash">msbuild CRM.build /t:DeploySolution</pre>To download and extract CRM solution run the "DownloadSolution" target:<pre class="brush: bash">msbuild CRM.build /t:DownloadSolution</pre>Note that you can override all MsBuild properties from the command line when running that script.
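Since every property can be passed with the /p: switch, the same script can target different environments without editing the file. For example (the solution name and credentials below are placeholders, not values from this post):

```shell
msbuild CRM.build /t:DeploySolution /p:CrmSolutionName=MySolution /p:CrmAdministrator=admin /p:CrmPassword=Secret1
```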
<h2>Continuous Integration</h2>
We are coming to the end of this post and you have probably noticed that, despite the title, I haven't mentioned the CI part yet. Well, how you use the described MSBuild tasks in your CI process is really up to you and your project needs. In my current project we store the extracted solution in our code repository. Our CI server is configured to run the "DeploySolution" target after each change to that source, i.e. the code is packed and imported to our CRM test server. This ensures that the CRM test server always uses the latest version of the solution.
<p>Developers who work on the solution on their own CRM instances can use the "DownloadSolution" target to automatically obtain the updated solution package and extract it, so they don't have to do that manually.
<p>Our Import and Export tasks can also be used to automate the process of moving solutions between environments.Filip Czajahttp://www.blogger.com/profile/12289949072596625867noreply@blogger.com2tag:blogger.com,1999:blog-7041745156199549192.post-82184518977650127842012-07-01T10:53:00.000-07:002013-05-22T14:27:01.515-07:00IdP initiated SSO and Identity Federation with OpenAM and SAML - part IVThis is the last part of the tutorial describing how to configure IdP initiated SSO and Identity Federation with OpenAM and SAML. The tutorial consists of 4 parts:
<ol>
<li><a href="http://fczaja.blogspot.com/2012/06/idp-initiated-sso-and-identity.html" title="IdP initiated SSO and Identity Federation with OpenAM and SAML - part I">Basic concepts & use case overview</a></li>
<li><a href="http://fczaja.blogspot.com/2012/06/idp-initiated-sso-and-identity_21.html" title="IdP initiated SSO and Identity Federation with OpenAM and SAML - part II">Sample environment configuration with OpenAM</a></li>
<li><a href="http://fczaja.blogspot.com/2012/06/idp-initiated-sso-and-identity_22.html" title="IdP initiated SSO and Identity Federation with OpenAM and SAML - part III">Using OpenAM SAML services</a></li>
<li><strong>Detailed look at SAML interactions</strong></li>
</ol>
If you don't understand any terms or abbreviations that I use here please read <a href="http://fczaja.blogspot.com/2012/06/idp-initiated-sso-and-identity.html" title="IdP initiated SSO and Identity Federation with OpenAM and SAML - part I">the first part of the tutorial</a> together with the <a href="http://docs.oasis-open.org/security/saml/Post2.0/sstc-saml-tech-overview-2.0-cd-02.pdf" target="_blank" title="Security Assertion Markup Language (SAML) V2.0 Technical Overview">Security Assertion Markup Language (SAML) V2.0 Technical Overview</a>.
<br><br>
<h1>Detailed look at SAML interactions</h1>
At this stage you should have working IdP and SP test environments, both configured using OpenAM. You should also have a sample ProviderDashboard web application that uses SAML functionality exposed by OpenAM. All SAML operations are triggered by hyperlinks placed within that web application that point to specific OpenAM services.
<p>At the end of the previous chapter we described steps to verify whether our Identity Federation and SSO processes work correctly. You have probably noticed some browser redirections or page refreshes when performing the verification tests. However, you would probably like to know what exactly is happening behind the scenes.
<p>In this chapter I will explain how a browser communicates with the IdP (Identity Provider, i.e. ProviderDashboard) and SP (Service Provider, i.e. IssueReporter) during the SSO and identity federation process. Please note that in my examples I'm using the HTTP POST binding for sending assertions. Remember that the Artifact binding is also available in OpenAM. For more details about SAML bindings please refer to the SAML technical overview linked above.
<h2>Identity federation and SSO</h2>
The following sequence diagram presents the communication (happy path) between browser, IdP and SP while performing SSO with initial identity federation, i.e. the user requests SP access from the IdP for the first time (the identities have not been previously linked). All request names begin with the HTTP method used (GET or POST) and response names begin with the HTTP status code returned.
<br><br>
<center><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhE1T7kgZP7m3bWlu9MS0NjqRgNZcF-0to8LGoSnegwlFKWei6writHTYxp7RTcixEYq7-07PA6TMbbg9gvGXyse-gHruhrJUT7Ll-YOEh91-B0NPFI62VeFwrS3rhU32ocsWLJAODuDXc/s1600/blog_identity_federation_and_sso.png" imageanchor="1" style="margin-left:1em; margin-right:1em" title="Identity federation and SSO sequence diagram"><img border="0" height="400" width="370" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhE1T7kgZP7m3bWlu9MS0NjqRgNZcF-0to8LGoSnegwlFKWei6writHTYxp7RTcixEYq7-07PA6TMbbg9gvGXyse-gHruhrJUT7Ll-YOEh91-B0NPFI62VeFwrS3rhU32ocsWLJAODuDXc/s400/blog_identity_federation_and_sso.png" /></a></center>
<p><strong>IdP initial authentication</strong><br>
This part of the flow covers the regular authentication procedure for ProviderDashboard that comes out of the box after we configure an OpenAM agent to protect it. It consists of 4 HTTP requests:
<ol>
<li>An unidentified user tries to access a protected resource (the provider dashboard). The agent doesn’t recognize the user, so it sends a redirect response.</li>
<li>The user is redirected to the login screen.</li>
<li>The user provides their ProviderDashboard credentials (12345) and submits the login form. The IdP OpenAM validates the credentials, creates the authentication cookie and redirects the user to the protected resource.</li>
<li>The browser requests the protected dashboard; the agent recognizes the user and lets them through.</li>
</ol>
<p><strong>SSO initiation</strong><br>
At this stage we have a user authenticated with ProviderDashboard (IdP). Somewhere within the dashboard there is a hyperlink named “Report an issue”, initiating the identity federation and SSO with IssueReporter (SP). The hyperlink points to the idpssoinit endpoint exposed by OpenAM installation used for ProviderDashboard and described in detail in <a href="http://fczaja.blogspot.com/2012/06/idp-initiated-sso-and-identity_22.html#IDPSSOInit" title="Previous chapter: IDPSSOInit service">previous chapter</a>.
<p>When the user clicks the described hyperlink, OpenAM generates the SAML assertion that will be sent to the configured SP. If you would like to see the content of the generated assertion, you can check the OpenAM logs at:<pre><openam_conf_dir>\openam\debug\Federation</pre>Make sure you have set the OpenAM debug level to 'Message'. All the assertion's elements are explained in the SAML technical overview.
<p>When the SAML assertion is created, the idpssoinit endpoint includes it in the HTTP response that is sent back to the browser as the result of clicking the hyperlink. The response contains an HTML form with a small piece of JavaScript that causes the form to be submitted automatically (via an HTTP POST request) when the browser receives the response. A sample response body looks as follows:<pre class="brush: html"><HTML>
<HEAD>
<TITLE>Access rights validated</TITLE>
</HEAD>
<BODY onLoad="document.forms[0].submit()">
<FORM METHOD="POST" ACTION="<SP_Assertion_Receiver>">
<INPUT TYPE="HIDDEN" NAME="SAMLResponse" VALUE="<SAML_response>">
<INPUT TYPE="HIDDEN" NAME="RelayState" VALUE="<final_destination>">
<NOSCRIPT>
<CENTER>
<INPUT TYPE="SUBMIT" VALUE="Submit SAMLResponse data"/>
</CENTER>
</NOSCRIPT>
</FORM>
</BODY>
</HTML></pre>The following table describes response segments that depend on IDP and SP configuration:<br>
<style>
table,th,tr,td {
border: 1px solid gray;
}
</style>
<table style="width: 100%;" cellspacing="0" cellpadding="4">
<thead>
<tr>
<th width="150">Segment name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>SP_Assertion_Receiver</td>
<td>Url of an SP endpoint that receives and processes assertions from IDP (Assertion Consumer Service).</td>
</tr>
<tr>
<td>SAML_response</td>
<td>Base64-encoded SAML response containing the generated SAML assertion (the HTTP POST binding uses plain Base64, without compression).</td>
</tr>
<tr>
<td>final_destination</td>
<td>Final SP destination as specified in hyperlink.</td>
</tr>
</table><br>
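To make the SAMLResponse field concrete: with the HTTP POST binding the XML travels as plain Base64 text, so it can be inspected directly. A minimal sketch in Python (illustrative only; the helper name is mine, not an OpenAM API, and the payload is a toy stand-in for a real SAML response):

```python
import base64

# What the SP conceptually does with the SAMLResponse form field:
# the HTTP POST binding transmits the XML as plain Base64 (no compression).
def decode_post_response(saml_response_field: str) -> str:
    return base64.b64decode(saml_response_field).decode("utf-8")

# Round-trip with a toy payload, the way OpenAM would encode it:
xml = '<samlp:Response ID="r1">...</samlp:Response>'
encoded = base64.b64encode(xml.encode()).decode()
assert decode_post_response(encoded) == xml
```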
<p><strong>Identity federation and SSO</strong><br>
Once Assertion Consumer Service exposed by SP receives an assertion it should establish the federation between identities and perform SSO. The default way of doing this when using OpenAM as the SP consists of following steps:
<ol>
<li>Validate the received assertion, i.e. check the digital signature and the specified assertion conditions.</li>
<li>Extract the persistent identifier from the assertion and search for an SP identity that has this token associated. In our scenario no user should be found, as the federation has not been established yet.</li>
<li>Remember the persistent identifier and the RelayState parameter value, and redirect the user to the SP login page.</li>
<li>When the user provides valid credentials, save the remembered persistent identifier against that user in the SP data store.</li>
<li>Log that user in using the SP's default authentication mechanism.</li>
<li>Retrieve the remembered RelayState value and redirect the user to that URL.</li>
</ol>
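The lookup-or-federate decision at the heart of these steps is keyed on the persistent identifier. A toy Python model of the SP-side behaviour (all names are mine, invented for illustration; a real OpenAM SP persists the mapping in its configured data store):

```python
# Toy in-memory federation store: persistent identifier -> local SP user.
federations: dict[str, str] = {}

def handle_assertion(persistent_id: str, local_login=None):
    """Return the local SP user for an assertion's persistent identifier.

    First visit: no mapping exists, so the SP asks the user to log in
    locally (simulated here by calling local_login) and records the link.
    Subsequent visits find the mapping and log the user straight in.
    """
    user = federations.get(persistent_id)
    if user is None:
        user = local_login()               # stands in for the SP login page
        federations[persistent_id] = user  # establish the federation
    return user

# First request triggers a local login and links the identities...
assert handle_assertion("nameid-123", local_login=lambda: "alice") == "alice"
# ...every later request maps straight to the federated user (SSO only).
assert handle_assertion("nameid-123") == "alice"
```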
<h2>SSO only</h2>
The following sequence diagram presents communication between browser, IDP and SP while performing the SSO for a user that has the identity federation between IdP and SP already established i.e. it is NOT the first time that user requests SP access from IDP.
<center><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSo7ecTYvTJg_RLStkEn5ks3JfX_aJ_sBzAlqEYwK2ObeM9vtuQqIZtE8ZJ8AHqqVRT5FigMeWXJn8RsQjoWDmhXfUQz-lQwtdTNMSBx_oTrcfO9zMoYuNj74VejahlU324JiMadyOjRM/s1600/blog_sso.png" imageanchor="1" style="margin-left:1em; margin-right:1em" title="SSO sequence diagram"><img border="0" height="369" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSo7ecTYvTJg_RLStkEn5ks3JfX_aJ_sBzAlqEYwK2ObeM9vtuQqIZtE8ZJ8AHqqVRT5FigMeWXJn8RsQjoWDmhXfUQz-lQwtdTNMSBx_oTrcfO9zMoYuNj74VejahlU324JiMadyOjRM/s400/blog_sso.png" /></a></center>
<strong>SSO initiation</strong><br>
From the IdP perspective the scenario is almost the same as in the previous case. After the user has authenticated and clicked the hyperlink, the IdP generates an assertion and posts it to the SP. The only difference is that this time the IdP doesn’t need to generate the persistent identifier, because it was generated previously.
<p>The assertion will have the same format as the one sent in the previous scenario. The content of the assertion will also be the same, except for attribute values that depend on the current date and time, e.g. the assertion expiration condition.
<p><strong>Assertion processing</strong><br>
In this scenario the Assertion Consumer Service exposed by the SP should ensure that the federation between the identities has been previously established and then perform SSO. The default way of doing this when using OpenAM as the SP consists of the following steps:
<ol>
<li>Validate the received assertion i.e. check digital signature and specified assertion conditions.</li>
<li>Extract the persistent identifier from the assertion and search for an SP identity that has this token associated. In this use case exactly one result is returned, i.e. the user that has been previously linked to the IdP user.</li>
<li>Log in the found user using the SP's default authentication mechanism.</li>
<li>Retrieve the RelayState value (a URL) from the request and redirect the user to that URL.</li>
</ol>
<h2>IdP initiated logout</h2>
The following sequence diagram presents communication required to perform user logout on IdP and SP.
<center><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhimnHDtAUQYv9f1_MffHIXIXDvZzkQuoN-wjkdq_sbq3r9eUodKaDaJJJu27ulxv6ucUkRyhhKkmWCuUMUcC431pVoZ4DF0iWb0-itNNKU9u42esFpiDSHEHypAPGDSNSGqNOcdDvkmcM/s1600/blog_logout.png" imageanchor="1" style="margin-left:1em; margin-right:1em" title="Single Log Out sequence diagram"><img border="0" height="281" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhimnHDtAUQYv9f1_MffHIXIXDvZzkQuoN-wjkdq_sbq3r9eUodKaDaJJJu27ulxv6ucUkRyhhKkmWCuUMUcC431pVoZ4DF0iWb0-itNNKU9u42esFpiDSHEHypAPGDSNSGqNOcdDvkmcM/s400/blog_logout.png" /></a></center>
<strong>SLO initiation</strong><br>
This use case assumes that identity federation has been previously established between IdP and SP and the user is authenticated with ProviderDashboard (IDP) and IssueReporter (via SSO). On ProviderDashboard site there is a logout hyperlink pointing to <a href="http://fczaja.blogspot.com/2012/06/idp-initiated-sso-and-identity_22.html#IDPSloInit" title="Previous chapter: IDPSloInit service">IDPSloInit service</a> that initiates Single Log Out process.
<p>When the user clicks that logout hyperlink, OpenAM generates a SAML logout request that is then sent to IssueReporter via an HTTP redirect. The HTTP response returned for the request generated by clicking the logout link looks as follows:<pre class="brush: bash">HTTP/1.1 302 Moved Temporarily
Location: http://<SP_logout_service>?SAMLRequest=<logout_request>&RelayState=<final_destination>
Content-Type: text/html
(...)</pre>
The following table describes response segments that depend on IDP and SP configuration:
<table style="width: 100%;" cellspacing="0" cellpadding="4">
<thead>
<tr>
<th width="150">Segment name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>SP_logout_service</td>
<td>Url of an SP endpoint that receives and processes logout requests.</td>
</tr>
<tr>
<td>logout_request</td>
<td>Deflated, Base64-encoded and URL-encoded SAML logout request (the HTTP Redirect binding compresses the message with DEFLATE before Base64 encoding).</td>
</tr>
<tr>
<td>final_destination</td>
<td>Final SP destination as specified in hyperlink.</td>
</tr>
</table><br>
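The SAMLRequest query parameter above is produced by the HTTP Redirect binding, which DEFLATE-compresses the XML, Base64-encodes it and then URL-encodes the result. A round-trip sketch in Python (helper names are mine, for illustration; this is not OpenAM code):

```python
import base64
import urllib.parse
import zlib

def encode_redirect_param(xml: str) -> str:
    """DEFLATE (raw, no zlib header) + Base64 + URL-encode, as the
    HTTP Redirect binding requires."""
    compressor = zlib.compressobj(9, zlib.DEFLATED, -15)  # -15 = raw deflate
    deflated = compressor.compress(xml.encode()) + compressor.flush()
    return urllib.parse.quote_plus(base64.b64encode(deflated).decode())

def decode_redirect_param(param: str) -> str:
    """Reverse the encoding to inspect the logout request in a URL."""
    data = base64.b64decode(urllib.parse.unquote_plus(param))
    return zlib.decompress(data, -15).decode()

request = '<samlp:LogoutRequest ID="lr1">...</samlp:LogoutRequest>'
assert decode_redirect_param(encode_redirect_param(request)) == request
```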
<p><strong>SP logout</strong><br>
When the browser receives that response it will redirect the user to the SP logout service and pass the logout request along. The default request processing on the SP side looks as follows:
<ol>
<li>Validate the SAML request, i.e. check the issuer</li>
<li>Extract the persistent identifier from the logout request and search for an SP identity that has this token associated. In this use case exactly one result is returned</li>
<li>Ensure that the found user is currently logged in using the SP authentication mechanism, e.g. by checking the cookies attached to the request by the browser</li>
<li>Log out the user using the SP authentication mechanism, e.g. delete the session and destroy the authentication cookies</li>
<li>Generate a logout confirmation response</li>
</ol>
<p><strong>Logout confirmation</strong><br>
When the logout confirmation is generated, the SP logout service sends it back to the IdP, again using an HTTP redirect:<pre class="brush: bash">HTTP/1.1 302 Moved Temporarily
Location: http://<openam_deployment_url>/IDPSloRedirect/metaAlias/idp?SAMLResponse=<logout_response>&RelayState=<final_destination>
Content-Type: text/html
(...)
#Possible headers destroying auth cookies for SP</pre>
The following table describes response segments that depend on IDP and SP configuration:
<table style="width: 100%;" cellspacing="0" cellpadding="4">
<thead>
<tr>
<th width="150">Segment name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>openam_deployment_url</td>
<td>Url of OpenAM deployment used in ProviderDashboard</td>
</tr>
<tr>
<td>logout_response</td>
<td>Base64 encoded and zip compressed SAML logout response.</td>
</tr>
<tr>
<td>final_destination</td>
<td>Final SP destination as specified in hyperlink.</td>
</tr>
</table><br>
Once the IdP OpenAM receives the logout response generated by the SP, it performs the following steps:
<ol>
<li>Check that the request ID included in the response is correct</li>
<li>Ensure the status code is “Success”</li>
<li>Log out the currently logged-in IdP user</li>
<li>Extract RelayState value from response and redirect user to final destination</li>
</ol>
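These four checks can be sketched as a small function (a toy model with invented field and variable names, not OpenAM internals):

```python
SUCCESS = "urn:oasis:names:tc:SAML:2.0:status:Success"

def process_logout_response(response: dict, pending_requests: set,
                            idp_sessions: dict) -> str:
    """Match the request ID, verify the status, drop the local IdP
    session, and return the RelayState URL to redirect to."""
    if response["in_response_to"] not in pending_requests:
        raise ValueError("response does not match any pending logout request")
    if response["status"] != SUCCESS:
        raise ValueError("SP reported logout failure")
    idp_sessions.pop(response["session_id"], None)  # log out the IdP user
    return response["relay_state"]                  # final redirect target

sessions = {"sess-1": "alice"}
target = process_logout_response(
    {"in_response_to": "req-42", "status": SUCCESS,
     "session_id": "sess-1", "relay_state": "http://idp/home"},
    pending_requests={"req-42"}, idp_sessions=sessions)
assert target == "http://idp/home" and sessions == {}
```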
<h2>IdP initiated federation termination</h2>
The following sequence diagram presents communication between browser, IDP and SP required to terminate the identity federation established previously.
<center><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiizkIWwF794U51EDlFk5kaxIftEhnqnyUc38xnaki3d7acpX88zoO_akm5aeuOyu1vkODXjaUhYu94LRGdFmU9OqBhFBk1aCQ0gyt8fiewJouJO09c_Wg7iqKZHlDjfQ9ErdUc_B7tD84/s1600/blog_break.png" imageanchor="1" style="margin-left:1em; margin-right:1em" title="Identity Federation termination sequence diagram"><img border="0" height="282" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiizkIWwF794U51EDlFk5kaxIftEhnqnyUc38xnaki3d7acpX88zoO_akm5aeuOyu1vkODXjaUhYu94LRGdFmU9OqBhFBk1aCQ0gyt8fiewJouJO09c_Wg7iqKZHlDjfQ9ErdUc_B7tD84/s400/blog_break.png" /></a></center>
This use case assumes that identity federation has been previously established between the IdP and SP and the user is authenticated with ProviderDashboard (IdP). The communication between browser, IdP & SP is exactly the same as in the SLO use case described above, with the main difference being the service used, i.e. <a href="http://fczaja.blogspot.com/2012/06/idp-initiated-sso-and-identity_22.html#IDPMniInit" title="Previous chapter: IDPMniInit service">IDPMniInit</a>.
<p><strong>SP identity federation termination</strong><br>
On the SP side, termination consists of the following steps:
<ol>
<li>Validate the request, i.e. check the issuer</li>
<li>Extract the persistent identifier from the request and search for an SP identity that has this token associated. In this use case exactly one result is returned</li>
<li>Delete the association between that user and the persistent identifier</li>
<li>Generate confirmation response</li>
</ol>
<p><strong>IdP identity federation termination</strong><br>
Once the IdP receives the termination confirmation generated by the SP, it performs the following steps:
<ol>
<li>Check that the request ID included in the response is correct</li>
<li>Ensure the status code is “Success”</li>
<li>Retrieve the request data and extract the persistent identifier used</li>
<li>Terminate association between IdP user and that identifier</li>
<li>Extract RelayState value from response and redirect user to final destination</li>
</ol>
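The net effect on both sides is simply deleting the persistent-identifier mapping. A toy sketch (invented names and in-memory stores, for illustration only):

```python
# Toy federation stores: persistent identifier -> local account on each side.
sp_federations = {"nameid-123": "alice_sp"}
idp_federations = {"nameid-123": "alice_idp"}

def terminate_federation(persistent_id: str) -> None:
    # SP side: drop the association, then send the confirmation response.
    sp_federations.pop(persistent_id, None)
    # IdP side: on a successful confirmation, drop its own association.
    idp_federations.pop(persistent_id, None)

terminate_federation("nameid-123")
# The accounts still exist on both sides; only the link between them is gone.
assert not sp_federations and not idp_federations
```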
<h2>Troubleshooting</h2>
All use cases described in this chapter rely on the HTTP protocol and communication between the user's browser and the SAML services. If you run into trouble while configuring SAML with OpenAM, or are simply interested in what the SAML messages generated by OpenAM look like, I strongly recommend starting with <a href="http://developers.sun.com/identity/reference/techart/troubleshooting.html" title="Troubleshooting OpenSSO with Firefox Add-Ons">this tutorial</a>. It describes developer tools that can be used to trace browser communication, extract messages etc.
<h2>OpenAM Support & consultancy</h2>
I did my best to make this OpenAM tutorial as complete as possible, so you can configure SSO and Identity Federation by yourself. Due to a very limited amount of free time I'm not able to respond to questions in comments or emails. However, if you still need additional explanation or your scenario differs from what I described, you can hire me for consultancy (<a href="http://www.myskymap.com/contact" title="Contact form for consultancy">contact me</a>).
<p><strong>Previous chapter:</strong> <a href="http://fczaja.blogspot.com/2012/06/idp-initiated-sso-and-identity_22.html" title="IdP initiated SSO and Identity Federation with OpenAM and SAML - part III">Using OpenAM SAML services</a>Filip Czajahttp://www.blogger.com/profile/12289949072596625867noreply@blogger.com9