No Intelligent Life

Updated: 2017-12-26T11:50:05.728+00:00


Deploy Word document generation template to new Dynamics 2016 environment.


Microsoft are treating the new document generation facility in Dynamics 2016 as the best thing since sliced bread. It has some genuine limitations at the moment, which I've seen posted about elsewhere.
But I've come across an issue with custom entities. I managed to create a template all right, but then tried to install it in another environment. I wasn't trying anything fancy, just a manual install of the template. But when I added the template it bound it to the wrong entity. It doesn't offer you the option to choose the entity; it just goes ahead and binds it to the wrong thing.
It is due to the object type code, sometimes called the entity type code. That code is fixed for all the standard entities, but custom entities are given a number in the 10,000 range. What I've discovered is that this number is not the same across environments. It's not that I've been stupid and manually created the entity; I deployed a solution as you normally do, but the new environment issues a different type code. You can see it easily if you can access the full URL for an entity record (you know, when you want to find the unique GUID): the etc parameter is the entity type code.
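
As an illustration (my own sketch, not from the original post), pulling the etc parameter out of a record URL is simple enough to script; the URL below is a made-up example of the kind you get from Copy a Link:

```python
from urllib.parse import urlparse, parse_qs

def entity_type_code(record_url):
    """Return the etc (entity type code) query parameter from a CRM record URL."""
    query = parse_qs(urlparse(record_url).query)
    return int(query["etc"][0])

# Hypothetical record URL - org name and GUID are invented
url = ("https://myorg.crm.dynamics.com/main.aspx"
       "?etc=10013&id=%7B1D50BCF0-2519-E611-80DA-5065F38B46E1%7D&pagetype=entityrecord")
print(entity_type_code(url))  # 10013
```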
This wouldn't be a problem except that when you create a Word document template, the entity type code is added to the end of the namespace. You can see it in Word in the XML Mapping pane, at the end of the urn. That then causes the template to bind to the wrong entity when deployed into the new environment.
If you spent hours creating the template, you don't want to have to repeat the work in each environment.
To solve it, rename the template's extension from docx to zip. In the extracted archive you will find a customXml folder, and in there a file that contains the urn of the source data. Change the etc at the end to match the correct code in the target environment, and do the same for any other related custom entities. Zip it back up, rename it back to docx, and load it into the target environment; it should now work. Maybe I'll find the time to write an app to do this for me, because I don't see Microsoft fixing this anytime soon. Damn, sometimes Microsoft just don't think things through.
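The rename-and-edit steps above can be scripted. Here's a minimal Python sketch (my own illustration, not part of the original post); it assumes the type codes appear in the customXml parts as path segments like /10013, which could over-match in unusual templates:

```python
import zipfile

def retarget_template(src_docx, dst_docx, etc_map):
    """Copy a Word template, rewriting entity type codes inside the
    customXml parts. etc_map maps source-environment codes to
    target-environment codes, e.g. {10013: 10021}."""
    with zipfile.ZipFile(src_docx) as src, \
         zipfile.ZipFile(dst_docx, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            data = src.read(item.filename)
            if item.filename.startswith("customXml/") and item.filename.endswith(".xml"):
                text = data.decode("utf-8")
                for old, new in etc_map.items():
                    # the urn ends with the type code as a path segment
                    text = text.replace("/{0}".format(old), "/{0}".format(new))
                data = text.encode("utf-8")
            dst.writestr(item, data)

# Hypothetical usage: template built where the custom entity's code was 10013,
# but the target environment issued 10021 instead.
# retarget_template("invoice-template.docx", "invoice-template-prod.docx", {10013: 10021})
```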

Retrieving the Message from Service Bus Queue


This is a follow-up to the previous post, where I sent a strongly typed XML message from a CRM plugin to a Service Bus queue. The XML I sent to the queue has the entity name as its root node, followed by the attribute values (GUIDs, option set values, a lookup to Charles Emes, dates) and then the formatted values. You can easily manipulate the XML by changing the way the DataTable is populated. What I'm missing is a namespace, and that is easy enough to add once I load it into an XmlDocument. I'm a BizTalk developer, so right now I'm happy with a message that I can transform into anything I want. The attributes are in alphabetical order, so I can cope with missing attributes easily enough in the XSD by making them Nillable="true" and MinOccurs="0". But I can also deserialize this into a contact object by creating a class from the XSD using XSD.EXE. Note this is absolutely not a CRM Contact object but my own object. Check out my usings, because there is no Microsoft.Xrm.Sdk anywhere to be seen. This receiver is just a console app and I ripped the code from another blogger; I'm just using it to show what you can do with the message once you've got it.
static void Main(string[] args)
{
    string connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
    QueueClient Client = QueueClient.CreateFromConnectionString(connectionString, "anoqueue");
    //Console.WriteLine("\nReceiving message from Queue...");
    BrokeredMessage message = null;
    NamespaceManager namespaceManager = NamespaceManager.Create();
    while (true)
    {
        try
        {
            // receive messages from the queue
            message = Client.Receive(TimeSpan.FromHours(10));
            if (message != null)
            {
                Console.WriteLine(string.Format("Message received: Id = {0}", message.MessageId));
                string s = new StreamReader(message.GetBody<Stream>(), Encoding.ASCII).ReadToEnd();
                // load into an XML Document
                System.Xml.XmlDocument xmldoc = new System.Xml.XmlDocument();
                xmldoc.LoadXml(s);
                // need to add a namespace because it doesn't have one
                xmldoc.DocumentElement.SetAttribute("xmlns", "http://xrm.generic.schemas");
                string mydocpath = @"C:\Projects\Test\ReceiveFromSB\";
                xmldoc.Save(mydocpath + "saved.xml"[...]

Dynamics CRM Online: Posting Messages to the Azure Service Bus


I have to say I've never really liked the way Dynamics CRM integrates with the Azure Service Bus. I've posted on this before here and here. What I dislike is the way it posts the Context to the Service Bus. In order to do anything with the Context I need to use the RemotePluginContext and then extract the entity that I want. That means using the Xrm SDK and being familiar with the way to handle the Context.

But I like a loosely coupled architecture. I would expect a CRM plugin to place a strongly typed XML message on the Service Bus, so that processing the message requires no knowledge of CRM. I'm a BizTalk developer, and for me interfaces are always about messages that comply with XML schemas, because the schema acts as the contract. In a development environment where you have different teams of developers with different skill sets, I believe this separation is vital.

Now you may know that Dynamics CRM only supports ACS authentication for the Service Bus, and you have to create this using PowerShell commands because it is not possible via the portal. See this post. You can find the details of setting up the Service Endpoint elsewhere, but the bottom line is that you are using a Guid that points to the Service Endpoint record. So what about deploying to another environment where you will use a different Service Endpoint? Well, you have to go through the manual process again. What, in a Production environment? Are you kidding me? No, I want to be able to deploy and then just configure the endpoint URL.

The challenge in doing this online is that the plugin runs in Sandbox mode, which restricts what .NET assemblies I can use, System.Security being one of them. That rules out SAS authentication because it relies on System.Security.Cryptography. My solution is a plugin that does two things. It populates a DataTable with the attributes from the entity and produces strongly typed XML. It then uses WebClient to POST the XML as a string to the Service Bus using the REST API.
The authentication uses the ACS token, and the only configuration parameters I need are the Service Bus namespace, the queue name, and the issuer secret. The great thing about the solution is that it is totally generic: it works with any entity, and all you need to do is configure a plugin step. A couple of points to note:
1. I'm using a queue.
2. My plugin is registered as synchronous because I want to maintain ordered delivery.
3. I'm sending to the queue synchronously, because I want to know I successfully sent it.
4. I can choose whether to use the PreImage, PostImage or Target.
5. The plugin calls my EntitySerialize class.
6. I am indebted to other bloggers for much of this code.
The next post shows how you can retrieve the message from the queue.

using System;
using System.Collections.Specialized;
using System.Data;
using System.Linq;
using System.Text;
using System.Net;
using Microsoft.Xrm.Sdk;

public class EntityHelper
{
    // ************** Use bits from the Service Bus namespace below ******************
    static string ServiceNamespace = "yournamespace";
    static string ServiceHttpAddress = "";
    const string acsHostName = "";
    const string sbHostName = "";
    const string issuerName = "owner";
    const string issuerSecret = "your_issuer_secret_goes_here";

    public static void EntitySerialize(ITracingService tracingService, IOrganizationService service, Entity entity, string orgName, string msgName, string correlation)
    {
        [...]
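
For reference, the shape of the two HTTP calls the plugin makes (the ACS token request, then the POST to the queue) can be sketched like this. This is my own illustration of the legacy WRAP/ACS flow, not the author's C# code, and the namespace, queue and secret values are placeholders:

```python
from urllib.parse import urlencode

def acs_token_request(namespace, issuer_name, issuer_secret):
    """Build the URL and form body for the legacy WRAP v0.9 token request
    against an ACS-secured Service Bus namespace."""
    url = "https://{0}-sb.accesscontrol.windows.net/WRAPv0.9/".format(namespace)
    body = urlencode({
        "wrap_name": issuer_name,
        "wrap_password": issuer_secret,
        "wrap_scope": "http://{0}.servicebus.windows.net/".format(namespace),
    })
    return url, body

def queue_post_request(namespace, queue, wrap_token):
    """Build the URL and headers for POSTing one message to a queue via
    the Service Bus REST API, using the token returned by ACS."""
    url = "https://{0}.servicebus.windows.net/{1}/messages".format(namespace, queue)
    headers = {
        "Authorization": 'WRAP access_token="{0}"'.format(wrap_token),
        "Content-Type": "text/xml",
    }
    return url, headers

url, body = acs_token_request("yournamespace", "owner", "your_issuer_secret_goes_here")
post_url, headers = queue_post_request("yournamespace", "anoqueue", "token-from-acs")
```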

Sharing the Host WiFi with a Hyper-V image


I use Hyper-V for all of my development work.  Up until now I've been providing internet access to my Hyper-V images by setting up a Virtual Switch in Hyper-V, and configuring it as an Internal network.  When my host is connected to the internet, I use Internet Connection Sharing to this Virtual Switch.  I then configure the Hyper-V images to use a legacy adapter and select this Virtual Switch. Here is how my WiFi adapter is shared.

This procedure is well documented in other posts, but it has a snag: Internet Connection Sharing insists on using IP addresses in the 192.168.0.* range, so your WiFi router must use an alternative IP address range, e.g. 192.168.1.*.

I recently upgraded my Virgin broadband and the new router was using the IP range 192.168.0.*. You could supposedly change it, but I could not get this to work properly, so I started to hunt around for another solution that would leave the IP range unchanged. I stumbled on the wonderful network bridge.

The solution is to go into Virtual Switch settings and create a new Virtual Switch, but select External network. I selected my WiFi network card from the list and enabled "Allow management operating system to share this network adapter".

When you click OK it takes a minute, but what it's doing is creating a Network Bridge in your Network Connections, along with the Hyper-V Virtual Ethernet adapter you are probably familiar with. My virtual machines use the same approach: a legacy adapter that is configured for this external Virtual Switch.

Now when I look at my Network Connections it's a bit odd. My WiFi adapter displays "Enabled, Bridged" even when I am not connected to the Internet. When I make the connection in the usual way, it is the Hyper-V Virtual adapter that goes through the whole "Identifying..." stage until it connects. It looks counter-intuitive, but it works a treat. So that's it: my Hyper-V images are connected to the Internet and I didn't have to change the IP address of the router.

Dynamics CRM 2016 and Use Legacy form rendering


So you may know that Microsoft has improved form load by doing some extensive caching.  There is a setting under General Settings called "Use Legacy form rendering" which you can set to Yes or No.

Now you might wonder why I am writing a post about this setting. Well, it was because of a JavaScript error I was getting on Form Load. I had my own JavaScript code on the Form Load, but the error shown was confusing:
Error: Unable to get property '$o_3' of undefined or null reference

Huh? A bit of Googling showed that others had come across it too.

The post included this comment: "Note that this error only occurs when you turn on the 'Use Legacy form rendering'." Sure enough, changing this setting to No avoids the error.

But what if you want to preserve the faster form rendering? Patric provided the solution in his blog.
Add the following line before calling the prefilter function.

Xrm.Page.getControl("EntityName")._control && Xrm.Page.getControl("EntityName")._control.tryCompleteOnDemandInitialization && Xrm.Page.getControl("EntityName")._control.tryCompleteOnDemandInitialization();


Dynamics CRM 2016 and Legacy Entities


This is not the first time I've run into a problem with so called "Legacy Entities". I am referring to the entities that have been left out in the cold since CRM 2013. These include
  • address
  • opportunity product
  • quote product
  • order product
  • invoice product

There are some things that these entities don't support, including:
  • Access in the Tablet App
  • Updating attributes with workflows
  • Business Rules (though oddly Opportunity Product does support Business Rules)
  • Old style forms

I have also come across problems when upgrading to CRM 2016. This problem occurred on the Opportunity Product form, which had been customised in CRM 2015. I had removed a number of fields I didn't need and it was working fine.
After the upgrade to 2016, though, I was getting this JavaScript error on form load:

Error: Unable to get property 'addOnChange' of undefined or null reference

Now that was not from any JavaScript code that I had, but from a Microsoft library called OpportunityProduct_main_system_library.

When I looked into the code I could see that it was referring to attributes that were not on the form, such as:
Manual Discount Amount

When I added these back onto the form and hid them, the error went away.

Moral of the story: Don't delete attributes from legacy forms - just hide them. 

Dependent (Linked) OptionSets in Dynamics CRM 2015


I recently used the sample code provided in the SDK that allows you to have multiple OptionSets linked together, so that selecting an item in the first OptionSet filters the items appearing in the second. There was an error in the instructions for configuring the OnLoad event on the form. It says to use "sample_TicketDependentOptionSetConfig.xml" as a parameter. That's wrong: you need to use "sample_TicketDependentOptionSetConfig".

I modified the SDK.DependentOptionSet.init function in the JavaScript file slightly to support multiple languages. The first few lines now read:

// Retrieve the XML Web Resource specified by the parameter passed
var clientURL = Xrm.Page.context.getClientUrl();
var userLcid = Xrm.Page.context.getUserLcid();
var pathToWR = clientURL + "/WebResources/" + webResourceName + "_" + userLcid;

The data files were then appended with the relevant LCID for each language, e.g. "sample_TicketDependentOptionSetConfig_1033.xml". A very cool and easy to use JavaScript library. Thanks Microsoft.

ILMerge Command Line Syntax


I had cause the other day to use ILMerge to combine several DLLs. The first issue I came across was:

Unresolved assembly reference not allowed: System.Core.

This is mentioned in several blogs, and the solution suggested was to use the /lib switch and specify the path to the .NET library you want for the target. In my case I wanted to target 4.5.2, so I used the following, which supposedly worked (so the blog said):

ilmerge  /lib:"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5.2"   /out:MERGED.First.dll  First.dll Second.dll  /keyfile:key.snk

Note that if you strong-named your first DLL, don't assume the output will be strongly signed, because it won't be. You have to add the /keyfile option to do that.

Note also, there are blogs that say don't try to point to C:\Windows... because that is not the correct location. You need to use the Reference Assemblies path given above.

Well, I didn't get the error. ILMerge started off and I waited. And waited. And waited. So basically ILMerge just hangs, consuming 50% CPU by the way, just to fool you into thinking it's actually doing something.

It's the wrong syntax. The correct syntax uses /targetplatform, not /lib:

ilmerge  /targetplatform:v4,"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5.2"   /out:MERGED.First.dll  First.dll Second.dll /keyfile:key.snk

Creating a SharePoint OnLine Folder using CSOM


Surprisingly I can't find a complete solution to this on the blogosphere. So let me put that right.

First add two references:

Microsoft.SharePoint.Client.dll
Microsoft.SharePoint.Client.RunTime.dll

I found them on my 64-bit server located here:

C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\ISAPI

In your code add a using statement:

using Microsoft.SharePoint.Client;

Then in your method add the following:

using (ClientContext ctx = new ClientContext(site))
{
    var securePassword = new SecureString();
    foreach (char c in password)
    {
        securePassword.AppendChar(c);
    }
    ctx.Credentials = new Microsoft.SharePoint.Client.SharePointOnlineCredentials(username, securePassword);
    var list = ctx.Web.Lists.GetByTitle(doclib);
    var folderRoot = list.RootFolder;
    ctx.Load(folderRoot);
    ctx.ExecuteQuery();
    folderRoot = folderRoot.Folders.Add(folder);
    ctx.ExecuteQuery();
}

Serialize a CRM Entity to XML in Sandbox mode Plugin


I have been struggling with this problem for a while. Previous posts have outlined it: how to get changes in CRM sent to back office systems as XML messages. When using CRM Online, a big frustration is that you cannot serialize objects to XML, because the WriteObject method of the serializer throws a security exception. While you can post the Context of a plugin to the Azure Service Bus, you have to use the CRM SDK to interpret this and turn it into something useful; my previous post outlines how to do that with an Azure Worker Process. (You could construct the XML message line by line using a StringBuilder, but seriously, who wants to do that?)

I found a simple solution which may solve the problem. It was this post that provided the inspiration: it describes how to turn FetchXML results into a DataTable. FetchXML returns an entity collection, and the code shows how to load that into a DataTable, handling the special CRM data types like EntityReference along the way.

So there you are in a Sandbox plugin. You have the Post-Image entity and you want to convert it into a nicely structured XML file that you can post to the Azure Service Bus. You can use part of the code above (ignoring the FetchXML query) to convert the Post-Image to a DataTable. I put it into a function I called ConvertEntityToDataTable. Now getting XML is easy:

DataSet ds = new DataSet("Invoice");
DataTable dt = new DataTable("Attributes");
ConvertEntityToDataTable(dt, entity);
ds.Tables.Add(dt);
string xml = ds.GetXml();

The result is a flat list of attribute elements; my extract showed address values like London, 12 High Street and Clapham Common. Remember that Post-Images provide their attributes in alphabetical order, which is great for mapping structured XML messages.
The Target entity does not, so you would need to address that. The XML is missing some things I would need to add: (a) the XML declaration at the top specifying the encoding; (b) a namespace to identify the message to an ESB. But I am delighted with the result, because now I can construct proper XML messages in a Sandbox plugin. The two functions required are shown below.

private void ConvertEntityToDataTable(DataTable dataTable, Entity entity)
{
    DataRow row = dataTable.NewRow();
    foreach (var attribute in entity.Attributes)
    {
        if (!dataTable.Columns.Contains(attribute.Key))
        {
            dataTable.Columns.Add(attribute.Key);
        }
        row[attribute.Key] = getAttributeValue(attribute.Value).ToString();
    }
    foreach (var fv in entity.FormattedValues)
    {
        [...]

Dynamics CRM 2015 Online and Azure Service Bus Messages


If you've configured Dynamics CRM 2015 Online with Azure Service Bus, you will know that the "message" it puts on the queue is the plugin execution context. This is the same context that you use within a plugin. You will know that to do anything with the context you have to use Microsoft.Xrm.Sdk and extract the Pre or Post Image and the so-called Target image. "Target" is a stupid name, I always think. It contains the delta: the attributes that were changed during the operation. The Post Image of, say, an Update event will contain all the attributes that have values in it and excludes any attributes with null values.

The reason for using the Azure Service Bus is usually to get data from CRM to your on-premise systems and more often than not you may be using an ESB to read messages from the Azure Service Bus.  BizTalk for example has an adaptor that you just need to configure to read messages from Azure Service Bus.  However any BizTalk developer is going to be very disappointed if you provide them with just a Plugin Execution Context because they will have to use the CRM SDK to convert it into an XML message.  Even then the entity objects when serialized will result in a collection of KeyValuePairs.  The biggest problem with that is trying to map it to a strongly typed schema particularly because the structure varies so much when attributes that are null are set to a value and vice versa.  I've tried using the BizTalk mapper and failed.

UPDATE: CRM Online restricts plugins to Sandbox mode, and you cannot serialize objects to XML because this is prohibited. I have recently stumbled on a way of getting round this. It may provide a much simpler solution than the one described in the rest of this article.

One solution is to have custom code that reads the context, converts it to a strongly typed schema and then puts it back in another queue. This needs to be done serially to ensure that Ordered Delivery is maintained, but at least it gets you to a schema that is worthy of the name.
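A sketch of that conversion step (illustrative Python, not the author's implementation; the entity name and attributes are invented):

```python
import xml.etree.ElementTree as ET

def attributes_to_xml(entity_name, attributes, xmlns="http://xrm.generic.schemas"):
    """Turn CRM-style key/value attributes into a strongly typed XML message.
    Keys are emitted in alphabetical order so the element sequence is stable
    enough to validate against a schema."""
    root = ET.Element(entity_name, {"xmlns": xmlns})
    for key in sorted(attributes):
        ET.SubElement(root, key).text = str(attributes[key])
    return ET.tostring(root, encoding="unicode")

xml = attributes_to_xml("contact", {"lastname": "Emes", "firstname": "Charles"})
print(xml)
```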

Now you have to be careful when using CRM Online because if your plugins consume a lot of CPU they will be terminated with extreme prejudice.  It is best to keep the code within your plugin as minimal as possible and then have the bulk of your code execute on an Azure VM where you don't need to worry about CPU usage because you can size the VM to suit.

The best way to architect this is
a) Have a plugin that sends the Plugin Execution Context to an Azure Service Bus Queue
b) Create an Azure Worker Process that reads messages from the Queue, creates an XML message and puts it into another Azure Service Bus Queue (be sure to maintain the order)
c) If using BizTalk create a Receive Port that will read XML messages from the second Queue

The Azure Worker Process is a lot like a Windows Service (in Azure it's called a Cloud Service) and will automatically restart if it is stopped. When you deploy the Azure Worker Process it will create its own VM. You can supply configuration settings, which in our case would include the endpoint of the queue we are reading from and the endpoint of the queue we are writing to.

While your messages should be placed on the first queue using a plugin running synchronously, the Azure Worker Process runs asynchronously. If you want to post messages back to CRM because of a failure, then you need to set up another asynchronous process to send them back to CRM.

Dynamics CRM 2015 Online and Azure Service Bus


In the previous post I talked about sending messages to another system via a message queue. This is the design pattern Microsoft recommends when you want to use Dynamics CRM Online to update systems that are on-premise. They recommend using Azure Service Bus, and the integration is built into the Online version by creating a Service Endpoint. This is also available for the on-premise version.

The plugin registration tool for Dynamics CRM offers the ability to register a Service Endpoint when you are connected to CRM Online. What is not clear, though, is that there are two ways you can configure it. First, there are two gotchas when setting up the Azure Service Bus.

GOTCHA #1: You set up the Azure Service Bus namespace using the Portal. Wrong. Doing it that way you no longer have the option to use ACS authentication, and that is what the Plugin Registration tool uses. Delete it. Download the PowerShell Azure commands add-in and run this command:

New-AzureSBNamespace -Name yourservice -Location "West Europe" -CreateACSNamespace $true -NamespaceType Messaging

The response includes the connection details; you need to use the DefaultKey when the Plugin Registration tool prompts you for the Management Key!

GOTCHA #2: You create the Queue (or whatever) using the Portal or PowerShell. Wrong. You need to leave this for the Plugin Registration tool to do.

I won't give the rest of the details for configuring the endpoint because that is covered in other blogs. Once you have the Service Endpoint registered there are two ways forward. The first, and seemingly the most attractive, option is that you can register steps and images right there under the endpoint, just as you would do with a plugin. The advantage is that this is a zero code solution: just by configuring an entity with the appropriate step and image you can get messages in your queue (or whatever). The thing is, though, this method only supports asynchronous operations.
That may be fine if you have a very simple CRM solution and want to configure only one or two entities. In more real world scenarios this is not going to work for you, because it won't guarantee ordered delivery. That is what I covered in my previous post: to maintain Ordered Delivery you must use synchronous plugin steps.

The second route is to create an Azure-aware plugin. There is sample code for doing this in the SDK and out there in the blogosphere. In this case you just create the Service Endpoint and copy the Id that it creates. Create your Azure-aware plugin and paste the Id into the Unsecure Configuration section. Register your plugin steps and images as usual. The plugin uses an instance of the IServiceEndpointNotificationService and essentially posts the context (using the Execute method) to the Service Bus endpoint. The point here, though, is that you have full choice over how to register your steps, so if you need Ordered Delivery you can choose Synchronous.

Personally, I find the whole method of configuring a Service Endpoint sucks. What about when I want to deploy this to other environments? I am going to have to repeat the manual steps for each environment, and when I deploy my Azure-aware plugin I am going to have to amend the Id each time. Now you might argue this is a one-off process and it's no big deal. But I prefer my deployments not to involve manual steps, so I'm inclined to post messages to the Azure Service Bus using code and have the connection string stored in a configuration entity along with other environment settings. Remember, though, that you have to use the REST API to post messages because the plugin runs in Sandbox mode.

Dynamics CRM, Plugins, Ordered Delivery and Queues


This post applies to both Dynamics CRM Online and On-Premise. The scenario is where you need to keep another system synchronized with changes to Dynamics CRM entities. To make this loosely coupled you can write messages to a queue: if your target system is unavailable, the queue can store messages until it comes back online. The same design pattern is recommended for Dynamics CRM Online, where messages are written to an Azure Service Bus queue. You then have a process on-premise (it may be an ESB) that reads messages from the queue and sends them on to the target system. This post stems from work I did on a previous project where we used CRM on-premise to write messages to MSMQ.

If you are reading this far I assume you already know about Ordered Delivery, but here is the bottom line: if you want to maintain Ordered Delivery you must use synchronous plugins. If you use a plugin registered as asynchronous, it may appear to give you Ordered Delivery 4 out of 5 times, but you cannot guarantee it for all messages. You can spend the time proving it for yourself, or read this explanation.

We had a custom entity for address that meant you could create an address that was the primary address, the regulatory address, or both. The business rule was that you could only have one active address for primary and regulatory. To achieve this we created a plugin that fires on Create of an address as a Pre-Operation: if you set both the primary and regulatory flags on the new address to true, it checks whether any existing addresses are primary or regulatory, sets those flags to false and then deactivates the address(es). Now the target system has to obey the same logic, so we need to send messages to it in the correct order, i.e. with ordered delivery.

So I created a generic plugin that would write a message out to MSMQ. I registered it to run as a Post-Operation on Create and Update of an Address, and set it to run asynchronously.
In one test case we have two existing addresses, one set as primary and the other set as regulatory (let's call them 'Primary' and 'Regulatory'). We create a new address ('New') and set it to both primary and regulatory. That creates 5 messages:
1. Update of Primary to set the primary flag to false
2. Update of Primary when its status is set to deactivated
3. Update of Regulatory to set the regulatory flag to false
4. Update of Regulatory when its status is set to deactivated
5. Create of the New address

Now you want to maintain the order in which the addresses were written to the database. The Create must come last, or you've broken the business rule about only having one active primary or regulatory address.

With on-premise CRM I could examine the asynchronous table and see the 5 messages there. They were flagged as belonging to the same transaction, but when you looked at the processed time they were all identical. All five records are executed simultaneously, and it's a matter of chance which message gets into the queue first. There is an order, but it's not consistent. BizTalk works in a similar way to the CRM Asynchronous Service, and it's architected that way for performance reasons.

When I changed the plugin to work synchronously it did maintain the correct order of the messages. You do need to pay attention, though, to the Rank when you have multiple plugins registered on the same entity for the same stage. By default Rank is zero, but you can put any integer up to 99 into it, and this sets the order in which the plugins fire. I wanted my message to be the last plugin to execute, so I set it to 99. Remember, though, that it affects the order of the plugins within the same stage. The plugin pipeline always executes as 1. Pre-[...]
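To make the ordering argument concrete, here is a small Python sketch (my own illustration; the message names and state model are invented) showing that applying the five messages in the order CRM wrote them leaves exactly one active primary/regulatory address:

```python
def apply_message(state, msg):
    """Apply one synchronisation message to a dict of address records."""
    action, name = msg
    rec = state.setdefault(name, {"active": True, "primary": False, "regulatory": False})
    if action == "clear_primary":
        rec["primary"] = False
    elif action == "clear_regulatory":
        rec["regulatory"] = False
    elif action == "deactivate":
        rec["active"] = False
    elif action == "create_both":
        rec.update(active=True, primary=True, regulatory=True)

# The five messages from the test case, in the order CRM wrote them
messages = [
    ("clear_primary", "Primary"),
    ("deactivate", "Primary"),
    ("clear_regulatory", "Regulatory"),
    ("deactivate", "Regulatory"),
    ("create_both", "New"),
]

state = {}
for m in messages:  # synchronous plugins guarantee this order
    apply_message(state, m)

active = [n for n, r in state.items() if r["active"] and (r["primary"] or r["regulatory"])]
print(active)  # ['New'] - the business rule holds when the order is preserved
```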

Calling a WCF Web Service over HTTPS (SSL)


I was recently trying to access a web service that I wanted to secure over HTTPS. I got it working as an HTTP service, as you do, and made sure that I had a certificate on the server and enabled the https protocol. I was using basic Http binding and changed the binding security mode to Transport. That then worked in the browser if I prefixed the URL with https://.

The next step was that I needed to call the web service from somewhere I was not able to access a web.config or an app.config. Without the service reference being available, you have to do this in code. The first thing is to make sure you have a copy of the interface accessible in the client. It doesn't need to have the same name, but it does need to specify the operation contract exactly.

[ServiceContract]
public interface IFormDefinition
{
    [OperationContract]
    [FaultContract(typeof(CRMSoapFault))]
    void PublishFormMetaData(string crmEndPoint, string formId, string webResource, string token);
}

[DataContract]
public class CRMSoapFault
{
    public CRMSoapFault(string errorMsg)
    {
        this.ErrorMsg = errorMsg;
    }

    /// <summary>
    /// This property is used to pass the custom error information
    /// from service to client.
    /// </summary>
    [DataMember]
    public string ErrorMsg { get; set; }
}

To call the web service and set the binding information in code to match this, you need to add:

BasicHttpBinding myBinding = new BasicHttpBinding();
myBinding.Security.Mode = BasicHttpSecurityMode.Transport;
myBinding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None;
EndpointAddress myEndpoint = new EndpointAddress(endPointUrl);
ChannelFactory<IFormDefinition> myChannelFactory = new ChannelFactory<IFormDefinition>(myBinding, myEndpoint);
try
{
    IFormDefinition wcfClient1 = myChannelFactory.CreateChannel();
    // call the web service method
    wcfClient1.PublishFormMetaData(crmEndPoint, formId, webResource, token);
}
catch (FaultException<CRMSoapFault> faultEx)
{
}

Accessing SharePoint Online with Web Client


A misleading title really, because if you try to access SharePoint Online using the WebClient it will fail to authenticate. What you need to do is use a CookieContainer together with SharePointOnlineCredentials. I got the basics of this from this post, and also from this post, which uses a class that inherits from WebClient. It mentions that you need the SharePoint Client Components SDK, which installs Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll. Add references to both DLLs in your project and add these two using statements:

    using Microsoft.SharePoint.Client;
    using System.Security;

Add this class to your project:

    public class ClaimsWebClient : WebClient
    {
        private CookieContainer cookieContainer;

        public ClaimsWebClient(Uri host, string userName, string password)
        {
            cookieContainer = GetAuthCookies(host, userName, password);
        }

        protected override WebRequest GetWebRequest(Uri address)
        {
            WebRequest request = base.GetWebRequest(address);
            if (request is HttpWebRequest)
            {
                (request as HttpWebRequest).CookieContainer = cookieContainer;
            }
            return request;
        }

        private static CookieContainer GetAuthCookies(Uri webUri, string userName, string password)
        {
            var securePassword = new SecureString();
            foreach (var c in password) { securePassword.AppendChar(c); }
            var credentials = new SharePointOnlineCredentials(userName, securePassword);
            var authCookie = credentials.GetAuthenticationCookie(webUri);
            var cookieContainer = new CookieContainer();
            cookieContainer.SetCookies(webUri, authCookie);
            return cookieContainer;
        }
    }

Then call the ClaimsWebClient class in the same way as you would the WebClient. Note that you do not need to set the credentials, because that is done within the ClaimsWebClient class:

    ClaimsWebClient wc = new ClaimsWebClient(new Uri(sharePointSiteUrl), userName, password);
    byte[] response = wc.DownloadData(sourceUrl);

[...]

Raising SoapFaults on a Web Service


I have used SoapFaults (FaultExceptions) on web services before, so here is a quick recap. In your interface class add this declaration:

    [DataContract]
    public class SPSoapFault
    {
        public SPSoapFault(string errorMsg)
        {
            this.ErrorMsg = errorMsg;
        }

        [DataMember]
        public string ErrorMsg { get; set; }
    }

Beneath the OperationContract declaration of the method you want to use this on, add the FaultContract attribute:

    [OperationContract]
    [FaultContract(typeof(SPSoapFault))]

Now in the service you can throw FaultExceptions of this type:

    throw new FaultException<SPSoapFault>(new SPSoapFault("The byte array is null"), new FaultReason("Required parameter"));

Note that you must add a FaultReason. When calling this from a client application, make sure you declare the fault type that is in the web service references:

    SPService.SPSoapFault spfault = new SPService.SPSoapFault();

The catch block of your try/catch should include this:

    catch (FaultException<SPService.SPSoapFault> faultEx)
    {
        spfault = faultEx.Detail;
        string error = spfault.ErrorMsg;
        string reason = faultEx.Reason.ToString();
    }

[...]

Dynamics CRM 2013 Microsoft Web Page Dialog error


If you use Dynamics CRM 2013 for any length of time then you will have noticed the Microsoft dialog box that pops up intermittently.  It seems to occur randomly and is hard to reproduce.  As far as I can see it doesn't result in any data loss so I just ignore it.

But the screen is irritating and doesn't give a good impression to first time users.

It can be disabled by logging on to CRM as a System Administrator and going to Settings, Administration, Privacy Preferences. On the Privacy Preferences dialog click the Error Reporting tab, select the "Specify the Web application error notification preferences on behalf of users" checkbox, and then select the "Never send an error report to Microsoft" radio button.

The errors are still happening but at least the screen isn't popping up. 

Unit Testing Dynamics CRM Plugins


There are lots of approaches to unit testing Dynamics CRM plugins, and some frameworks exist for creating mocks.

Recently an XRM Test Framework has also been made available.

But I like the simple approach(es) outlined in this blog. It doesn't use any tools because you write the code yourself, which can be an advantage or a drawback. If you don't have time to evaluate a tool and think you can build unit tests quickly, then this is probably a good approach.

The first approach described in the blog may involve refactoring your code, but it is the best approach if you want to run unit tests during an automated build process. It involves moving most of the code out of the Execute method of the plug-in and putting it into a "logic" class library. The unit tests then simply call the class library, and you avoid having to execute the plug-in itself. OK, it may not test that the attribute filter you registered is working properly, but at least it tests your code prior to deployment and so would catch most errors. These approaches are not mutually exclusive; you would combine them to ensure the quality of the code.
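The refactoring approach can be sketched as follows. The class and method names (NameLogic, BuildFullName) are illustrative, not from the blog being discussed; the point is that the logic class has no CRM dependencies, so a plain unit test can call it without mocking IServiceProvider.

```csharp
using System;

// Business logic extracted from the plug-in into a class library
// so it can be unit tested directly.
public static class NameLogic
{
    // Pure logic: no Entity, no IPluginExecutionContext.
    public static string BuildFullName(string firstName, string lastName)
    {
        if (string.IsNullOrWhiteSpace(lastName))
            throw new ArgumentException("lastName is required");

        return string.IsNullOrWhiteSpace(firstName)
            ? lastName
            : firstName + " " + lastName;
    }
}

// The plug-in's Execute method then shrinks to reading the Target
// entity and delegating, e.g. (sketch only):
//
// var context = (IPluginExecutionContext)serviceProvider
//     .GetService(typeof(IPluginExecutionContext));
// var entity = (Entity)context.InputParameters["Target"];
// entity["fullname"] = NameLogic.BuildFullName(
//     entity.GetAttributeValue<string>("firstname"),
//     entity.GetAttributeValue<string>("lastname"));
```

The unit test project references only the logic class library, so it runs happily during an automated build with no CRM organisation in sight.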


VS 2012 and TFS 2013 Build Process Templates - How to add DefaultTemplate.11.1.xaml


Today was one of those days that forced me to blog. It was a deeply frustrating day but ended on a high and I thought to myself why hasn't anyone blogged that before?

I have a VS 2012 solution that was held in TFS 2012 and I've now moved it to TFS 2013.  I did not want to upgrade the development team to VS 2013 just at this moment. Besides, I thought VS 2012 and TFS 2013 were compatible, right?  I was trying to set up automated builds so I created a Build Controller on my build server using the TFS 2013 DVD image.

I opened my solution and selected create Build Definition and selected the Default Template (TfvcTemplate.12.xaml). I immediately saw lots of errors and a few minutes Googling revealed that you can't use this TFS 2013 template for VS 2012 projects.

The link just below the drop-down list of build process templates should direct you to where the templates are stored in Source Control Explorer. Mine showed #/1/BuildProcessTemplates/TfvcTemplate.12.xaml, and that navigated to nowhere. The link doesn't even start with $, so how was that ever going to work?

So I started trying to find the XAML files for the Build Process Template that the Build Controller was using. If they weren't in TFS then presumably they were on the hard drive. No. Then how do I add a Build Process template to the list? Hours of searching revealed nothing and this was the deeply frustrating part of the day.

Then I found this blog. The author very kindly provides the source code to create a Console application that will allow you to list the Build Process Templates you have and crucially to add a new one.  BTW, the two references that you need to add for this to work can be found in the GAC (C:\Windows\assembly\GAC_MSIL)

Now I already had the DefaultTemplate.11.1.xaml file that I needed for my VS 2012 solution because it was sitting there in the BuildProcessTemplates directory beneath the root of my project in TFS. 

I used the command line similar to this:

ManageBuildTemplates.exe http://jpricket-test:8080/tfs/TestCollection0 TestProject add  $/TestProject/BuildProcessTemplates/MyTemplate.xaml

to add my Build Process Template.  As soon as I selected a new Build Definition I could see my newly added template and I was cooking.  A few minutes later I had my first successful automated build.  From the depths of despair to deep joy in just a few minutes. 

Which of course made me think - why hasn't anybody blogged this before? And Microsoft what the hell were you thinking? Why isn't this essential tool included with TFS?  Thanks again Jason Prickett for providing the source.

Automated Builds and Incrementing Version Numbers


Incrementing the version number when you do a build in Visual Studio seems an obvious requirement. I found this great post on using a T4 template. I followed the instructions carefully by creating a common DLL library project, removing the Class1.cs and adding a file using the code that was provided. As the author points out, you just need to save the T4 template file and it will create a .cs file containing the assembly information with the appropriate build number. In my case it created an AssemblyVersion.cs file.

As advised, I removed the AssemblyVersion and AssemblyFileVersion attributes from the AssemblyInfo.cs files of all the projects that I wanted to apply this version number to. I then added the AssemblyVersion.cs file to the projects as a link, but I moved the file under the Properties folder so it appears directly under AssemblyInfo.cs. If you've not added a link before, select Add Existing Item..., select the file, and you will see the Add button in the dialog has a little down arrow which reveals the "Add As Link" option. Sure enough, when I built the solution my DLLs had the correct version number. But I realised I had to manually save the T4 file each time to refresh the generated .cs file. Since I want automated builds, I was looking for a way to process the T4 template before each build.

I finally found the answer in this blog entry. There is a TextTransform.exe that will take the T4 template and produce the .cs file. TextTransform.exe is located in:

    \Program Files\Common Files\Microsoft Shared\TextTemplating\11.0
    \Program Files (x86)\Common Files\Microsoft Shared\TextTemplating\11.0

In the pre-build event of the version project I added this (note the macro syntax is $(ProjectDir)):

    "C:\Program Files (x86)\Common Files\Microsoft Shared\TextTemplating\11.0\TextTransform.exe" "$(ProjectDir)AssemblyVersion.tt" -out "$(ProjectDir)AssemblyVersion.cs"

When I build the solution, the version number is incremented. Brilliant.

You can of course use lots of different variations for incrementing the build or revision number. But remember a version number has 4 parts: I changed the original template to modify the AssemblyFileVersion only and not the AssemblyVersion. I am still able to distinguish between the DLLs from different builds, but the all-important AssemblyVersion I control. The first deployment will be at the initial version; I can then branch the code and change the T4 template for the next release. For incrementing the revision number using a T4 template there is a good blog post here. If you want to change both the build and revision numbers, use methods declared in class feature blocks starting with <#+:

    using System.Reflection;
    [assembly: AssemblyCompany("C Hoare & Co")]
    [assembly: AssemblyVersion("")]
    [assembly: AssemblyFileVersion("1.0.<#= BuildNumber() #>.<#= RevisionNumber() #>")]
    <#+
    private int BuildNumber()
    {
        int buildNumber = (int)(DateTime.UtcNow - new DateTime(2014,7,1)).TotalDays;
        return buildNumber;
    }
    #>
    <#+
    private int RevisionNumber()
    {
        int revisionNumber = 0;
        // other code to increment the revision number
        return revisionNumber;
    }
    #>

A couple of further points:
1. Make sure there is no white space after the final #> or you will get an error.
2. If using TFS, then add a get-latest of AssemblyVersion.cs to the pre-build event and check[...]

Activation error occurred while trying to get instance of type LogWriter, key ""


I have blogged about this error before and this post has become one of my most viewed. I hope it provides the answer you need and it makes sense. Please feel free to leave a comment!

    Microsoft.Practices.ServiceLocation.ActivationException : Activation error occurred while trying to get instance of type LogWriter, key ""
    ----> Microsoft.Practices.Unity.ResolutionFailedException : Resolution of the dependency failed, type = "Microsoft.Practices.EnterpriseLibrary.Logging.LogWriter", name = "(none)".
    Exception occurred while: while resolving.
    Exception is: InvalidOperationException - The type LogWriter cannot be constructed. You must configure the container to supply this value.

It occurs when using Microsoft.Practices.EnterpriseLibrary for logging errors. There are lots of blogs that reference this error and they mostly say it is caused by:

a) an error in your configuration (often the database connection string)
b) not all the DLLs being present

That was not my situation. I was logging just to the event log, and I was certain I had all the DLLs in the bin directory of my web service. The error occurred when I deployed the web service to a new environment, and I spent many hours tracking down the cause. In this case it was because Microsoft.Practices.EnterpriseLibrary was installed on the target environment and the DLLs were registered in the GAC. Now as you know, .NET will look for DLLs in a specific order and the GAC is at the top of the list. The problem is, when the Enterprise Library DLLs are loaded from the GAC they are unable to determine the location of the configuration file. I guess that when they are just present in the bin directory the location of the configuration file is presumed to be "up a level". Removing the DLLs from the GAC was not an option, so I needed to find a way that would work with the DLLs in the bin directory or in the GAC.

I have a DLL that I use to simplify the calls when logging an error. It has several methods to create log entries of different severities. Below is the code that includes just the critical error method:

    using Microsoft.Practices.EnterpriseLibrary.Logging;

    namespace Ciber.Common.Logging
    {
        public static class Logging
        {
            public static void LogCritical(string message, string title, string category)
            {
                Logger.Write(message, category, 3, 1, TraceEventType.Critical, title);
            }
        }
    }

Now I have many places in my code where I reference the LogCritical method and I didn't want to change them. What I needed was a way to link up to the loggingConfiguration section in my web.config. Firstly I added two additional references and using statements:

    using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
    using Microsoft.Practices.EnterpriseLibrary.Logging.Configuration;

Then I added this method to my class:

    public static LogWriter CreateLogger()
    {
        EnterpriseLibraryContainer.Current = EnterpriseLibraryContainer.CreateDefaultContainer(ReadConfigSource());
        var logWriter = EnterpriseLibraryContainer.Current.GetInstance<LogWriter>();
        return logWriter;
    }

That allows me to create a default container from an XML location that I specify, which in my case contains the loggingConfiguration section. [...]
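The post is truncated before the ReadConfigSource() helper is shown. A minimal sketch of what such a helper could look like, assuming Enterprise Library 5.0, is below; the class name LoggerFactory and the use of HostingEnvironment.MapPath are my assumptions, not the original author's code.

```csharp
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using System.Web.Hosting;

public static class LoggerFactory
{
    // Sketch of a ReadConfigSource() helper. Pointing FileConfigurationSource
    // at an explicit path means the GAC'd Enterprise Library assemblies no
    // longer have to guess where the configuration file lives.
    public static IConfigurationSource ReadConfigSource()
    {
        // For a web service, resolve the application's own web.config.
        string configPath = HostingEnvironment.MapPath("~/web.config");
        return new FileConfigurationSource(configPath);
    }
}
```

FileConfigurationSource simply reads the named file as the configuration source, so the loggingConfiguration section is found whether the DLLs load from bin or from the GAC.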

BizTalk Rules Engine - Lessons Learned


If you are looking for a few hours to while away, then the BizTalk Rules Engine (BRE) will certainly occupy the time. Hopefully this post might give you some of those hours back. This post is about using the BizTalk Rules Engine within an orchestration. In my case I want to set Boolean values that can then be used in decision shapes within the orchestration; the sort of thing would be "if IsCreateCase is true then do this, else do that". A quick summary of the things I learnt:

- Write the results of the BRE rule back into the message
- Keep your rules really simple
- Find a workaround for null nodes
- Put the Call Rules shape in a scope and add an exception handler

The easiest thing is to write the BRE rule results back into the message. When you set up the Call Rules shape it asks for only two things: the name of the BRE policy and the input parameters. In my case the input is an XML message, and if you create an Action to write back to a node in the message, it will actually clone the message. For me it was a benefit to have these results within the message, as it provided a means of checking the logic was correct. I created some xs:boolean elements in my message and set the default value to false. That is key, because the target element must exist within the message. Now all I had to think about was writing the conditions that would evaluate to true.

During testing of the policy in the Business Rules Composer I soon realised that it is best to keep the rules really simple if you can. That is because when testing it displays the name of each rule that fired, which is much easier to check when the rules are simple. My next problem was around the message instances. I am comparing the values of two nodes (both called CreateCase) in the message, but in some valid message instances one of the nodes can be completely absent. In code you would of course check for the existence of the Before/CreateCase node before checking its value, because otherwise you would get an exception with the first example message. I thought that the BRE would do this if I used a condition which checked for its existence, but I just could not get that to work. No combination of logical AND and OR would do the trick. In frustration I gave up and used the map which transforms the external schema to my internal one to add the necessary elements as nulls. I blogged about creating a null node here. I used the logical existence operator ? with the logical NOT operator ! along with the Value Mapping functoid. Having a null element for CreateCase made all the difference and simplified my rules even more.

Having tested my rules thoroughly in the Business Rules Composer, I published and deployed the policy to BizTalk. In the orchestration I added the Call Rules shape within a scope and created an Exception Handler that traps errors of type PolicyExecutionException (you need to add a reference to Microsoft.RuleEngine.dll).

In the end I am very pleased with the result. One simple shape determines the logic that my message will follow through the orchestration, and if I have to amend the rules I can do so without redeploying the orchestration. Sweet. [...]

BizTalk Deployment Framework 5.5 Errors


I was working with the BizTalk Deployment Framework (BTDF) version 5.5 and I kept hitting an error when deploying. It arose in the XML pre-processor step, which creates the port bindings file from the master bindings and the environment settings file. The error was:

Invalid settings file path (OK on server undeploy)

I traced it to the SettingsFilePath parameter being blank which is created during the MSBUILD process. 

It took me a while to realise it was because the Install Wizard with 5.5 does not prompt for the location of the Settings File as it did in previous versions. It now prompts for the account name used for configuring FILE Send and Receive Ports because it will now automatically create the file locations for you and set up permissions (hooray).

To solve the "Invalid settings file path" problem I had to edit the InstallWizard.XML file and add a new SetEnvUIConfigItem section to prompt for the settings file.  You can find the XML in the BTDF documentation.  After that, my deployment worked without error.
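The section I added looked roughly like the fragment below. The element names follow the SetEnvUIConfigItem examples in the BTDF documentation, but the prompt text and variable name here are my reconstruction rather than a copy of my actual file, so verify them against the docs for your BTDF version.

```xml
<!-- Prompts the Install Wizard user to pick the environment settings file.
     ENV_SETTINGS is the variable BTDF derives SettingsFilePath from. -->
<SetEnvUIConfigItem>
  <PromptText>Select the XML file that contains configuration information specific to this environment:</PromptText>
  <PromptValue></PromptValue>
  <ValueType>FileSelect</ValueType>
  <EnvironmentVarName>ENV_SETTINGS</EnvironmentVarName>
</SetEnvUIConfigItem>
```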

Another point is that the sample .btdfproj file has changed: it specifies several PropertyGroup sections where the names of the BizTalk hosts are given. I found this didn't work, so I went back to adding the BizTalkHosts element in the ItemGroup section and included the host instance names I wanted to bounce.

Unit Tests and Microsoft Practices Logging


I created a WCF Web Service where I am logging errors using the Microsoft.Practices.EnterpriseLibrary tools for handling exceptions and logging.
When I called the web service using the WCF Test Client, all was well and I recorded the errors successfully.

But when I added some unit tests that would also generate an exception when calling the web service, I received the following error:

Microsoft.Practices.ServiceLocation.ActivationException : Activation error occured while trying to get instance of type LogWriter, key ""
 ----> Microsoft.Practices.Unity.ResolutionFailedException : Resolution of the dependency failed, type = "Microsoft.Practices.EnterpriseLibrary.Logging.LogWriter", name = "(none)".
Exception occurred while: while resolving.
Exception is: InvalidOperationException - The type LogWriter cannot be constructed. You must configure the container to supply this value.

I spent some time Googling and finally found this post which explains all.
You need to copy the relevant sections of the web.config that refer to Enterprise Library into the app.config of the Unit Test Project. 

This also applies if you have an AppSettings entry (e.g. a Dynamics CRM Connection string) as this also needs to be copied into the app.config.
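As a sketch, the app.config of the unit test project would end up containing something like the fragment below. The configSections entry and section names are the standard Enterprise Library 5.0 ones; the appSettings key name and the elided listener details are illustrative.

```xml
<configuration>
  <configSections>
    <!-- must come first, copied from the web.config -->
    <section name="loggingConfiguration"
             type="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.LoggingSettings, Microsoft.Practices.EnterpriseLibrary.Logging" />
  </configSections>

  <!-- copy the whole loggingConfiguration section verbatim from the web.config -->
  <loggingConfiguration defaultCategory="General">
    <!-- listeners, formatters and categorySources as in the service -->
  </loggingConfiguration>

  <appSettings>
    <!-- hypothetical key name: copy whatever entries the service relies on -->
    <add key="CrmConnectionString" value="..." />
  </appSettings>
</configuration>
```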


SharePoint 2013 - Word has encountered a problem trying to open the file


I experienced this error "Word has encountered a problem trying to open the file" when trying to open a Word template (dotx) from SharePoint 2013.  I should explain that I was using Word 2010 and could open the template from its location in the file system, but when I uploaded it as a content type into SharePoint 2013 I received this error when trying to use the template.

Some blogs suggested the following
a) install Office 2010 32bit instead of Office 2010 64bit
b) upgrade to SP2
c) remove the SharePoint Foundation Services as a component of Office, and then do a repair.

I tried all of that and yet it still would not open the template. Then I found a blog that advised switching off protected mode: go to File, Options, Trust Centre Settings and disable all the Protected View options, which are enabled by default. That did the trick.

My advice - try switching Protected mode off first before trying anything else.  Now I come to think of it I had seen this before and had forgotten about it.  Hence the blog.