
Nick Randolph's .NET Travels

Continually looking for the yellow brick road so I can catch me a wizard... Join me as I explore the world of cool devices, neat technology and uber geeks. I spend most of my time talking about Microsoft-related technologies such as the .NET Framework, SQ[...]


Thinking about Synchronisation in a Cloud-First World

Sun, 09 Apr 2017 17:04:00 +0600

As a mobile applications enthusiast from way back before Windows Mobile was a thing (yeh, I mean the first time), one of the challenges has always been data synchronisation. The challenge comes down to a trade-off between stale data and user experience. Simple applications don’t worry about any form of data synchronisation; instead they rely on pulling whatever data they need, when they need it. Unfortunately, these applications feel like the user is always waiting on data, or worse, seeing no data or error messages when there’s no connection. To fix what is essentially a user experience issue, application developers often look at caching, or synchronising, data so that it can be made available offline. This both improves performance, since data is read from what’s cached on the device, and means the data can be accessed when offline.

The inevitable question is then how much data should be cached, and how data should be synchronised. There is no silver bullet when it comes to identifying what data needs to be cached on the device. Some applications only cache data that the user has chosen to look at, allowing them to come back and review the data at a later point without having to request it again. Other applications will proactively cache all data related to the current user – whilst this seems like a good idea initially, as the data related to a user increases, so does the time taken to complete an initial, or even subsequent, synchronisation. In addition, the logic to retrieve all data related to a user can grow in complexity, often resulting in more data than necessary being retrieved, to ensure no data is omitted.

Data synchronisation used to be an important topic, with several attempts made by Microsoft to assist developers. For example, there was a Windows Mobile client for Merge Replication; there was the Microsoft Sync Framework; and more recently the Mobile App Service has a limited form of data synchronisation.
Unfortunately none of these methods are well supported, nor are they optimised to take advantage of some of the benefits of the cloud. Let’s look at the most recent attempt by Microsoft to provide a synchronisation framework for mobile apps – Offline Data Sync for Azure Mobile Apps. The basic premise is that the application defines a table, or a query on a table, that will be pulled into a local SQLite database. Local changes can be made to the data. Data can then be synchronised by pulling any server-side changes and pushing local changes to the server. The architecture has the mobile application connecting to a service, which then either retrieves data from the database, or submits changes to the database. Thinking about how a cloud application should scale, there are a number of issues with this architecture:

- If you look at how to scale cloud applications, having any service which connects directly to the database can be a source of bottlenecks. I’m definitely not advocating for no relational database, just a bit of separation between the service tier and the database, such that most service calls don’t block waiting for the database to be available.

- Connecting directly to services is a very slow way to retrieve data – it involves querying the database, processing the data into the format to be returned, and then returning it directly from the service instance. Compare this to retrieving the same data, pre-fetched and available via a CDN. The latter is going to be significantly faster, will cut down on bandwidth costs, and will reduce load on both services and database, since they no longer have to do work for every request for data.

- Changes submitted to the services have to be applied directly to the database – if the database is offline, or otherwise unavailable, the service call will fail. Even if the data can be written immediately, there is a latency involved which increases the execution time of the service. This means more scaling of the service and slower responses back to the mobile application.

Reading betwe[...]
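The pull/push model described above can be sketched in a few lines of plain C#. This is a minimal last-write-wins reconciliation between a local cache and records pulled from a server; the Item type and its timestamp field are illustrative assumptions, not the actual Azure Mobile Apps SDK types.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative record shape; real sync frameworks track similar metadata (id, updatedAt).
public class Item
{
    public string Id;
    public string Text;
    public DateTime UpdatedAt;
    public Item(string id, string text, DateTime updatedAt)
    {
        Id = id; Text = text; UpdatedAt = updatedAt;
    }
}

public static class SyncSketch
{
    // "Pull": merge server records into the local cache using last-write-wins –
    // the copy with the newer UpdatedAt timestamp survives.
    public static Dictionary<string, Item> Pull(
        Dictionary<string, Item> local, IEnumerable<Item> server)
    {
        var merged = new Dictionary<string, Item>(local);
        foreach (var s in server)
        {
            if (!merged.TryGetValue(s.Id, out var l) || s.UpdatedAt > l.UpdatedAt)
                merged[s.Id] = s;   // server copy is newer, or new to this device
        }
        return merged;
    }

    // "Push": local changes made since the last sync are queued to send to the server.
    public static List<Item> PendingPush(
        Dictionary<string, Item> local, DateTime lastSync) =>
        local.Values.Where(i => i.UpdatedAt > lastSync).ToList();
}
```

In the real framework the PullAsync/PushAsync calls play these roles, and conflicts surface as exceptions to be resolved rather than being silently merged.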

Screenshots for a UWP Xbox App

Thu, 23 Feb 2017 08:55:57 +0600

We’ve been doing a bit of work building out an Xbox app, and part of the fit-and-finish of any app is aligning the application layout with the designs. One of the easiest ways to do this is to take a screenshot of the app and then mark it up alongside the original designs, which are hopefully high enough fidelity to allow you to do the final round of tweaks against. On Xbox we noticed that within a UWP application the normal double-tap of the Xbox button doesn’t seem to take a screenshot. This was confirmed by a post on the forum, where the recommendation was to write code to take a screenshot. I even started down the path of using similar code to extract a screenshot, when I remembered there was a developer portal that’s exposed when you switch the device into dev mode. I figured that once I took a screenshot using code, I’d need a way to get the image off the console, and that the developer portal was the way to do this. The first thing to do is to go to Dev Home – there should be a link on the right of the home screen when your console is in developer mode. If you don’t see it but you’ve set up dev mode, it may be that your device has reverted to retail mode, in which case you’ll need to run the dev mode activation app again. In Dev Home, there is an option to “Manage Xbox Device Portal”, where you have one option, “Enable the Device Portal”. Once enabled, you’ll need to configure authorisation by specifying a username and password – write these down as you’ll need them shortly to access the dev portal. The next step is to go to a computer that’s on the same network as the Xbox (or at least a network that can reach the Xbox). Enter the IP address with the corresponding port number (the screen shown after enabling the device portal displays the IP address and port number of your console).
In most browsers (this was Edge, but similar behaviour happens in most modern browsers) you’ll see a warning that the site you’re attempting to access can’t be verified. You need to expand, in this case, Details in order to see a link that will allow you to continue on to the device portal. Now that you’re in the device portal, select the Media capture node on the left. Click “Capture Screenshot” at any point to screenshot whatever’s currently being displayed on the Xbox. You’ll also be prompted to open or save the captured image. [...]

Why String Interpolation is a BAD thing

Sun, 19 Feb 2017 06:04:22 +0600

So I’ll set the scene – I was reviewing some code and refactoring it. First thing to do was to go through and take a look at the Resharper comments, as they do a good job of tightening up sloppy code. In this instance I came across code that looked a bit like this:

var url = "";
url += queryParameters;

Of course, this can easily be refactored using string interpolation to

var url = $"{queryParameters}";

Which looks so much cleaner. Unfortunately this has actually just made my code even worse – I still have this horrible nasty string literal, now with embedded code. Argh, who does this stuff? Next step: factor the string out into a constant, or ideally a configuration file.

private const string ServiceUrlTemplate = "{0}";

and then:

var url = string.Format(Constants.ServiceUrlTemplate, queryParameters);

I’m sure there are a ton of other ways to make this nicer and perhaps more readable but having a string literal with interpolation is not one of them.
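For completeness, here’s a runnable version of the constant-plus-string.Format approach. The URL is a hypothetical stand-in, since the original literal didn’t survive the feed.

```csharp
using System;

public static class Constants
{
    // Hypothetical endpoint – the original post's URL was stripped by the feed.
    public const string ServiceUrlTemplate = "https://api.example.com/search?{0}";
}

public static class UrlBuilder
{
    // The template lives in one place (a constant or config file); callers
    // only supply the query parameters.
    public static string Build(string queryParameters) =>
        string.Format(Constants.ServiceUrlTemplate, queryParameters);
}
```

The point is that the literal now has exactly one home, rather than being scattered through the code as interpolated strings.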

Additional Note: If you’re not using Resharper or another equivalent tool, you’re not working effectively. Start using a refactoring tool today and don’t ignore the warnings. Spend time every day reviewing and improving the code you write.


Call out to the ADAL team! – Authenticate Using External Browser

Sat, 18 Feb 2017 03:33:45 +0600

In my post, Authorizing Access to Resources using Azure Active Directory, I talk about authenticating using the built-in browser on the device, rather than authenticating via a webview, which is all too common. Unfortunately, despite this being fully supported by Azure Active Directory, the team responsible for ADAL haven’t, as far as I can tell, provided support for using an external browser to authenticate.

I was super impressed when I just downloaded the Facebook app on Windows, that it supports “Log in with Browser”.


In my opinion, this not only represents a more secure form of authentication (since I can validate the website I’m signing into), it is also a better experience, since I’m already logged into Facebook in the browser anyhow.

I definitely encourage developers to consider using the external browser, rather than supporting SDKs and libraries that use the in-app browser.


Unable to Connect or Debug to Visual Studio Android Emulator with Visual Studio 2017 RC

Sun, 29 Jan 2017 11:12:42 +0600

Now I do appreciate that running prerelease software comes with some risk, and I’m also aware that emulators are hard to get working 100% right on every machine. Ever since I can remember there have been connectivity issues with Windows Mobile, Windows Phone and now Windows 10 Mobile emulators; whether that’s connectivity to the internet, to the local machine, or being able to debug an application. So, it came as no surprise that after rebuilding my computer and installing Visual Studio 2017 RC my installation of the Visual Studio Android Emulator was semi-broken. Turns out I had two issues I needed to overcome. When I attempted to launch the emulator, I got a notice saying that the Internet Connection needs to be configured – this is pretty typical for a first run, as Hyper-V needs to set up the virtual switches that the emulator image will use. After clicking Yes, the emulator is launched and my application is deployed. Unfortunately, when Visual Studio attempts to launch the application and attach the debugger, the application closes immediately. This is again an issue I’ve seen before, and in fact it appears on the troubleshooting web page for the Android Emulator:

- After you’ve run the emulator image the first time, close the Android Emulator
- Open Hyper-V Manager
- Select the virtual machine that matches the emulator you were attempting to run (make sure it’s in the Off state), and click Settings
- Under Processor –> Compatibility, check the “Migrate to a physical computer with a different processor version” checkbox
- Click OK
- Important: Make sure you stop and restart the Hyper-V service, otherwise the setting is lost the next time you run the emulator

Having done this I can now deploy and run applications on the emulator. The next issue was that for some reason the emulator couldn’t access the internet.
I took a look in the Virtual Switch Manager in Hyper-V Manager (click Virtual Switch Manager from the Actions list on the right side of the Hyper-V Manager management console) and there was only a single “Windows Phone Emulator Internal” switch. I clicked on New virtual network switch, selected External Access and gave the switch a name: Each emulator virtual machine needs to have access to both the internal and external switches, so after clicking OK to exit the Virtual Switch Manager, I clicked on the virtual machine that I want to assign the new virtual switch to. Make sure you’ve stopped the virtual machine (closing the Android emulator will do this). Click Settings, click Add Hardware and select Network Adapter and click Add. From the Virtual switch dropdown, select the virtual switch you just created (External Access in my case), and click OK. Now launch the emulator (either via the Visual Studio Android Emulator interface that can be launched from the Start menu independently of Visual Studio, or by attempting to run an application from Visual Studio) and you should now have internet access – check via the web browser in the emulator if you’re in any doubt. [...]

UseWindowsAzureActiveDirectoryBearerAuthentication vs UseJwtBearerAuthentication for Authorization with Azure Active Directory for an ASP.NET Web API

Sun, 29 Jan 2017 10:36:02 +0600

In my previous post, Securing a Web API using Azure Active Directory and OWIN, I covered how to authorize requests against Azure Active Directory using the UseWindowsAzureActiveDirectoryBearerAuthentication extension method in the OWIN startup class. This extension method has been designed specifically for Azure Active Directory, but if you think about it, the Authorization token is just a JWT, so in theory you could take a much more generic approach to authorizing access by validating the JWT. This can be done using the UseJwtBearerAuthentication extension method. There are a couple of steps to using the UseJwtBearerAuthentication extension method. Firstly, in order to validate the signature of the JWT, we’re going to need the public certificate that matches the key identifier contained in the JWT. In my post on Verifying Azure Active Directory JWT Tokens I cover how to examine the JWT in order to retrieve the kid, retrieve the openid configuration, locate the jwks uri, retrieve the keys and save out the key as a certificate. In that post I used the certificate (ie wrapping the raw key in ---BEGIN---, ---END--- markers) to validate the JWT; in this case I’ve copied the contents into a text file which I’ve named azure.cer and added it to the root of my web project (making sure the build action is set to Content so it is deployed with the website). The next thing to do is to remove the UseWindowsAzureActiveDirectoryBearerAuthentication extension method, replacing it with the following code.
var fileName = HostingEnvironment.MapPath("~/") + "azure.cer";
var cert = new X509Certificate2(fileName);
app.UseJwtBearerAuthentication(new JwtBearerAuthenticationOptions
{
    AllowedAudiences = new[] { ConfigurationManager.AppSettings["ida:Audience"] },
    IssuerSecurityTokenProviders = new IIssuerSecurityTokenProvider[]
    {
        new X509CertificateSecurityTokenProvider(ConfigurationManager.AppSettings["ida:IssuerName"], cert)
    }
});

This code uses the azure.cer certificate file combined with the Audience and IssuerName which I’ve added to the web.config. The Audience is the application id (aka client id) of the Azure application registration. The IssuerName needs to match what appears in the JWT – opening one of the tokens, it’s the iss value that you want to use as the IssuerName. Now you can run the project and see that again the requests are validated to ensure they’re correctly signed. [...]

Securing a Web API using Azure Active Directory and OWIN

Sat, 28 Jan 2017 13:24:02 +0600

In this post we’re going to look at how to use Azure Active Directory to secure a web api built using ASP.NET (full framework – we’ll come back to .NET Core in a future post). To get started I’m going to create a very vanilla web project using Visual Studio 2017. At this point VS2017 is still in RC, so you’ll get slightly different behaviour than what you’ll get using the Visual Studio 2015 templates. In actual fact the VS2015 templates seem to provide more in the way of out-of-the-box support for OWIN. I ran into issues recently when I hadn’t realised what VS2015 was adding for me behind the scenes, so in this post I’ll endeavour not to assume anything or skip any steps along the way. After creating the project, the first thing I always do is run it and make sure the project has been correctly created from the template. In the case of a web application, I also take note of the startup url, in this case http://localhost:39063/. However, at this point I also realised that I should do the rest of this post following some semblance of best practice and do everything over SSL. Luckily, recent enhancements to IIS Express make it simple to configure and support SSL with minimal fuss. In fact, all you need to do is select the web project node and press F4 (note, going to Properties in the shortcut menu brings up the main project properties pane, which is not what you’re after) to bring up the Properties window. At the bottom of the list of properties are SSL Enabled and SSL URL, which is https://localhost:44331/. Take note of this url as we’ll need it in a minute. To set up the Web API to authorize requests, I’m going to create a new application registration in Azure Active Directory. This time I need to select Web app / API from the Application Type dropdown. I’ll give it a Name (that will be shown in the Azure portal and when signing in to use this resource) and I’ll enter the SSL address as the Sign-on URL.
This URL will also be listed as one of the redirect URIs used during the sign in process. During debugging you can opt to do this over HTTP, but I would discourage this as it’s no longer required. After creating the application, take note of the Application Id of the newly created application. This is often referred to as the client id and will be used when authenticating a user for access to the web api. Application Id (aka Client Id): a07aa09e-21b9-4e86-b269-a18903b5fe54 We’re done for the moment with Azure Active Directory; let’s turn to the web application we recently created. The authorization process for in-bound requests involves extracting the Authorization header and processing the bearer token to determine if the calling party should have access to the services. In order to do this for tokens issued by Azure AD I’ll add references to both the Microsoft.Owin.Security.ActiveDirectory and Microsoft.Owin.Host.SystemWeb packages. Note: Adding these references takes a while! Make sure they’re completely finished before attempting to continue. Depending on the project template, you may, or may not, already have a Startup.cs file in your project. If you don’t, add a new item based on the OWIN Startup class template. The code for this class should be kept relatively simple:

[assembly: OwinStartup(typeof(SampleWebApp.Startup))]
namespace SampleWebApp
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureAuth(app);
        }
    }
}

Additionally, you’ll want to add another partial class file Startup.Auth.cs in the App_Start folder.

namespace SampleWebApp
{
    public partial class Startup
    {
[...]

Improving the Azure Active Directory Sign-on Experience

Sat, 28 Jan 2017 03:41:46 +0600

I was talking to a customer the other day and had to log into the Azure portal. Normally when I launch the portal I’m already signed in and I’m not prompted, but for whatever reason this time I was prompted to authenticate. Doing this in front of the customer led to three interesting discussions:

- Use of two factor authentication to secure sign in
- Separate global administrator account for primary organisation tenant
- Company branding for Azure AD sign in

Firstly, the use of two factor authentication (TFA) is a must requirement for anyone who is using the Azure portal – if you are an administrator of your organisation, please make sure you enforce this requirement for anyone accessing your tenant/directory/subscription. This applies to staff, contractors, guests etc who might be using your Azure portal or the Office 365 portal. In fact, in this day and age, I would be enforcing two factor authentication for all employees – note that Outlook and Skype for Business are still stuck in the dark ages and don’t support TFA sign in. For these you’ll need to generate an application password (click on your profile image in the top right corner and select “Profile”, click through to “Additional security verification”, click on the “app passwords” tab and then click “Create” to generate an app password). Ok, next is the use of a separate global administrator account – this is in part tied to the previous point about using TFA. If you’re a global administrator of your tenant and you enable TFA, you won’t be able to generate app passwords. This is essentially forcing you down the path of best practice, which is to have a separate account which is the global administrator for your tenant. If other people in your organisation need administrative permissions, you can do this on a user or role basis within the Azure portal – our preference is to assign permissions to a resource group, but there is enough fidelity within the portal to control access at the level you desire.
The other thing we’ve also enforced is that we do not host any Azure resources in our primary tenant. Given the importance of Office365 based services we felt it important that we isolate off any resources we create in Azure to make sure they’re completely independent of our primary tenant. The only exception to this is if we are building internal LOB applications (ie only apps for Built to Roam use) – for these we include the app registrations within the tenant so that we can restrict sign in and at the same time deliver a great sign in experience for our employees. For example we’re using Facebook Workplace – we configured this within the tenant in Azure AD to allow for a SSO experience. Now, onto the point of this post – the last thing that came out of signing into the portal in front of the customer was that they were startled when we went to sign into the portal and our company branding appeared. To illustrate, when you first land on the portal sign in page you see the standard prompt. After entering my email address, the sign in page changes to incorporate the Built to Roam branding. This not only improves the perception (for internal and external users), it also gives everyone a sense of confidence that they’re signing into a legitimate Built to Roam service. In order to set this up, you need to navigate to the Active Directory node in the Azure portal and click on Company branding. If you’re using Office 365 you should already have access to this tab. However, if you’re not, you may need to sign up for Active Directory Premium – you can get started using the Premium trial. Once you’ve opened the Company branding tab (if you have just activated the trial, you may need to wait a few minutes and/or sign out and[...]

The Danger of Admin Consent for Applications

Thu, 26 Jan 2017 06:11:38 +0600

In the last couple of posts I covered the use of admin consent to grant permissions for an application to access more than simply data related to the signed in user:

- Admin Consent for Permissions in Azure Active Directory
- Making your Azure Active Directory application Multi-tenanted

In my initial post talking about admin consent I added the “Read directory data” permission to the set of permissions that the application would request. This permission, along with a number of other permissions for the Windows Azure Active Directory resource, has a green tick in the “Requires Admin” column. It’s all too easy for developers, when setting up the permissions for an application, to not think through what enabling permissions for an application does. When you enable permissions for an application, by ticking the box alongside the permission in the Azure AD application configuration (see previous image), you are essentially saying that the application will be able to perform those actions after the user has signed in and provided consent – this is true regardless of what role the user belongs to within the organisation (the application can perform the same actions whether a receptionist or the CEO of an organisation is signed in). The division between permissions that require admin consent, versus those that do not, is usually based on whether the permission is for an action that only administrators should be able to do, versus those that any user should be able to do. For example, in the case of Windows Azure Active Directory all users should be able to sign in and read their profile, read the basic profile of all users in the directory and access the directory as the signed-in user – none of these require admin consent, as they are either actions that pertain to the signed in user, or actions that all users should be able to perform.
The last permission does permit more access control, as elements within the directory can be secured so users/roles/groups can access them; if the user is accessing the directory as themselves, they’ll only be able to access objects that they have permissions on. When I checked the box alongside the “Read directory data” permission, I was essentially saying that all users (assuming a global administrator had performed the admin consent) would be able to read all data in their organisation’s directory. In other words, once someone has signed into the application, the access token granted to the application can be used to retrieve all directory data – ummm, that’s starting to sound pretty scary. Now imagine if I granted “Read and write directory data” permissions to the application. Now the access token can read and write directory data. Put this into the context of hackers who use social engineering to access low level employees, and you can imagine it wouldn’t be hard for them to access this application in order to access employee information. The moral of this post is DO NOT enable REQUIRES ADMIN permissions for any native application (unless you’re 100% sure you know what the implications are) – it’s way too easy to intercept access tokens and act on behalf of the user. [...]

Making your Azure Active Directory application Multi-tenanted

Thu, 26 Jan 2017 05:13:44 +0600

So far in my previous posts I’ve discussed signing into an application using Azure Active Directory (Azure AD) using a basic application registration in Azure AD. Last post we added some additional permissions that required administrator consent. However, up until now, only users in the same directory (aka tenant) that the application is registered in can sign in. In the case of the sample application I’ve been working with, the application is registered to a single tenant, so only users belonging to that tenant can sign in. If I attempt to sign in with an account from a different tenant, I run into a few issues, and depending on what type of account I sign in with, the error that is displayed varies. I’ll start by signing in with a regular user account that belongs to a different tenant. When I attempt to sign in with this account, everything seems to go well – I’m prompted to sign in; I successfully sign in; I’m returned to the application where the code is exchanged for an access token. However, when I attempt to use this access token I get a rather cryptic error about an “Unsupported token”, eg: {"odata.error":{"code":"Request_BadRequest","message":{"lang":"en","value":"Unsupported token. Unable to initialize the authorization context."},"requestId":"86481ea2-79bd-461b-93ad-4f649286617a","date":"2017-01-25T21:28:14"}} This is actually less cryptic than it seems – essentially it’s saying that I’m attempting to present a token that the API can’t process. If you were to open the access token, you’d see that the token has been issued by the application’s tenant (actually you’d see the STS url that correlates to this tenant) for an account that doesn’t exist in that tenant. Whilst this is a legitimate access token, when you present it to the Graph API, it attempts to retrieve information about the user in the issuing tenant, which of course fails, since the user doesn’t exist there. Ok, let’s see what happens if I attempt to launch the admin consent prompt.
In this case, after I sign in (now using an account which is a global administrator) I get a more useful error message saying “AADSTS50020: User account … does not exist in tenant”. The reason this error message is useful is that it prompts me to think about what I’m attempting to do – I’ve been prompting the user to sign into a specific tenant, which is fine for a user that belongs to that tenant, but since I’m attempting to use a different user, this is clearly not correct. So, the first step in making my application multi-tenanted is to change the authorization url that I’m directing the user to in order to sign in, to one that is more generic and will allow signing in by users from any tenant. This involves exchanging the tenant name with “common” in the url – the following code shows how I adapted the code in my sample application to support urls that are single tenanted (ie AuthorizationUrl and AdminConsentUrl) as well as the multi-tenanted equivalent (ie MultiTenantAuthorizationUrl and MultiTenantAdminConsentUrl).

private string BaseAuthorizationUrl =>
    "{0}/oauth2/authorize?" +
    "client_id=40dba662-4c53-4154-a5cf-976473306060&" +
    "response_type=code&" +
    "redirect_uri=sample://callback&" +
    "nonce=1234&" +
    "resource=";

private string AuthorizationUrl => string.Format(BaseAuthorizationUrl, "");

private string[...]
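The tenant-versus-common switch described above can be sketched as a self-contained example. The client_id and redirect_uri come from the snippet in this post; the login.microsoftonline.com host and the Graph resource are assumptions standing in for the values that were lost from the feed.

```csharp
using System;

public static class AuthUrls
{
    // Placeholder values: substitute your own client id, redirect URI and resource.
    private const string BaseAuthorizationUrl =
        "https://login.microsoftonline.com/{0}/oauth2/authorize?" +
        "client_id=40dba662-4c53-4154-a5cf-976473306060&" +
        "response_type=code&" +
        "redirect_uri=sample://callback&" +
        "nonce=1234&" +
        "resource=https://graph.windows.net/";

    // Single tenant: only users from this directory can sign in.
    public static string ForTenant(string tenant) =>
        string.Format(BaseAuthorizationUrl, tenant);

    // Multi-tenant: swapping the tenant segment for "common" allows
    // users from any directory to sign in.
    public static string MultiTenant => ForTenant("common");
}
```

Only the tenant segment of the url changes; everything after /oauth2/authorize is identical in both forms.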

Admin Consent for Permissions in Azure Active Directory

Tue, 24 Jan 2017 19:08:14 +0600

In the previous posts I’ve discussed authenticating and authorizing a user with Azure Active Directory (Azure AD) using a basic application registration. All application registrations are given default permissions to access the Azure Graph API – this was used in my previous post to retrieve information about the signed in user. The default permission set is a delegated permission that allows the user to sign in and view their own profile. This can be viewed in the Azure portal by expanding the Required permissions tab for the application. In this post I’m going to extend this permission set to include the “Read directory data” permission. You’ll notice in the previous image that there is a green tick in the “Requires Admin” column. What this means is that in order for a regular user (ie a user that is not a global administrator for the tenant) to sign in, a global administrator must first sign in and consent to the permission on behalf of the organisation. If a regular user attempts to sign in, they’ll be confronted with an error message such as “AADSTS90093: Calling principal cannot consent due to lack of permissions”, which essentially indicates that a global administrator needs to sign in and consent on behalf of the organisation before users can sign in. If a global administrator signs in, they’ll see a prompt where they can consent permissions – this is slightly confusing, as it would imply that the administrator is consenting on behalf of the organisation. Unfortunately this is not the case; they’re only consenting for use by their account. In order for a global administrator to consent on behalf of an organisation, so that all users can make use of the “admin consent” permissions, they have to be directed to a new sign in page with the parameter “prompt=admin_consent” set in the query. In other words, the admin consent url is exactly the same as the authorization url, except it has “&prompt=admin_consent” appended to the end.
The admin consent prompt looks slightly different to a regular consent prompt, as it highlights that consent is going to be assigned for the entire organisation. As this is a one-off operation, a global administrator can either navigate to the url in the browser, or the application can have a separate button that launches the url so that the admin can consent. After the global administrator has consented, users will still be prompted to consent, but this is for the delegated permission. In the same way that a user can revoke their own permissions by deleting the application entry, organisation permissions can be revoked by opening the Enterprise applications tab for the Active Directory in the Azure portal. Select the application you want to remove and click the Delete button. After the global administrator has consented for the organisation, any user can then read the directory data (ie more than just their own profile). [...]
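The url manipulation described above is tiny, but for completeness here it is as code; the authorization url passed in is a placeholder, not a real endpoint.

```csharp
using System;

public static class ConsentUrls
{
    // The admin consent url is the authorization url with one extra
    // query parameter appended.
    public static string AdminConsent(string authorizationUrl) =>
        authorizationUrl + "&prompt=admin_consent";
}
```

An application might expose this as a separate "Grant for my organisation" button that only a global administrator is expected to use.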

Verifying Azure Active Directory JWT Tokens

Tue, 24 Jan 2017 18:02:14 +0600

When working with OAuth and OpenID Connect, there are times when you’ll want to inspect the contents of id, access or refresh tokens. The website is useful as you can paste the token into the pane on the left, and the site dynamically decodes the header, body and signature of the JWT. Unfortunately, by itself the signature on the JWT can’t be verified, as the website doesn’t know what key to use to validate the signature. The header of the JWT does provide information about the algorithm used (ie RS256) and the id of the key used, but this by itself isn’t enough to locate the key. As RS256 is a public/private key algorithm, there is a private key, which the issuer holds, and a public key, which is available for anyone to access. The former is used to generate the signature for a JWT; the latter can then be used to validate the signature. To find the public key to use to validate the signature, I’ll start with the OpenID Connect configuration document, which is available for any tenant at {tenantId}/.well-known/openid-configuration. The returned configuration document contains an attribute, jwks_uri, which points at the document listing the signing keys. Loading the jwks_uri returns another JSON document which lists a number of keys. Now we can use the kid from the header of the JWT to identify which key to use, in this case the first key in the list. Attempting to simply copy the x5c value from the list of keys into the Public Key or Certificate box on the website will still not verify the signature of the JWT. 
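Pulling the alg and kid out of a token’s header is straightforward, since a JWT is just three base64url-encoded segments. A small Python sketch (the token here is a hand-built dummy, not a real Azure AD token):

```python
import base64, json

def jwt_header(token):
    # JWTs have the shape header.payload.signature, each segment
    # base64url-encoded with the '=' padding stripped.
    seg = token.split(".")[0]
    seg += "=" * (-len(seg) % 4)   # restore the stripped padding
    return json.loads(base64.urlsafe_b64decode(seg))

# A made-up token whose header carries alg and kid, as Azure AD's do.
header = {"typ": "JWT", "alg": "RS256", "kid": "abc123"}
fake = (base64.urlsafe_b64encode(json.dumps(header).encode())
        .rstrip(b"=").decode() + ".e30.sig")
print(jwt_header(fake))  # → {'typ': 'JWT', 'alg': 'RS256', 'kid': 'abc123'}
```

The kid value recovered this way is what gets matched against the keys listed at the jwks_uri.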
In order to verify the signature, wrap the key in BEGIN and END CERTIFICATE markers as follows:

-----BEGIN CERTIFICATE-----
MIIDBTCCAe2gAwIBAgIQEsuEXXy6BbJCK3bMU6GZ/TANBgkqhkiG9w0BAQsFADAtMSswKQYDVQQDEyJhY2NvdW50cy5hY2Nlc3Njb250cm9sLndpbmRvd3MubmV0MB4XDTE2MTEyNjAwMDAwMFoXDTE4MTEyNzAwMDAwMFowLTErMCkGA1UEAxMiYWNjb3VudHMuYWNjZXNzY29udHJvbC53aW5kb3dzLm5ldDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKd6Sq5aJ/zYB8AbWpQWNn+zcnadhcMYezFvPm85NH4VQohTm+FMo3IIJl6JASPSK13m9er3jgPXZuDkdrEDHsF+QMEvqmffS2wHh3tKzasw4U0jRTYB0HSCbmnw9HpUnv/UJ0X/athO2GRmL+KA2eSGmb4+5oOQCQ+qbaRXic/RkAOLIw1z63kRneLwduQMsFNJ8FZbWkQFj3TtF5SL13P2s/0PnrqwGD59zcbDu9oHOtciu0h++YhF5CWdWEIgafcZk9m+8eY12BKamvPdBnyfpz6GVTenJQe2M+AGz5RSNshvI976VUbBiaIeNzvzaG91m62kFWLRqE3igq6D02ECAwEAAaMhMB8wHQYDVR0OBBYEFAgoZ9HLgFxH2VFGP6PGc4nFizD2MA0GCSqGSIb3DQEBCwUAA4IBAQBSFXalwSJP/jihg04oJUMV2MTbuWtuFhdrdXiIye+UNc/RX02Q9rxd46BfGeKEBflUgNfEHgyEiWTSLAOSDK70vu+ceCVQCGIQPjnGyYOpm80qAj/DNWZujVcSTTV3KZjMFsBVP7miQowfJQ58u9h8yuJHNhPpB2vOFmNhm4uZq3ve529Xt51HdtQGG9+Z9n1DhObqzkbz8xEFjA+KdfcRsZXa14ZkpAOe35VgyY0f8x34Y0LPfibWcNpfp0AhxKzyqT1GRRlKTjiBA6WNJIJIEeqh/nfOnwM0UQKRnt+2qeV3u00a5lrvJtEy7nq+s7xYtpVAsCvn5T0U1/8IHkxt
-----END CERTIFICATE-----

Entering the wrapped key into the Public Key or Certificate box on the website will successfully verify the signature of the JWT. [...]
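The wrapping step can be automated; a small Python helper for illustration (the post pastes the markers directly around the single-line value, and 64-character line wrapping is the conventional PEM layout, which verifiers also accept):

```python
import textwrap

def x5c_to_pem(x5c_value):
    # The x5c entry in a JWKS is the raw base64 DER certificate; wrapping
    # it in BEGIN/END CERTIFICATE markers turns it into a PEM certificate
    # that signature-verification tools accept.
    body = "\n".join(textwrap.wrap(x5c_value, 64))
    return f"-----BEGIN CERTIFICATE-----\n{body}\n-----END CERTIFICATE-----"

pem = x5c_to_pem("MIIDBTCCAe2gAwIBAgIQ" + "A" * 100)  # truncated dummy value
print(pem.splitlines()[0])  # → -----BEGIN CERTIFICATE-----
```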

Authorizing Access to Resources using Azure Active Directory

Tue, 24 Jan 2017 09:10:39 +0600

In my previous post I discussed authenticating a user using Azure Active Directory (Azure AD), returning an id_token that can be used to identify the user that has signed in. However, this token isn’t an access token, and as such can’t be presented in order to access remote services. In this post I’m going to show how you can request an access token that can be presented in the Authorization header when calling a service. The workflow is very similar to the workflow used to retrieve the id_token:
- User attempts to sign into an application
- Application launches the Authorize URL in an external browser (includes the “resource” parameter in the Authorize URL)
- User is directed to a Microsoft URL where they are prompted to sign in
- User signs in
- User is prompted to consent that the application can access the specified resource
- After sign in, the User is redirected back to the application via a custom protocol
- Application receives an authorization code
- Application performs a POST request to the Token URL in order to exchange the authorization code for an access token
- Application receives the access token
- Application makes a call to the remote service, attaching the access token in the Authorization header
To demonstrate this workflow I’m going to adapt the sample application I created in my previous post in order to access the Azure Graph API. The sample application is already configured with access to the Azure Graph API – this is done by default for all application registrations. In order to request access to a resource, you need to know the url of the resource you want to access – in this case, the url for the Azure Graph API. In addition to including the resource url in the authorization url, the other change I need to make is to switch the response type from id_token to code. 
The updated authorization url is:

var authorizationUrl =
    "" +
    "client_id=40dba662-4c53-4154-a5cf-976473306060&" +
    "response_type=code&" +
    "redirect_uri=sample://callback&" +
    "nonce=1234&" +
    "resource=";

Launching this url in the external browser will again prompt the user to sign in (unless they have previously signed in for this application, such as if you followed my previous post) but, rather than immediately being redirected back to the application, the user will see a consent prompt – you’ll note that, similar to the sign in prompt, the name of the application specified in Azure AD is used when requesting permissions. The other thing to note is that once you’ve consented permissions for the application, you won’t be prompted again – Azure AD remembers that you’ve granted permissions. Permissions can be revoked by selecting the application and clicking Remove (the same “Remove” option is available if you turn on the new look). After completing the sign in and consent workflow, the user is navigated back to the application using the custom protocol. This time, instead of an id_token, the application receives a code as part of the url: sample://callback/?code=AQABAAAAAADRN…….L7YiQ7PIAA&session_state=ffffd83b-3820-489e-9f35-70e97d58fd04 Unlike the id_token and, as you’ll see soon, the access_token, the code is not a JWT that you can interrogate for information about the signed in user. Instead, the next step in the process is to do the code–token exchange to retrieve the required access token. This involves doing a POST req[...]
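The code–token exchange that follows is a form-encoded POST to the Token URL. A hedged sketch of just the request body, in Python for illustration – the parameter names are the standard OAuth 2.0 authorization-code grant parameters (with Azure AD’s additional “resource” parameter), and the code, client id and urls are placeholders:

```python
from urllib.parse import urlencode

def token_request_body(code, client_id, redirect_uri, resource):
    # Standard OAuth 2.0 authorization-code exchange; Azure AD (v1)
    # additionally takes 'resource' to say which API the token is for.
    # A Native app registration needs no client secret here.
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "resource": resource,
    })

body = token_request_body(
    "AQABAAA...",                                # code from the redirect
    "40dba662-4c53-4154-a5cf-976473306060",
    "sample://callback",
    "https://graph.windows.net")
print("grant_type=authorization_code" in body)  # → True
```

The response to this POST contains the access_token (and, for this flow, a refresh_token) as JSON.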

Authenticating an Application using Azure Active Directory

Mon, 23 Jan 2017 18:38:07 +0600

In my previous post I discussed in brief the use of the OAuth Authorization Code workflow and the corresponding endpoints. As a brief recap, the workflow I’m going to walk through is the following:
- User attempts to sign into an application
- Application launches the Authorize URL in an external browser
- User is directed to a Microsoft URL where they are prompted to sign in
- User signs in
- After sign in, the User is redirected back to the application via a custom protocol
- Application receives the token containing information about the user that has signed in
In this walkthrough I’ll use a UWP application, but the workflow works well for any platform that supports custom protocols. Before I get started I’m going to need a few things:
Authorize URL – {tenantId}/oauth2/authorize
Tenant Id – (this can also be specified as a guid)
Next, I’m going to register an application with Azure Active Directory (Azure AD). You can think of this registration as identifying the application that is going to connect to Azure AD. In the Azure portal, open the Active Directory management pane and select App registrations. At the top of the app registrations pane, click the Add button. In the Create form I’ll give the application registration a name (the name is used both in the Azure portal as well as on the sign in page when a user is signing into the application). The application type needs to be set to Native – this allows the application to exchange an Authorization Code for an Access Token without having to provide a client secret. The final property is the Redirect URI, which is the URL that the browser will be directed to after the user has signed in. In this case I’m specifying a custom protocol which will be used to redirect back to the application. Once the application registration is complete I can copy the Application Id from the application pane. 
I have all the information I need in order to authenticate a user; all I need to do is form the authorization url that will be launched in the external browser:

private async void AuthenticateClick(object sender, RoutedEventArgs e)
{
    var authorizationUrl =
        "" +
        "client_id=40dba662-4c53-4154-a5cf-976473306060&" +
        "response_type=id_token&" +
        "redirect_uri=sample%3A%2F%2Fcallback&" +
        "nonce=1234";
    await Launcher.LaunchUriAsync(new Uri(authorizationUrl));
}

There are various components to the authorization url:
- The tenant where the application is registered. Alternatively use “common” for multi-tenanted applications.
- 40dba662-4c53-4154-a5cf-976473306060 – This is the Application ID (also referred to as the client ID) of the application registration in Azure AD
- id_token – This is the requested response, which in this case is a JWT that represents information about the user. When using OAuth to authorize access to a resource, either specify “code” or “code id_token”.
- sample://callback – This is the url that the browser will be redirected to after the sign in process has been completed.
- 1234 – This is an application-specific nonce which can be used to ensure the returned token matches the authentication request from the application.
In order for the user to be redirected back to the UWP application the “sample” custom protocol needs to be registered to the applicati[...]
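The nonce sent in the authorization url should come back as a claim inside the returned id_token, which is how the application ties the response to its own request. A sketch of that check, in Python for illustration, using a hand-built unsigned token (signature validation is a separate step):

```python
import base64, json

def jwt_claims(token):
    # Decode the middle (payload) segment of a JWT.
    seg = token.split(".")[1]
    seg += "=" * (-len(seg) % 4)   # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(seg))

def nonce_matches(token, expected):
    # The nonce claim in the token should equal the nonce the app sent.
    return jwt_claims(token).get("nonce") == expected

payload = (base64.urlsafe_b64encode(json.dumps({"nonce": "1234"}).encode())
           .rstrip(b"=").decode())
fake_id_token = "e30." + payload + ".sig"   # empty header / payload / dummy sig
print(nonce_matches(fake_id_token, "1234"))  # → True
```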

Azure Active Directory and Google OAuth 2.0 Endpoints

Sun, 22 Jan 2017 17:46:31 +0600

There are a lot of arguments for and against using pre-built SDKs for doing OAuth authentication with Azure AD and Google. Having worked with the ADAL library for Azure quite a bit, I think the team has done a reasonable job, especially considering it now works across the three mobile platforms (iOS, Android and Windows), and works with a PCL that is .NET Standard based. However, using any library does force you into working within the bounds of the library. For example, we recently found two shortcomings in the library:
- It doesn’t provide a good solution for doing a browser based workflow for signing in – instead it uses a webview hosted within the context of the application (depending on the platform this may be augmented with more security, for example the Web Authentication Broker). A browser based workflow involves launching the external browser for the user to sign in; upon successful sign in, the user is redirected back to the app.
- It doesn’t provide a mechanism to clear cached credentials. Whilst the tokens can be cleared, this doesn’t clear the cookies held within the hosted webview, which can lead to issues if the application is multi-tenanted.
If the provided libraries don’t align with what you want, you may have to roll your own solution. The Authorization Code workflow requires two endpoints:
- Authorize URL – this is the URL that you navigate the user to in order for them to sign into your application. After signing in, an Authorization Code is returned to the application
- Token URL – this is the URL that the application does a POST request to in order to convert the Authorization Code into an access token. 
For Azure Active Directory, these endpoints are:
Authorize – {tenantId}/oauth2/authorize
Token – {tenantId}/oauth2/token
For Google, these endpoints are:
Authorize –
Token –
As both services conform to OAuth/OpenID Connect, the parameters are the same, although there are some variations on the values that you need to supply for scope and client id. [...]
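Because both providers conform to OAuth 2.0/OpenID Connect, the same query parameters can be fed to either authorize endpoint; only the endpoint url (and the scope/client id values) change. A sketch in Python – the endpoint hosts are my assumptions based on the providers’ public documentation, since the blog’s own urls were lost in extraction:

```python
from urllib.parse import urlencode

# Assumed endpoint urls; verify against the providers' current docs.
AUTHORIZE_ENDPOINTS = {
    "azuread": "https://login.microsoftonline.com/{tenant}/oauth2/authorize",
    "google": "https://accounts.google.com/o/oauth2/v2/auth",
}

def authorize_url(provider, client_id, redirect_uri, tenant="common", **extra):
    # One builder works for both providers; provider-specific values
    # (eg Google's required 'scope') are passed through as extra params.
    base = AUTHORIZE_ENDPOINTS[provider].format(tenant=tenant)
    params = {"client_id": client_id, "response_type": "code",
              "redirect_uri": redirect_uri, **extra}
    return base + "?" + urlencode(params)

print(authorize_url("google", "my-client-id", "sample://callback",
                    scope="openid email"))
```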

Useful OAuth, OpenID Connect, Azure Active Directory and Google Authentication Links

Sun, 22 Jan 2017 12:31:46 +0600

Over the past couple of weeks I’ve been assisting with the development work of an enterprise system that uses both Azure Active Directory (Azure AD) and Google to authenticate users. It’s a cross platform solution, which means we need code that works across both authentication platforms, and the three mobile platforms. Unfortunately this is easier said than done – the Azure AD team have done a reasonable job with the ADAL library, but it’s not like we can repurpose that library for authenticating against Google. This is a tad annoying since Azure AD and Google both use OAuth and OpenID Connect, so you’d expect there to be a good library that would work across both. In trying to find a workable solution I came across a number of links that I want to bookmark here for future reference:
OAuth 2 Home – The OAuth home page is a good starting point if you want more links and information about OAuth (1 and 2), but I actually found its main use for me was to point at the OAuth 2.0 Framework RFC.
OAuth 2.0 Framework RFC – You can think of the OAuth 2.0 Framework RFC as being the specification for OAuth 2.0. There are some extensions and other standards that relate to OAuth 2.0, but this is a must read if you want to understand what OAuth 2.0 is all about. You may need to refer back to this when reading other blogs/tutorials as it can help clarify the roles and responsibilities in the process.
Simple overview of OAuth 2 – This overview provides a quick summary of the various flows for OAuth 2.0. However, I disagree with the use of the implicit workflow for mobile applications. Whilst mobile applications are not “trusted,” which would normally imply the use of the implicit workflow, the reality is that the implicit workflow can’t issue refresh tokens. 
This means that unless you want your users to have to log in each time they use your mobile application, you need to use the Authorization Code workflow (the client secret shouldn’t be required when requesting access tokens for mobile apps – this depends on which authentication provider you’re using).
OpenID Connect Home – The OpenID Connect home page is again a good starting point, as it links to the many different parts of the OpenID Connect standard. OpenID Connect builds on top of OAuth 2.0 in order to provide a mechanism for users to be authenticated as well as authorized for resource access. In addition to the creation of access tokens, OpenID Connect defines an id_token, which can be issued in the absence of any resource and is just used to identify the user that has authenticated.
OpenID Connect Core 1.0 – This is the core specification of OpenID Connect. Similar to the specification for OAuth, this is worth both a read and being used as a reference when working with OpenID Connect implementations.
OpenID Connect Session Management 1.0 – Whilst still in draft, this standard covers how implementers are supposed to handle log out scenarios, which is useful as your application can’t simply delete its access tokens when a user opts to log out. Ideally when a user logs out, you’d want to make sure cached tokens are cleared, along with invalidating any access or refresh tokens.
Google OAuth 2.0 Overview - OpenID Connect - Google’s documentatio[...]

NuGet does my head in….. No thanks to Xamarin.Forms

Tue, 06 Dec 2016 12:52:59 +0600

This is a bit of a rant with hopefully a fix that will help others. Firstly, the rant: in my post on Building Cross Platform Apps I used the new project templates in Visual Studio 2017 to create a new Xamarin.Forms application that targets iOS, Android and UWP. What I didn’t mention is the time I wasted trying to upgrade NuGet packages in order to get the thing to build and run. Namely, I get the following exception when attempting to build the newly created application:

Error: Exception while loading assemblies: System.IO.FileNotFoundException: Could not load assembly 'Xamarin.Android.Support.v7.RecyclerView, Version=, Culture=neutral, PublicKeyToken='. Perhaps it doesn't exist in the Mono for Android profile?
File name: 'Xamarin.Android.Support.v7.RecyclerView.dll'
   at Java.Interop.Tools.Cecil.DirectoryAssemblyResolver.Resolve(AssemblyNameReference reference, ReaderParameters parameters)
   at Xamarin.Android.Tasks.ResolveAssemblies.AddAssemblyReferences(DirectoryAssemblyResolver resolver, ICollection`1 assemblies, AssemblyDefinition assembly, Boolean topLevel)
   at Xamarin.Android.Tasks.ResolveAssemblies.Execute(DirectoryAssemblyResolver resolver)    App5.Droid

I figured that there was something wrong with the Xamarin.Forms and/or Xamarin Android NuGet packages, so I thought I’d just go and upgrade the Xamarin.Forms NuGet package. Note: don’t ever try to upgrade the Xamarin Android packages independently – let the Xamarin.Forms NuGet package determine which versions of those libraries you need. Anyhow, unfortunately that just generates another exception:

Error: Unable to resolve dependencies. 'Xamarin.Android.Support.Compat 24.2.1' is not compatible with 'Xamarin.Android.Support.Design 24.2.1 constraint: Xamarin.Android.Support.Compat (= 24.2.1)'. 
At this point I was starting to get annoyed – it looks like there was already an inconsistency between the Xamarin.Forms package and the Xamarin Android packages included in the template. Luckily it was easier to fix than I thought. I opened up the packages.config file for the Android head project and deleted the Xamarin.Android package entries. Then I just upgraded Xamarin.Forms to the latest NuGet package and all was good again. Lastly, I upgraded all the other NuGet packages (excluding the[...]

When I grow up I want to be a .NETStandard Library!

Tue, 06 Dec 2016 11:48:06 +0600

A month or so ago we made the decision to upgrade some of the portable libraries we use for projects from being based on PCL Profiles (eg profile 111 or profile 259) across to .NET Standard. To do this, we followed the upgrade prompt in the project properties page in Visual Studio. After clicking “Target .NET Platform Standard” you can then pick which .NET Standard you want. Where possible we try for .NETStandard 1.0, which aligns with PCL Profile 259, or .NETStandard 1.1, which aligns with PCL Profile 111. The theory being that a .NETStandard 1.0 library can be consumed by any .NETStandard library 1.0 and above, as well as any PCL profile library that is profile 259 – in other words, we’re aiming for maximum reach. Take, for example, BuildIt.General, which is a general purpose utility library. This is now a .NETStandard 1.0 library….. or so we thought. I was doing some testing on the current stable version of the library to make sure it could be added into a Xamarin.Forms application. In my previous post I showed how easily you can create a new Xamarin.Forms project that has a PCL that contains all the UI for the application, and then head projects for each target platform (iOS, Android, UWP). Before adding in BuildIt.General I upgraded all the existing NuGet references – always good practice when creating a new project (disclaimer: this sounds easier than it is due to an issue with the current Xamarin.Forms template which makes it hard to upgrade – more on this in a future post). Next I added a reference to the BuildIt.General NuGet package – I made sure it was added to all projects as it includes a UWP library that has some useful extensions, such as converters, that are specific to UWP. At this point everything was able to build and I was able to run the UWP application (I didn’t get round to running the other platforms, but I assumed that they would also work since I hadn’t really modified anything from what comes out of the box). 
I then wanted to test that I could invoke functions from the BuildIt.General library, so I added the following into the MainPage.xaml.cs file within the Xamarin.Forms PCL library:

LogHelper.Log("test");

Now, when I attempted to build the library I got the following error:

The primary reference "BuildIt.General, Version=, Culture=neutral, processorArchitecture=MSIL" could not be resolved because it was built against the ".NETPortable,Version=v5.0" framework. This is a higher version than the currently targeted framework ".NETPortable,Version=v4.5,Profile=Profile259".

This completely confused me….. what’s this reference to .NETPortable, and what’s the difference between v5.0 and v4.5? Immediately I thought that it was an issue with Visual Studio 2017, but after a bit of investigating it turns out that it’s a more fundamental issue with the way that we – actually MSBuild/Visual Studio – create the BuildIt.General library. I came across a thread discussing the same mismatch (Oren has one of the best explanations on the whole .NET Standard versus PCL Profile discussion). Anyhow, I wondered whether our library was suffering from the same ill fate. Using ILSpy I took a look at what was included as the TargetFramework and, sure enough, it’s listed as .NETPortable, v5.0. The workaround for this was listed in a changeset; in summary it requires a change to the project file to override the default generation of the Target Framework Moniker (see bold pieces in the follo[...]

Building Cross Platform Applications with Visual Studio 2017

Mon, 05 Dec 2016 10:56:01 +0600

Ok, before I jump into this, I want to point out a couple of things:
- Visual Studio 2017 is still in RC
- When you install the RC of Visual Studio 2017 and enable the options to install Xamarin, you will break any existing Xamarin support for earlier versions of Visual Studio
What I noticed the other day when I was creating a throwaway project is that there’s a new dialog when creating cross platform applications within Visual Studio 2017. In the New Project dialog, select the Cross Platform node and there is a single Cross Platform App option. Selecting this option presents you with some project templates – there are only a couple at the moment, but I’m hoping that they’ll provide more examples. Unfortunately, and it’s a little hard to see in this screenshot, for the Master Detail template you can only select Shared Projects. Without exploring the template, I can only assume it was set up this way to share UI code that has platform specific code that compiles based on conditional flags. I really like the way that you can switch between a Forms application and a Native application (and no, they don’t literally mean “native” (eg Objective-C/Swift or Java/C++); they mean traditional Xamarin where you have platform specific UI). I opted for the “Blank App (XAML)” option, which requires Forms; I selected PCL, but unfortunately the “Host in the cloud” option was then disabled. After hitting Accept, the new solution was created and I was immediately able to build and run the application across iOS, Android and Windows (UWP). Very nice. [...]

Building a Multi-Tenant Rich Client (UWP) that Connects via an Azure AD Protected WebAPI Through to an External Service

Thu, 20 Oct 2016 06:56:09 +0600

Wow, that title is a mouthful. Just reading it makes you think that this is some weird edge case that you’d never have to deal with, but this is actually quite a common scenario. Let me give you the scenario we were trying to get to work:
- The user of the rich client application (I’ll use a UWP application, but it could be an iOS, Android, WPF, or in fact an external web site) needs to retrieve some information from their Office 365 account. The required information isn’t currently available via the Graph API, so it requires the use of the Exchange Web Services (EWS) library, which is really only usable in applications written using the full .NET Framework (ie no PCL support). This means any calls to Exchange have to be proxied via a service (ie Web API) interface. Routing external calls via a service proxy is often a good practice as it makes the client application less fragile, making it possible to change the interactions with the external service without having to push out new versions of the application.
- Access to the WebAPI is protected using Azure AD. The credentials presented to the Web API will need to include appropriate permissions to connect through to Exchange using the EWS library.
- The user will need to sign into the rich client application. Their credentials will be used to access the Web API, and subsequently Exchange.
- Any user, from any tenant, should be able to sign into the rich client application and for information to be retrieved from their Exchange (ie Office 365) instance. In other words, both the rich client and Web API need to be multi-tenanted.
Single Tenanted
I’ll start by walking through creating a single tenanted version of what we’re after. For this, I’m going to borrow the basic getting started instructions from Danny Strockis’ post.
Creating the Applications
Let’s start by creating a new solution, Exchange Contacts, with a Universal Windows Platform (UWP) application; call it ExchangeContactsClient. 
Next we’ll create a new ASP.NET Web Application project; call it ExchangeContactsWeb. For the moment I’m not going to set up either Authentication or hosting in Azure. Next I need to add the Active Directory Authentication Library (ADAL) NuGet package to both the WebAPI and UWP projects. To make it easy to debug and step through the interaction between the UWP application and the WebAPI, I recommend setting Visual Studio up to start both projects when you press F5 to run. To do this, right-click the solution and select Set Startup Projects. I’d also suggest at this point running the solution to firstly make sure both applications are able to be run (ie that they were created correctly by Visual Studio) – I had to upgrade the Microsoft.NETCore.UniversalWindowsPlatform NuGet package in order for the UWP project to run. You also need to retrieve the localhost URL of the WebAPI that will be used later. Local WebAPI URL: http://localhost:8859/
Configuring Azure AD
Before we continue developing the applications we’ll step over to Azure and configure the necessary applications in Azure Active Directory (Azure AD). When you land on the Azure portal, make sure you’re working with the correct subscription – this is shown in the top right corner, under your profile information. In this case I’m going to be working with our Office 365 developer subscription (available for MSDN subscribers) called BTR Office Dev. This is associated with t[...]

Using BuildIt.States for Managing States

Sun, 18 Sep 2016 19:26:10 +0600

Almost every software development course at some point covers object-oriented programming, where the concept of encapsulation is drummed in. We’re taught to create classes that track state – a typical example is an Order that has a series of items, and then goes through various states as the order is processed. All this information is held within the Order object, including some basic functions such as calculating the total of the items being ordered. As the systems we work on grow in complexity, so does the complexity of the classes that are used to encapsulate aspects of the system. Unfortunately this often results in large, bloated classes, where it’s hard to make changes without fear of breaking other parts of the system. One common mistake that seems to creep in over time is that it becomes difficult to manage what state a class is in. This can result from either a lack of documentation about what states the class can be in, or the states themselves not being clearly defined. Take the Order example: the Order class may have properties such as OrderTimestamp and OrderId which are set when the Order is placed. If the Order is rejected, the OrderTimestamp may be reset, leaving the class in some weird state where it has an OrderId, yet no OrderTimestamp. This is one of the scenarios that BuildIt.States is designed to help with. Before we jump in, we need to set down a basic understanding of how we define states. States can appear in groups, but within a group there can only ever be one current state. For example, in the Order scenario there might be a group called TransactionStates, and the Order can only be in one of the states Unprocessed, Processing, Processed or OrderRejected at any given time. 
This leads me to the starting point of working with BuildIt.States: I’m going to define an enumeration that defines the states that my class can go between – you’ll notice this includes a Default (I often use Base instead) which BuildIt.States will use to determine if the state has been set or not.

public enum MainStates
{
    Default,
    One,
    Two,
    Three,
    Four
}

The next thing we need is our class, in this case one that inherits from NotifyBase, and raises a PropertyChanged event when the CurrentState property changes.

public class MainViewModel : NotifyBase
{
    private string currentState;

    public string CurrentState
    {
        get { return currentState; }
        set
        {
            currentState = value;
            OnPropertyChanged();
        }
    }
}

Defining the states that this class can be in is as simple as adding a reference to the BuildIt.States NuGet package and then defining each of the states:

public IStateManager StateManager { get; } = new StateManager();

public MainViewModel()
{
    StateManager
        .Group()
        .DefineState(MainStates.One)
        .ChangePropertyValue(vm => CurrentState, "State One")
        .DefineState(MainStates.Two)
        .ChangeProper[...]
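The “one current state per group” rule described above is independent of the BuildIt.States API itself. A minimal concept sketch in Python, not the library’s actual interface, with a Default value standing for “state not yet set” as in the enum above:

```python
class StateGroup:
    # One group tracks exactly one current state at a time, drawn from a
    # declared set - conceptually like the TransactionStates example.
    def __init__(self, *states, default="Default"):
        self.states = {default, *states}
        self.current = default          # Default means not yet set

    def go_to(self, state):
        if state not in self.states:
            raise ValueError(f"{state!r} is not a state in this group")
        self.current = state

order = StateGroup("Unprocessed", "Processing", "Processed", "OrderRejected")
order.go_to("Processing")
print(order.current)  # → Processing
```

Because the group owns the transition, the class can never drift into an undeclared combination the way the OrderId/OrderTimestamp example does.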

Migrating BuildIt to .NETStandard

Sun, 18 Sep 2016 18:39:31 +0600

At the beginning of the year I blogged quite a bit about a set of libraries that we use at Built to Roam that are available either from NuGet or as source on GitHub. These started a few years ago as an accumulation of helper functions for building, at the time, Windows Phone applications. Since then the libraries have matured a bit and have evolved somewhat. The BuildIt.General library is still just a collection of helper methods, whilst some, like BuildIt.Lifecycle, are more framework in nature. Over the coming weeks we’re investing a bit into tidying up these libraries, including:
- Migrating any of the libraries that were targeting PCL profile 259 across to .NET Standard 1.0
- Updating, where possible, all NuGet references
- Setting up an automated build and release process, including support for publishing directly to NuGet
- Rationalising the number of libraries on NuGet – namely, where there are platform specific libraries, these will be combined in with the core library (eg BuildIt.General.UWP will get packaged with BuildIt.General and will be installed for UWP projects)
- Adding some additional packages:
BuildIt.Media.UWP – This allows you to easily add Cortana voice commands for controlling media playback with the MediaElement control
BuildIt.Web.Config (this name may change) – This allows you to easily define application configuration values server side and have them flow to the client application
Right now, we’re going through and migrating the libraries to .NET Standard. This is a painful process as there are often NuGet references that don’t work with .NET Standard – we’re either removing these or deciding to wait on the package author to update. During this phase a lot of the packages will be marked as beta – you should be able to start using these libraries, but just be aware we haven’t completed the migration, so things may change over the coming weeks. [...]

NetStandard, what is it and why do I care?

Sun, 11 Sep 2016 04:04:36 +0600

Over the past couple of years there hasn’t been a single project that hasn’t, at some point or another, encountered NuGet pain – the pain associated with trying to upgrade packages, only to discover that you have to carefully select specific package versions due to incompatibilities, and then try to fight the nasty Visual Studio NuGet package manager interface to get them to install correctly. I’m no expert on topics like PCL Profiles, so I thought I’d spend some time researching a bit more about what’s going on, and just why NuGet seems so fragile. This interest was also driven by the recent release of the .NET Platform Standard (ie NetStandard) and the fact that I need to migrate the BuildIt libraries we use away from PCL Profiles (yay!) across to NetStandard.

As I said, I'm no guru on this, so rather than post third- or fourth-hand info, here's where I started: Oren Novotny's post entitled "Portable- is dead, long live NetStandard". Start there, and make sure you follow the links in his post as they provide some essential background reading on the topic.
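For context, a NetStandard class library at the time of writing is described by a project.json along these lines (a minimal sketch – the exact version numbers will vary with your tooling):

```json
{
  "version": "1.0.0-*",
  "dependencies": {
    "NETStandard.Library": "1.6.0"
  },
  "frameworks": {
    "netstandard1.0": {}
  }
}
```

The "netstandard1.0" target here is the lowest (most widely compatible) version of the standard, which is what the BuildIt libraries are aiming for.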


Cryptic Build Failure in Release Mode for Universal Windows Platform Application

Wed, 23 Mar 2016 11:27:30 +0600

We've just managed to solve an issue that's been plaguing us for a while: when we attempted to do a Release build (either via the Store –> Create App Package, or just by setting the build configuration to Release) we were seeing an error message similar to:

RHBIND : error RHB0011: Internal error: 'declModule == m_pLoaderModule'

This error makes no sense, and there's nothing at all helpful to go on from any of the usual logs etc. A quick search only returned two links, one of which was to a GitHub thread posted by Microsoft that talks about capturing a .NET Native repro…. not quite what I was interested in. However, further down the post there's a section entitled "Compilation Failure on Update 1" which lists almost exactly the error we were seeing. The section refers to a new feature called SharedLibrary but doesn't really talk about how to turn it on or off. It does however link to another post. Initially I didn't think that there was anything relevant in that post, since it was entitled "What's new for .NET and UWP in Win10 Tools 1.1" and starts off talking about app-locally and shared appx framework packages. It then talks about how to enable this feature…. but the other post said that SharedLibrary was on by default…. Anyhow, instead of following the post and setting UseDotNetNativeSharedAssemblyFrameworkPackage to true (ie enabling it), I figured I'd try setting it to false (ie disabling it). For example, the Release section of the UWP project file now looks like:

<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|x86'">
  <OutputPath>bin\x86\Release\</OutputPath>
  <DefineConstants>TRACE;NETFX_CORE;WINDOWS_UWP</DefineConstants>
  <Optimize>true</Optimize>
  <NoWarn>;2008</NoWarn>
  <DebugType>pdbonly</DebugType>
  <PlatformTarget>x86</PlatformTarget>
  <UseVSHostingProcess>false</UseVSHostingProcess>
  <ErrorReport>prompt</ErrorReport>
  <Prefer32Bit>true</Prefer32Bit>
  <UseDotNetNativeToolchain>true</UseDotNetNativeToolchain>
  <UseDotNetNativeSharedAssemblyFrameworkPackage>false</UseDotNetNativeSharedAssemblyFrameworkPackage>
</PropertyGroup>

Doing this fixed the Release build for this application and allows us to create a package ready for Store deployment. Hope this helps others. [...]

Updating BuildIt

Sun, 06 Mar 2016 19:03:00 +0600

It's been a while but I've just published an update to the BuildIt libraries with the aim to streamline the syntax for declaring states. For example, the following fragments are excerpted from the state declarations for the Flickr viewer sample application:

    ...
    .WhenChangedTo(vm => vm.Load())
    ...
    .OnDefaultCompleteWithData(vm => vm.SelectedPhoto)
        ((vm, d) => vm.Photo = d)


Triggering State Changes in BuildIt.States

Thu, 18 Feb 2016 08:12:19 +0600

The state management capability that BuildIt.Lifecycle uses is actually provided by a standalone library, BuildIt.States, which focusses on tracking states and allowing transitions between states. BuildIt.Lifecycle builds on this to adapt it to generating view models for specific states, along with the glue that connects states and transitions to the appropriate platform implementation (eg Pages and Navigation). The nice thing about the state management library being separate is that it can iterate independently. In this case, we've updated the library to include support for triggers.

One of the new features added to the Universal Windows Platform (UWP) for Windows 10 is state triggers. Out of the box, UWP comes with an AdaptiveTrigger class which is used to trigger a state change once a minimum width or height (or both) has been reached by a Window. The way triggers work is that they evaluate a particular condition and, once the condition has been met, the IsActive property is changed to true. Once all triggers for a particular state are set to IsActive, the framework changes the active visual state. We've adapted this concept to apply it to state management within the BuildIt.States library.

Taking the FlickrViewer application that I discussed in my post, Building a Flickr Viewer App with BuildIt.Lifecycle, I'll modify the MainViewModel to use triggers. Here you'll see that in addition to adding triggers to the state definition, we're also making use of an instance of the LoadingManager. A common scenario is to have states for when data is being loaded, and for when it has been loaded. To handle this, the state management library includes a LoadingManager class which tracks which page/viewmodel state is used for the Loading and Loaded states. The nice side effect of using the LoadingManager is that the MainViewModel doesn't explicitly need to indicate when it is or isn't loading data. Instead it simply wraps the loading operation in a using statement.
Behind the scenes this links to the LoadingTrigger instances used in the state declaration.

public enum MainStates
{
    Base,
    Loading,
    Loaded
}

LoadingManager LoadingManager { get; } = new LoadingManager()
{
    LoadingState = MainStates.Loading,
    LoadedState = MainStates.Loaded
};

public MainViewModel(IFlickrService flickr)
{
    StateManager
        .Group()
        .DefineState(MainStates.Loading)
            .AddTrigger(new LoadingTrigger(LoadingManager) { ActiveValue = MainStates.Loading })
        .DefineState(MainStates.Loaded)
            .AddTrigger(new LoadingTrigger(LoadingManager) { ActiveValue = MainStates.Loaded });

    Flickr = flickr;
}

public async Task Load()
{
    using (LoadingManager.Load())
    {
        var photos = await Flickr.LoadPhotos();
        photos.DoForEach(Photos.Add);
    }
}
[...]
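For comparison, this is how the built-in UWP AdaptiveTrigger mentioned earlier is used in XAML to switch visual states once the window reaches a minimum width (a minimal sketch – "WideState" and "MyPanel" are hypothetical names, not part of the FlickrViewer sample):

```xml
<VisualStateManager.VisualStateGroups>
  <VisualStateGroup>
    <VisualState x:Name="WideState">
      <VisualState.StateTriggers>
        <!-- IsActive becomes true once the window is at least 720 epx wide -->
        <AdaptiveTrigger MinWindowWidth="720" />
      </VisualState.StateTriggers>
      <VisualState.Setters>
        <Setter Target="MyPanel.Orientation" Value="Horizontal" />
      </VisualState.Setters>
    </VisualState>
  </VisualStateGroup>
</VisualStateManager.VisualStateGroups>
```

The LoadingTrigger in BuildIt.States follows the same pattern: when its condition is met, the associated state becomes active.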

Building a Flickr Viewer App with BuildIt.Lifecycle

Tue, 16 Feb 2016 10:14:00 +0600

Last week at the Mobile .NET User Group I presented on BuildIt.Lifecycle for the first time. The main motivation wasn't to get people to start using it (although I'd definitely welcome feedback from anyone who does); it was really to get developers to start thinking differently about the way they write applications: not in terms of the APIs that any one platform offers (and I treat XForms as just another platform, even though it in part aggregates iOS, Windows and Android), nor in terms of raw navigation constructs (eg Activities/Intents, ViewControllers or Pages), but in terms of states within their application. Applications are littered with states, whether they be the states of a button (Normal, Focussed, Pressed, ToggledOn etc) or the visual states of a Page (a Windows concept, but you can think of visual states as being the different layouts of any given page/view of your application). In fact, if you think of your application as a whole, you realise that you can map the entire navigation within your application as just a series of state transitions, with your pages/views being the states that the application can be in. This is indeed the foundation of BuildIt.Lifecycle. One other point I'll make before jumping into building the Flickr Viewer application, which should demonstrate the core of the BuildIt.Lifecycle framework, is that most application frameworks, at least for mobile platforms, have been built around the notion of a single window or frame that houses the content the user is currently looking at. As such, most frameworks have some metaphor that maps to the application as a whole (often called the App or Application class), and is assumed to be a singleton.
In exploring the use of states to model applications, it became evident that there are some platforms, such as Windows and Mac, where this assumption breaks down, and that the framework in fact needs an additional metaphor that maps to each window within an application. I've settled on Region for the time being, and there are no surprises that an instance of a Region in the framework maps to a Window presented at runtime. This should make more sense once we explore creating the application; alternatively there's a post on working with Additional Windows using Regions with BuildIt.Lifecycle. Let me start by giving an example of visual states, just to warm up and get us all thinking about using states to represent an application, or parts of an application. I'm going to start by creating a new Universal Windows Platform project, based on the Blank App (Universal Windows) project template, called FlickrViewer. Next I'll create another project, this time called FlickrViewer.Core, based on the Class Library (Portable) template, keeping the defaults in the Add Portable Class Library targets dialog. To complete the setup of the projects, I just need to add a reference from the FlickrViewer UWP project to the FlickrViewer.Core PCL project. The last references I'm going to add initially are to the Microsoft.Net.Http and Newtonsoft.Json NuGet packages (note that under the Install and Update Options the Dependency behaviour is set to Highest, and both packages are installed into both projects). I'm going to add the FlickrService class (and correspon[...]
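The post is truncated in this feed, but given the packages mentioned (Microsoft.Net.Http and Newtonsoft.Json), a minimal FlickrService might look something like the following sketch. The feed URL is Flickr's real public photo feed, but the model and member names here are assumptions for illustration, not necessarily those from the original sample:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class FlickrPhoto
{
    public string Title { get; set; }
}

public class FlickrFeed
{
    // Maps to the "items" array in the Flickr public feed JSON
    public List<FlickrPhoto> Items { get; set; }
}

public interface IFlickrService
{
    Task<IEnumerable<FlickrPhoto>> LoadPhotos();
}

public class FlickrService : IFlickrService
{
    private const string FeedUrl =
        "https://api.flickr.com/services/feeds/photos_public.gne?format=json&nojsoncallback=1";

    public async Task<IEnumerable<FlickrPhoto>> LoadPhotos()
    {
        using (var client = new HttpClient())
        {
            // Fetch the public photo feed and deserialise it with Json.NET
            var json = await client.GetStringAsync(FeedUrl);
            var feed = JsonConvert.DeserializeObject<FlickrFeed>(json);
            return feed.Items;
        }
    }
}
```

Because the service sits behind the IFlickrService interface in the PCL project, the view models can be tested with a fake implementation that never touches the network.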

Conditional Back Button Visibility with BuildIt.Lifecycle

Sat, 06 Feb 2016 04:12:13 +0600

In my post, Adding Back Button Support to Xamarin.Forms with BuildIt.Forms, I showed the basic support I’d added to BuildIt.Lifecycle to handle showing/hiding the back button depending on whether there were previous states. The back button visibility is actually more complex than that as there may be conditions on the current page (ie the current state/view model) that determine whether clicking back should be possible. To keep things simple I’m going to assume that if clicking back is possible, the back button will be shown, otherwise it will be hidden – this may not suit everyone as you may want to show the back button but have it disabled (personally I don’t like this, as it’s hard for the user to understand why the back button is disabled). I’ve just added support to the states and corresponding viewmodel management to detect if the current view model is in a blocked state.

There is an interface, IIsAbleToBeBlocked, that a view model can implement in order to both return an IsBlocked property and raise an event when it changes. This interface has also been added to the BaseViewModel class to make it easier for developers. For example, the following method can be used within a view model to toggle the visibility of the back button every second. If IsBlocked is true, the back button will be hidden.

public async void Run()
{
    for (int i = 0; i < 10; i++)
    {
        IsBlocked = !IsBlocked;
        await Task.Delay(1000);
    }
}

Note that the IsBlocked property works in conjunction with the HasHistory property on the state group (which maps to the back stack at a page level). If there is no history in the state group (ie at the first state of the application – the first page of the application), then the back button won't be shown, irrespective of the IsBlocked property.
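Based on the description above, the IIsAbleToBeBlocked interface presumably looks along these lines (a sketch; the event name is an assumption, not taken from the library source):

```csharp
using System;

public interface IIsAbleToBeBlocked
{
    // When true, back navigation should be prevented (and the back button hidden)
    bool IsBlocked { get; }

    // Raised whenever IsBlocked changes, so the framework can re-evaluate
    // the back button visibility
    event EventHandler IsBlockedChanged;
}
```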


Arriving or Leaving a ViewModel with BuildIt.Lifecycle

Wed, 03 Feb 2016 18:35:39 +0600

Often when arriving at a page/view it's necessary to invoke some code in order to load data, or refresh the contents of the page. Of course, this needs to happen within the ViewModel. For BuildIt.Lifecycle, this means implementing the IArrivingViewModelState interface, which defines a single method, Arriving, that will get invoked when the user arrives at the page/view (and the corresponding viewmodel).

public async Task Arriving()
{
    await Task.Delay(2000);
    Name += ".... arrived ....";
}

Note that the method returns a Task, so you can await asynchronous calls in order to retrieve data.

On Windows, the Universal Windows Platform has methods OnNavigatedTo, OnNavigatingFrom and OnNavigatedFrom which can be overridden in the code behind of a Page. These correspond to when a user arrives at, is about to leave, and has left a page. The Arriving method in BuildIt.Lifecycle maps to the OnNavigatedTo method. There are two more interfaces, IAboutToLeaveViewModelState and ILeavingViewModelState, which can be used to define methods AboutToLeave and Leaving. These methods map to the OnNavigatingFrom and OnNavigatedFrom methods.

One thing to note about the AboutToLeave method is that it has a CancelEventArgs as a parameter. The code in the AboutToLeave method can set the Cancel property on this parameter to true in order to cancel the navigation away from the page (note that this maps to cancelling the state change which drives the change in pages).

public async Task AboutToLeave(CancelEventArgs cancel)
{
    cancel.Cancel = true;
}
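Putting the three interfaces together, a view model that participates in all of these lifecycle moments might look like the following sketch (the Leaving signature is assumed by analogy with the snippets above):

```csharp
using System.ComponentModel;
using System.Threading.Tasks;

public class DetailsViewModel :
    IArrivingViewModelState, IAboutToLeaveViewModelState, ILeavingViewModelState
{
    // Maps to OnNavigatedTo – load or refresh data on arrival
    public async Task Arriving()
    {
        await Task.Delay(2000); // eg fetch data here
    }

    // Maps to OnNavigatingFrom – can cancel the state change
    public Task AboutToLeave(CancelEventArgs cancel)
    {
        // cancel.Cancel = true;  // uncomment to block navigating away
        return Task.CompletedTask;
    }

    // Maps to OnNavigatedFrom – clean up after leaving
    public Task Leaving()
    {
        return Task.CompletedTask;
    }
}
```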


Adding Back Button Support to Xamarin.Forms with BuildIt.Forms

Sun, 31 Jan 2016 17:54:50 +0600

In my previous post I discussed the support I added for hardware and virtual back buttons in a Universal Windows Platform (UWP) application using BuildIt.Lifecycle. At an application level, the back button (irrespective of whether it's a hardware back button, such as on a phone, or a virtual back button, such as in desktop or tablet mode) is designed to return the user to the previous state of the application. In most cases this correlates to going either to the previous page in an application, or to the previous application if the user is on the first page of the current application. Occasionally there may be sub-states within a page, in which case the back button should step back through those states before causing the application to go back to the previous page. This means that the back button should be handled at the highest level of the active window, which means passing it into the StateManager of the region that correlates to the active window.

In a UWP application this is relatively straightforward, as the application can subscribe to the BackRequested event, for example:

SystemNavigationManager.GetForCurrentView().BackRequested += BackRequested;

In the event handler, the StateManager for the region is queried to find out if a previous state exists. If it does, a call to GoBackToPreviousState is made. Unfortunately Xamarin.Forms (XForms) doesn't offer a global event for the back button. Instead it's up to every page to override the OnBackButtonPressed method and return true if the back navigation should be cancelled. Having every page of the XForms application inherit from a custom base page isn't an option I wanted to force upon users of this library. Luckily it's not necessary to intercept the OnBackButtonPressed method on every page; it can instead be intercepted in the NavigationPage at the root of the application.
By providing a CustomNavigationPage class, and requiring that it be used as the root of the application, it’s possible for the back button to be intercepted and applied to the active region. [...]
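The shape of such a page might look like the following sketch. The OnBackButtonPressed override is the real Xamarin.Forms extension point; the BackPressed event and its args type are hypothetical names standing in for however the library routes the request to the active region's StateManager:

```csharp
using System;
using Xamarin.Forms;

public class BackPressedEventArgs : EventArgs
{
    // Set to true if the back request was consumed (eg a state change occurred)
    public bool Handled { get; set; }
}

public class CustomNavigationPage : NavigationPage
{
    // Hypothetical hook for the framework to intercept back requests
    public event EventHandler<BackPressedEventArgs> BackPressed;

    public CustomNavigationPage(Page root) : base(root)
    {
    }

    protected override bool OnBackButtonPressed()
    {
        var args = new BackPressedEventArgs();
        BackPressed?.Invoke(this, args);

        // Returning true cancels the default back navigation,
        // letting the state manager drive the transition instead
        return args.Handled || base.OnBackButtonPressed();
    }
}
```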