Subscribe: Glavs Blog
http://weblogs.asp.net/pglavich/Rss.aspx
Language: English
Tags:
api  azure  cache  cacheadapter  client  code  configuration  csharpcode  glav cacheadapter  glav  google  net  redis  web 

Glavs Blog



The dotDude of .Net



 



CosmosDb and Client Performance

Tue, 06 Feb 2018 09:20:00 GMT

Introduction

CosmosDb is the "planet scale" document storage (I love that term) schema-less (NoSQL) system offered by Microsoft Azure. It offers guaranteed performance based on how much throughput you select when provisioning it, measured in "Request Units" (RUs). It aims to offer virtually unlimited scalability and throughput through the use of partitioned collections spread across a series of high performance storage hardware (SSDs). This is a pretty lofty claim from Microsoft and, like most people, you will want to perform a series of tests or due diligence to ensure you are getting what you expect. Furthermore, provisioning 10,000 request units of throughput doesn't really mean much if you have nothing to compare it to.

With that in mind, this post is specifically about measuring the performance of CosmosDb. However, it is less about the server side engine and more about the client side. This post will look at insert performance, and specifically at the client side settings that can be tweaked to increase performance. This is important, as you may be hampered by the client side in terms of performance, rather than the server side. We will target a "single instance" type scenario to keep things consistent, that is, a single instance of a client hosted in various scenarios or environments. This way, we have throughput metrics for a single unit, which we can then use when scaling out to estimate our overall throughput.

Warning: The post is quite lengthy. If you are in a real hurry or just do not want to read the full post, skip to the end.

Knowledge pre-requisites

This post assumes you know generally about CosmosDb within Azure. Bonus points if you have provisioned a CosmosDb instance and used the client library to program against it. The tests refer to particular options to use within the client library itself.

Performance testing and "feeding the beast"

If you have ever done any degree of performance testing, you will know there are a number of challenges. One of these is getting enough data or simulated load to the test system to ensure it is handling the actual load that you expect and want to measure. If you are load testing a web server and trying to simulate millions of users, or even thousands of users, the client that is simulating those users needs to be able to push that much load to the system. Constraints such as memory, CPU usage and network bandwidth mean that the maximum a single client can push to the test system is quite limited. Your server under test may look like it is handling things great, but in reality it is simply not getting the expected load or volume of requests from the client to properly test it, so your results are inaccurate.

With CosmosDb, your application is the client. In our scenario, we are using CosmosDb to serve the data needs of our application. We wanted to ensure CosmosDb actually did what was advertised, but we also wanted to know what it took to make our application efficient enough to be able to utilise the "planet scale" performance. Furthermore, if our application was limited in its ability to utilise CosmosDb, we could then limit the amount of throughput we have provisioned, thereby reducing our overall cost. CosmosDb is relatively new, and if you have ever used the client library (which is the recommended way, more details here) you will know there are quite a few knobs to turn to tune it, but it is often not clear what kind of performance advantage each one offers.
This document mentions a range of things you can do, but do we implement them all? Which ones are the most effective? Specifically, we will look at variations when using the following settings (a minimal sketch of how these are set on the client follows after this list):

  • Connection mode: standard gateway with HTTPS, or Direct mode using TCP
  • Variations on the number of concurrent tasks and the number of operations per task
  • Indexing mode used by Cosmos: Consistent or Lazy
  • Number of cores, memory and processing power
  • MaxConnections: started at the default

Settings that remained static:

  • Consistency mode: Eventual
  • Par[...]
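To make those knobs concrete, here is a minimal sketch (not the actual test harness from this post) of how the connection related settings are applied on the client, using the Microsoft.Azure.DocumentDB SDK's DocumentClient. The endpoint, key and connection limit below are placeholders, not values from the tests.

    using System;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    class CosmosClientSetup
    {
        // Creates a DocumentClient with the client-side settings discussed above.
        // Endpoint, key and MaxConnectionLimit are placeholders for illustration only.
        static DocumentClient CreateClient()
        {
            var connectionPolicy = new ConnectionPolicy
            {
                ConnectionMode = ConnectionMode.Direct,   // Direct mode rather than the HTTPS gateway
                ConnectionProtocol = Protocol.Tcp,        // TCP transport when in Direct mode
                MaxConnectionLimit = 100                  // one of the knobs worth varying in tests
            };

            return new DocumentClient(
                new Uri("https://your-account.documents.azure.com:443/"),
                "{your-auth-key}",
                connectionPolicy,
                ConsistencyLevel.Eventual);               // consistency was held static across the tests
        }
    }

Direct mode over TCP skips the HTTPS gateway hop, which is why the connection mode is one of the first settings worth varying when measuring client side throughput.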



Planning my development and growth - "The List"

Fri, 29 Dec 2017 02:16:00 GMT

Many times when speaking with colleagues and friends I get asked "What should I concentrate on from a tech perspective?". This may be to plan some professional development, to try and steer your career to where you want it to go, or simply to provide some guidance around the myriad of things to look at in the tech world. I like to do this for both personal and career goals, but for this post I'll stick to tech. The technology landscape is huge and it can be hard to focus when there are so many things that are interesting. You do not want to spread yourself "too thin" either, and tackle too many things, otherwise you will either never get to them or it will all seem overwhelming. The caveat here, of course, is that this works for me. It fits with how I personally like to work, how I like to make lists (I use Onenote) and how I like to approach my career. Doing what works for me.

What I like to do in general is this. In the months leading up to the end of the year, think about the new technologies and trends that may be important in the next year or even years. In the last month of the year, typically when some time is taken off (but even when I work through), I try to remove myself from most things technical. This usually includes blogging, lots of social media, working on personal tech projects, constant email checking and those sorts of things. A disconnect. Pretty much a "mental reset" from a tech perspective. For me, this causes me to feel mentally invigorated when I tackle the new year, ready to immerse myself into the tech world again. I will caveat this by saying I do still play my xbox quite a bit. However, no "technical thought" is really required here. Just gaming for pure release.

Coming towards the end of the year, I start collating a list (in no particular order) of things I'd like to tackle during the coming year. Technologies, courses, books and anything in between. I do mix personal stuff in here as well, for example, "Setup a vegetable garden". I revise that list as the year goes on and see how I am going. Make sure you have a way of storing these lists for later review. Again, I use Onenote as it is really easy and free flowing. It works the way I like to work and it syncs to the cloud, meaning I can always review it. It is really interesting to see the things I wanted to tackle many years ago.

Finally, do not worry if important events during the year cause this plan or list to change dramatically. It may be in the form of a career change, job change, or specific personal circumstances. Do not be afraid to remove stuff that does not apply anymore. Be open about things and allow yourself to change direction. I would hate to think I missed valuable opportunities just because I needed to do what I listed down. If need be, delete the whole list and start afresh. Whatever works. The point is, do not force it, just allow it to change and be fluid.

That last point is important. It is not a definitive list. It will change during the course of the year and be revised, in fact it should be revised. Part of this is to "be like water". Go with the flow. If you hit a roadblock, go around it and do not stress. If something you listed looks like it is becoming less relevant, less interesting or not a worthwhile goal for whatever reason, ditch it. It is your list, not anyone else's, and it is not meant to cause stress when you don't hit a goal, or to make you hit a goal just because it was listed 6-8 months ago.
To that end, I thought I'd share what I planned or listed out last year as part of my goals and direction, what I got to and what I did not get to.

"My List" - What I did accomplish

Microsoft Cognitive Services: Felt pretty good about this one as I really got stuck into it and am still in the process of building an open source fluent API around it. Building the fluent API allowed me to really learn all about it. I did a presentation or two around it as well. One of the outputs is this Fluent API.

Service Fabric: Some courses and proof of concepts. More than enough to know when, where and how to us[...]



Using Google as your identity provider with Azure API Management

Mon, 04 Dec 2017 03:40:00 GMT

The Use Case

Microsoft Azure API Management is a cloud hosted service provided by Microsoft to easily manage your API (Application Programming Interface) solutions. Azure API Management allows you to easily secure, measure, configure and provision multiple API solutions at scale, to any downstream API you create. Many companies use a combination of cloud providers to satisfy their business needs. One common provider is Google, particularly when it comes to their “G Suite” (formerly Google Apps). Productivity applications such as Google Docs and Gmail are used extensively by many companies, which means that the company's user base is already well established with that provider. This article targets a use case where a company may want to leverage the power of Azure API Management, but also take advantage of their existing identity investment in the Google platform.

Pre-Requisites

To get the full benefit of this article, it is expected that you will be somewhat familiar with the following technologies:

  • Azure API Management, including the developer portal
  • General OAuth knowledge
  • Knowledge of the Google developer console is a bonus but not necessary

OAuth all the things

From a technical perspective, we want to ensure that the consumers of our API have a valid Google account, that is, they are authenticated Google users. To do that, we want to ensure that any consumer of the API provides us with a valid OAuth token as issued by Google. We will rely on Google as the identity provider to authenticate a user and provide a token in the form of a JWT – JSON Web Token. This means that as part of the API request, the consumer will need to provide an ‘Authorization’ header with a valid ‘Bearer’ token in the form:

    Authorization: Bearer {OAuth_token}

We will then configure the API Management instance to verify that the token is valid by asking Google to validate the passed in bearer token.

The Plan

To provide a seamless experience within the Azure API Management system when integrating with Google, we are going to do the following:

  • Set up application credentials within the Google developer console to identify our API application.
  • Configure the developer portal aspect of Azure API Management to ensure that when developers use the portal to explore or test the API, the integration with Google for token generation and validation is seamless.
  • Configure Azure API Management to validate that all incoming requests have an OAuth token, and that Google itself verifies the token.

Setting up

To begin this process, you need to go to the Google developer console for APIs and services to define a set of credentials that can be used. This is detailed in the following steps:

  • Go to the Google developers API console located at https://console.developers.google.com/
  • If you don't have a project already, Google will prompt you to create one.
  • Enable 2 sets of APIs: Google+ and the Contacts API. Note: Not 100% sure if you need both at this point.
  • Create a set of credentials for an application. Click on Credentials - Create Credentials - OAuth Client Id.
  • Select Web application and give it a name. You can place some redirect Uris here, but we can do that later.
  • Once created, you are presented with a client id and client secret. You will need these later, so store them.

Configuring Google within Azure API Management

There are 2 aspects of Azure API Management that can be set up for integration with Google. One is the actual token verification for API consumers, and the second is the developer portal component of Azure API Management.
The developer portal is where API consumers can register to receive subscription keys for use with the API, which are used to track usage metrics against the API. The developer portal first authenticates users, and additionally allows them to test the API itself from within the portal. Setting up the developer portal has a few more steps involved, so we will concentrate on that part first. For th[...]
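For the consumer side of this, here is a hedged sketch of what a request to the managed API looks like once the Google-issued token and an API Management subscription key are in hand. The host, path and key below are placeholders, not values from a real instance.

    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class ApiConsumer
    {
        // Calls an API fronted by Azure API Management, passing the Google-issued OAuth token
        // as a Bearer token plus the API Management subscription key. Host, path and key are placeholders.
        static async Task<string> CallApiAsync(string googleOAuthToken)
        {
            using (var client = new HttpClient())
            {
                client.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Bearer", googleOAuthToken);
                client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{your-subscription-key}");

                var response = await client.GetAsync("https://{your-apim-instance}.azure-api.net/your-api/resource");
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();
            }
        }
    }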



The Consumerisation of Artificial Intelligence

Wed, 29 Nov 2017 10:10:00 GMT

We live in very exciting times. Computing capabilities have increased exponentially in the last two centuries to allow us to easily utilise highly complex artificial intelligence functionality today. It all began with the first mechanical computer, designed by Charles Babbage in the early 19th century. The concepts and work that resulted from that were iterated upon, which enabled the invention of the first programmable digital computer in the 1940s. Fast forward 70 years, and Artificial Intelligence is evolving into something that people can easily use, without having a degree in mathematics, access to enormous amounts of processing hardware or access to enormous amounts of data.

Where we are now

There is still a long way to go, however progress is being made exponentially faster than ever before. Instead of being able to solve just mathematical problems, we can now detect emotion, perform facial recognition, identify landmarks or celebrities in images, understand spoken language and provide textual descriptions of images. More importantly though, these advanced services are now available to a broad general audience. Utilising cloud technology to provide these complex services, we are now at the tipping point where vast numbers of people are empowered to create solutions or applications that leverage this power. The vehicle for this is predominantly (but not limited to) the exposure of Application Programming Interfaces (APIs) for consumption by integrators or solution providers. The simplicity of these APIs lowers the barrier to operations that were previously complex and hard to leverage.

Offerings

There are quite a few vendors providing offerings in this space: Google, Microsoft, AWS and Salesforce, to name a few. All provide different levels and types of usage of these advanced services. Microsoft is well positioned here with its Azure cloud and Cognitive Services technology stack. The remainder of this article will concentrate on the Microsoft offerings.

It is not all easy

While high level, advanced artificial intelligence services are available for use today, the more involved technology, such as machine learning, is what is used to build these consumer-friendly services. To show the spectrum on offer, the following image shows the broad Microsoft suite, known as the Microsoft AI Platform, with the least complex on the left moving to more complex towards the right.

Cognitive Services

A set of APIs built using pre-trained models and example data, with easy to consume interfaces. The APIs allow developers to build more intelligence into their applications with little to no knowledge of how to build specific machine learning or artificial intelligence solutions.

Machine learning and bots

This features a toolset called “Cortana Intelligence Suite” to allow a relatively easy interface for the building and training of machine learning models. In addition, a “Bot Framework” is also available that developers can leverage to build applications with intelligent bots. These tools are a little more advanced than the Cognitive Services in that a higher degree of investment and development effort is required.

Cognitive Toolkit

This provides a framework to access the lower levels of machine learning through neural networks. It is predominantly script driven and very complex. This toolset provides the basis from which Cognitive Services, the Cortana Intelligence Suite and the Bot Framework are built. Large amounts of sample data and a good understanding of the mathematics behind machine learning are required.
As such, the barrier to entry is typically quite high.

An example

Now that you have an idea of the scope of what is provided, we will show an actual example of how easy it is to consume the powerful services that Cognitive Services has to offer.

Setup

To utilise Microsoft Cognitive Services, you must first provision the service you wish to use in Azure. To do that you simply go into the Azure portal, and add a specific Co[...]
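As a taste of what that consumption looks like in code, here is a hedged sketch of calling one Cognitive Services endpoint (Computer Vision's 'describe image' operation) with a plain HttpClient. The region in the endpoint, the API version and the key are placeholders; your provisioned service dictates the actual values.

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class VisionExample
    {
        // Asks the Computer Vision 'describe' operation for a caption of an image by URL.
        // The region in the endpoint and the subscription key are placeholders.
        static async Task<string> DescribeImageAsync(string imageUrl)
        {
            using (var client = new HttpClient())
            {
                client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{your-cognitive-services-key}");

                var body = new StringContent("{\"url\":\"" + imageUrl + "\"}", Encoding.UTF8, "application/json");
                var response = await client.PostAsync(
                    "https://westus.api.cognitive.microsoft.com/vision/v1.0/describe", body);

                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync(); // JSON containing captions and confidence scores
            }
        }
    }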



"Easy Auth" / App Service authentication using multiple providers

Wed, 07 Dec 2016 04:34:00 GMT

App Service authentication is a feature in Microsoft Azure that allows extremely easy setup of authentication using either:

  • Active Directory
  • Google
  • Facebook
  • Twitter
  • Microsoft Account (MSA)

It is often referred to as "Easy Auth" because of how easy it is to set up and integrate into your app. Effectively, no code is required (at least for authentication).

Let's say you have developed a web site and are deploying it to Azure as an App Service. Without doing anything in code, you can enable one of these forms of authentication from within the Azure portal, using the "Authentication/Authorization" option in the list of settings associated with an App Service. Any anonymous requests that come through are automatically caught by the pipeline that Azure injects, and an authentication process with the configured provider is performed. For example, if Google is specified and configured, then when an un-authenticated request comes in, the user is redirected to Google for authentication, then redirected back to the site once this is successful.

This works great but has a few shortcomings:

  • You can only specify one provider to use. For example, you can use Active Directory, or Google, but not both.
  • Local development. You cannot enable this option locally, so how do you develop locally with security enabled?

Problem 1 - Using multiple providers

While this is not supported out of the box, you can make it work. In order to do this, you will need to do the following:

  • Ensure that the "Action to take when request is not authenticated" option in the Azure portal is set to "Allow anonymous requests".
  • Ensure your application requires authenticated users with a redirect to a login page. There are a few ways to do this. As an example, you can include the "Microsoft.Owin.Security" nuget package and place the following code in your Owin startup class:

        public void Configuration(IAppBuilder app)
        {
            app.UseCookieAuthentication(new CookieAuthenticationOptions
            {
                AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
                LoginPath = new PathString("/Login")
            });
        }

  • Provide a login page with options to the user about which provider to use when authenticating. This page should not require authentication.

These options should link directly to each provider instead of allowing Azure to automatically redirect. The way to do that is by using the following link format:

    https://{your-host}/.auth/login/{provider}

The available providers are "aad", "google", "facebook" and "microsoftaccount". As an example, let's say we have created an App Service that is hosted at http://awesomeapp.azurewebsites.net. The Url for the 'Login with Active Directory' button would be http://awesomeapp.azurewebsites.net/.auth/login/aad and for Google it would be http://awesomeapp.azurewebsites.net/.auth/login/google

Further to this, we need to include an extra parameter to instruct where the process should redirect to after successful authentication. We can do this with the 'post_login_redirect_uri' parameter. Without this, the process will redirect to a default 'Authentication Successful' page with a link to go back to the site. So the final Url (in the case of Active Directory) will look like:

    http://awesomeapp.azurewebsites.net/.auth/login/aad?post_login_redirect_uri=/

With the two options now linking to each provider's login option, and with a post login redirect in place, we are good to go.
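As a small illustration, the login links above can be generated with a helper like the following. This is just a sketch; the host is the example one used above and the default redirect target is an assumption.

    static class EasyAuthLinks
    {
        // provider is one of "aad", "google", "facebook" or "microsoftaccount"
        public static string BuildLoginUrl(string provider, string postLoginRedirectUri = "/")
        {
            return string.Format(
                "http://awesomeapp.azurewebsites.net/.auth/login/{0}?post_login_redirect_uri={1}",
                provider, postLoginRedirectUri);
        }
    }

    // e.g. EasyAuthLinks.BuildLoginUrl("aad")
    //      -> http://awesomeapp.azurewebsites.net/.auth/login/aad?post_login_redirect_uri=/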
Problem 2 - Local Development

Obviously, when developing locally, we are not hosting in Azure and so cannot take advantage of the "Easy Auth" functionality within the Azure portal. In order to get around this, you can provide a further opt[...]



CacheAdapter 4.2.0 Released

Fri, 28 Oct 2016 05:26:00 GMT

The latest version of cache adapter has now been released and is available via Nuget, or you can download the source code here.

Note: For those who don’t know what cache adapter is, you can check out the wiki here or you can look at some previous blog posts here, which also have links to other posts.

The nuget packages for the core component and the package with all configuration and code examples are:

  • Glav.CacheAdapter.Core - binary only ( http://www.nuget.org/packages/Glav.CacheAdapter.Core/ )
  • Glav.CacheAdapter - binary, configuration examples, code examples. ( http://www.nuget.org/packages/Glav.CacheAdapter/ )
  • Bitbucket code repository - complete source code for the CacheAdapter ( https://bitbucket.org/glav/cacheadapter )

What is in this release

This is only a relatively minor release but does contain some useful additions, as well as addressing some outstanding issues. The items addressed in this release are:

  • Addition of a licence file (Issue #49)
  • Fixing a bug where multiple configuration instances are created (Issue #40)
  • Fluent configuration (Issue #50)

Fluent Configuration

Previously, configuration of the cache adapter in code was a little tedious. Now, however, you can do this:

    using Glav.CacheAdapter.Helpers;

    var provider = CacheConfig.Create()
        .UseMemcachedCache()
        .UsingDistributedServerNode("127.0.0.1")
        .BuildCacheProviderWithTraceLogging();

    provider.Get("cacheKey", DateTime.Now.AddHours(1), () =>
    {
        return GetMyDataFromTheStore();
    });

Breaking Change: Normalised configuration

There were previously two ways of expressing configuration in the configuration file. Due to a configuration system overhaul, this has now been changed to support only one. Specifically, the configuration section option has been removed and now only the one method of configuration is supported. So if you had something like:

    memcached localhost:11211

that should now be changed to the equivalent settings in the supported form.

That is about it for the changes. Hope you find them useful. Feel free to let me know any suggestions or even perhaps create a pull request for any features you think are valuable additions.



Website performance – know what’s coming

Mon, 29 Aug 2016 08:02:12 GMT

If you live in Australia (and perhaps even outside of Oz), there has been a lot of attention on the Australian Bureau of Statistics (ABS) in regards to the Census 2016 website and its subsequent failure to adequately handle the peak usage times it was supposedly designed for. It was also reported that a Denial of Service attack was launched against the site in order to cause an outage, which obviously worked. One can argue where the fault lay, and no doubt there are blame games being played out right now. IBM were tasked with the delivery of the website and they used a company called "RevolutionIT" to perform load testing. It is my opinion that while RevolutionIT performed the load testing, IBM is indeed the one who should wear all the blame. Load testing and performance testing simply provide metrics for a system under given situations. IBM would need to analyse those metrics to ensure that aspects of the system are performing as expected. This is not just a one off task either. Ideally it should be run repeatedly to ensure changes being applied are having the desired effect.

Typically, a load test run would be executed against a single instance of the website to ascertain the baseline performance for a single unit of infrastructure, with a "close to production version" of the data store it was using. Once this single unit of measure is found, it is a relatively easy exercise to extrapolate how much infrastructure is required to handle a given load. More testing can then be performed with more machines to validate this claim. This is a very simplistic view of the situation and there are far more variables to consider, but in essence: baseline your performance, then iterate upon that.

Infrastructure is just one part of that picture though. A very important one of course, but not the only one. Care must be taken to ensure the application is designed and architected in such a way as to make it resilient to failure, in addition to performing well.

Take a look at the graph below. Note that this is not actual traffic on census night but rather my interpretation of what may have been a factor. The orange bars represent what traffic was expected during that day, with the blue representing what traffic actually occurred on the day. Again, purely fictional in terms of actual values but not too far from what probably occurred. At popular times of the day convenient for the Australian public, people tried to access the site and fill it in.

A naïve expectation is to think that people will be nice net citizens and plod along, happy to evenly distribute themselves across the day, with mild peaks. A more realistic expectation is akin to driving in heavy traffic. People don’t want to go slower and play nice, they want to go faster. See a gap in the traffic? Zoom in, cut others off and speed down the road to your advantage. This is the same as hitting F5 in your browser attempting to get the site to load. Going too slowly? Hit F5 again and again. Your problem is now even worse than the estimated expectations, as each person can triple the number of requests they make.

To avoid these situations, you need to have a good understanding of your customers' usage habits. Know or learn the typical habits of your customers to ensure you get a clear view of how your system will be used. Will it be used first thing in the morning while people are drinking their first coffee? Maybe there will be multiple peak times during morning, lunch and the evening?
Perhaps the system will remain relatively unused on most days except Friday to Sunday? In a performance testing scenario, especially in a situation similar to the Census where you know in advance you are going to get a lot of sustained traffic at particular times, you need to plan for the worst. If you have some degree of metrics around what the load might be, ensure your systems can handle far more than expected. At the very least, ensure that should[...]
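As a tiny illustration of the "baseline a single instance, then extrapolate" idea above, the arithmetic looks something like the sketch below. Every number in it is invented purely for illustration and has nothing to do with the actual Census site.

    static class CapacityEstimate
    {
        // Extrapolates how many identical instances are needed to cover an expected peak,
        // planning for headroom above that peak rather than the average.
        public static int InstancesRequired(double baselineRequestsPerSecond,
                                            double expectedPeakRequestsPerSecond,
                                            double headroomFactor = 2.0)
        {
            return (int)System.Math.Ceiling(
                (expectedPeakRequestsPerSecond * headroomFactor) / baselineRequestsPerSecond);
        }
    }

    // e.g. a single instance baselined at 500 req/s, an expected peak of 4,000 req/s and 2x headroom:
    //      CapacityEstimate.InstancesRequired(500, 4000) -> 16 instances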



Cache Adapter 4.1 Released

Thu, 03 Mar 2016 07:28:26 GMT

The latest version of cache adapter has now been released and is available via Nuget, or you can download the source code here.

Note: For those who don’t know what cache adapter is, you can check out the wiki here or you can look at some previous blog posts here, which also have links to other posts.

The nuget packages for the core component and the package with all configuration and code examples are:

  • Glav.CacheAdapter.Core - binary only ( http://www.nuget.org/packages/Glav.CacheAdapter.Core/ )
  • Glav.CacheAdapter - binary, configuration examples, code examples. ( http://www.nuget.org/packages/Glav.CacheAdapter/ )
  • Bitbucket code repository - complete source code for the CacheAdapter ( https://bitbucket.org/glav/cacheadapter )

Please note: This release would not have been possible without contributions from the community, most notably Katrash. Your contributions are much appreciated even if my response and eventual action is somewhat slow.

Updated: I made an error with the Glav.CacheAdapter.Core 4.1.0 package, which included the readme, config transforms etc. This has now been fixed with the Nuget packages 4.1.1. There is no assembly change in these packages; the assembly remains at 4.1.0.

What is in this release

This is only a relatively minor release but does contain some useful additions, as well as addressing some outstanding issues. The items addressed in this release are:

  • Targeting .Net 4.5
  • Async method support
  • Configurable logging

Targeting .Net 4.5

The CacheAdapter nuget package now only targets .Net 4.5 to allow the removal of the Microsoft.Bcl packages from the dependency list, which caused package bloat and was a much requested change from many users. These packages were only required when targeting versions of .Net prior to .Net 4.5, as the StackExchange.Redis package had these listed as dependencies. Glav.CacheAdapter depends upon StackExchange.Redis for its Redis support.

Async Method Support

Async method support is now available for all ‘Get’ CacheProvider operations. For all ‘Get’ operations that also implicitly add data to the cache when a cache miss occurs, there are now async equivalents of these methods that return a ‘Task’. So if previously you did something like:

    var data = cacheProvider.Get(cacheKey, DateTime.Now.AddDays(1), () =>
    {
        return dataStore.PerformQuery();
    });

you can now use the async form of that method that returns a task. Using the above example, this would now become:

    var data = await cacheProvider.GetAsync(cacheKey, DateTime.Now.AddDays(1), () =>
    {
        return dataStore.PerformQueryAsync();
    });

Configurable logging

You can now disable/enable logging, log only errors, or log informational items (which also includes errors). Previously this was left up to consumers to implement their own ILogging interface to allow whatever degree of configurability they wanted. As of this release, you get the ability to perform basic configuration on the included logging component. This is specified in configuration (like everything else), in the form:

  – This logs everything and is the default. This is how prior versions operated.
  – As the name suggests, only error information is logged.
  – Nothing is logged.

That is about it for the changes. Hope you find them useful. Feel free to let me know any suggestions or even perhaps create a pull request for any features you think are valuable additions.



Getting Docker running on my windows system

Thu, 10 Dec 2015 23:10:10 GMT

Intro

This post is about some of the issues I had installing the latest Docker toolbox, and how I went about solving them to finally get Docker working on my Windows system. For those not familiar with Docker and what it is/does, I suggest going here and reading up on it a bit. For the record, I have a fairly standard Windows 10 laptop, which was upgraded from Windows 8.1. A Gigabyte P34v2 with 16Gb of memory and a 256Gb SSD. Nothing special.

Installing Docker

Twitter kindly informed me of a great blog post by Scott Hanselman around "Brainstorming development workflows with Docker, Kitematic, VirtualBox, Azure, ASP.NET, and Visual Studio", so I decided to follow the steps and give it a shot. I started following the steps involved, although I initially missed the part about disabling Hyper-V. So disable Hyper-V and then reboot. If you are doing Cordova/Ionic development and using some of the emulators accompanying Visual Studio that require Hyper-V, this may be somewhat inconvenient for you. Everything seemed to initially install fine. Docker installed all of its components, including VirtualBox. The next step is to double click the 'Docker Quickstart terminal' to ensure everything is installed as expected.

Problem 1: Docker virtual machine will not start

The Docker terminal starts up and begins setting up the environment (creating SSH keys etc), creating the virtual machine within VirtualBox and starting that machine. However, the virtual machine simply would not start, and the Docker terminal reported the error and stopped. I loaded up Kitematic, which is the other utility application that the Docker toolbox installs, to see if that could help. It has an option to delete and re-create the VM. So I went and did that, but to no avail. The VM gets deleted and recreated, but will not start. I tried uninstalling and re-installing the Docker toolbox, realised the VM remains in VirtualBox, so deleted that VM manually (it was named ‘default’), then un-installed and re-installed again, but unfortunately no go.

I loaded VirtualBox and tried to start the machine manually but no go. A dialog was shown with a 'Details' button which revealed the following error:

    Failed to open/create the internal network 'HostInterfaceNetworking-VirtualBox Host-Only Ethernet Adapter' (VERR_INTNET_FLT_IF_NOT_FOUND).

Hmm, not that informative, but plonking the error code into google/bing revealed many posts of others having similar issues. It seems that the issue was within VirtualBox itself (note: I had version 5.0.10) and that any virtual machine that had a ‘Host-only’ network defined simply will not start. When the Docker toolbox is installed, it creates a virtual machine (typically named ‘default’) and also sets up a Host-only network adapter to allow the client tools to communicate with the VM. In addition, un-installing and re-installing the Docker tools can create multiple instances of the Host-only adapter. I tried using a few variants but to no avail. It simply would not start.

It seemed it was some networking issue with the VM setup. I had originally thought it was related to bridged connections on my Windows box. I had tweeted as much and Scott Hanselman suggested I run:

    netcfg -d

which performs a cleanup of your network devices. I did that and not much changed. I did it again and it wiped out every network connection I had. Oh yes, wireless, ethernet, all gone. I deleted the adapter from device manager and let Windows rediscover my network devices. Small tip: Don’t do this.
Obviously Scott was trying to be helpful and I was blindly trying to get things working. After much searching I stumbled upon this thread, which describes the problem I was having. Essentially, when VirtualBox is installed under Windows 10 (I don’t know if this happens with all Windows 10 instances but it definitely happens to some, also to some Windows 7 or 8 instances[...]



CacheAdapter 4.0 released with Redis support.

Thu, 19 Mar 2015 23:31:00 GMT

My 'Glav.CacheAdapter' package has recently had a major release and is now at version 4.0. If you are not familiar with what this package does, you can look at previous posts on the subject here and here. In a nutshell, you can program against a cache interface and, via configuration, switch between ASP.Net web cache, memory cache, Windows Azure Appfabric, memcached and now redis.

The nuget packages for the core component and the package with all configuration and code examples are:

  • Glav.CacheAdapter.Core - binary only
  • Glav.CacheAdapter - binary, configuration examples, code examples
  • Bitbucket code repository - complete source code for the CacheAdapter

The major new feature in this release is that we now fully support the Redis data store as a cache engine. Redis is an excellent data store mechanism and is used as a cache store in many circumstances. Windows Azure has recently adopted redis as its preferred cache mechanism, deprecating the Windows Azure Appfabric cache mechanism it previously supported. In addition, there are significant performance improvements when managing cache dependencies, some squashed bugs and a method addition to the interface. Read on for the details.

Redis support

On premise and Azure hosted redis support is the major feature of this release. You can enable this in configuration and instantly be using redis as your cache implementation. Redis has a much richer storage and query mechanism than memcached does, and this enables the library to use redis specific functionality to ensure cache dependency management takes advantage of this enhanced feature set. To use redis, simply change the relevant values in the configuration. Obviously, you also need to point the cache provider to the redis instance to use. In the example below, I am pointing to my test instance in Windows Azure, and specifying the port number (which happens to be the SSL port used). There are some redis specific settings we need to set before connecting to an Azure redis instance. We need to set whether we are using SSL (in this case yes) and we also need to set the password. If you are using a local instance of redis you may not be required to set the password. For redis in Azure, the password is one of your access keys, which you can get from the Azure portal. Additionally, it is recommended that the 'Abort on connect' flag is set to false. In this example, I have also set the connection timeout to 15 seconds. Behind the scenes, the cache adapter relies on the StackExchange.Redis client, so any configuration options it supports are supported here. However, do not specify the host in the CacheSpecificData section, as it will be ignored.

Cache Dependency Management

The cache adapter supports a dependency management feature that is available to all supported cache engines. This was previously a generic management component that ensured this feature worked across all cache types. You can read more about the functionality here. With the introduction of redis support, the cache library utilises a redis specific cache dependency management component to ensure that the rich feature set of redis is taken advantage of, for a better performing cache dependency management solution. Typically, if using dependency management, you may have a line like this in your configuration: If you are using redis cache, the redis specific cache dependency manager will be used. When using redis, this is equivalent to specifying: <[...]
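Because the cache adapter sits on top of the StackExchange.Redis client, the settings discussed above (SSL, password, abort on connect and connection timeout) correspond to that client's options. Here is a hedged sketch of those options expressed directly against StackExchange.Redis rather than the CacheAdapter configuration file; the host, port and access key are placeholders.

    using StackExchange.Redis;

    class RedisConnectionExample
    {
        // Connects to an Azure hosted redis instance over SSL, mirroring the settings described above.
        // Host, port and access key are placeholders.
        static ConnectionMultiplexer Connect()
        {
            var options = new ConfigurationOptions
            {
                Ssl = true,                   // Azure redis is accessed over the SSL port (6380)
                Password = "{your-azure-redis-access-key}",
                AbortOnConnectFail = false,   // the recommended 'Abort on connect = false' setting
                ConnectTimeout = 15000        // 15 seconds, expressed in milliseconds
            };
            options.EndPoints.Add("{your-cache}.redis.cache.windows.net", 6380);

            return ConnectionMultiplexer.Connect(options);
        }
    }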



CacheAdapter 3.2 Released

Tue, 23 Sep 2014 22:06:00 GMT

I am pleased to announce that my 'Glav.CacheAdapter' package has recently been updated to 3.2 (we skipped 3.1 for a few reasons).

Note: For those not familiar with what this package does, you can look at previous posts on the subject here, here, here, here, here and here. Full source code can be downloaded from here.

There are a bunch of bugs and issues addressed in this release, primarily related to user feedback and issues raised on the issue register here.

Important note on package structure

One of the issues relates to how the CacheAdapter is now packaged. Previous versions were packaged as a single Glav.CacheAdapter package on nuget. This package contained a readme.txt and example code, which were also included when you installed or updated the package. On larger projects, where this package may have been installed into multiple projects, updating all those packages meant that each project would get the readme.txt and example code added to it. This was a pain, as it meant you usually had to manually delete these from all the projects. Now the single package has been broken into 2 separate packages:

  • Glav.CacheAdapter.Core: Contains just the Glav.CacheAdapter assembly and references to dependent packages.
  • Glav.CacheAdapter: Contains a reference to Glav.CacheAdapter.Core and also contains the readme.txt, example code and app/web.config additions.

This now means you can install the Glav.CacheAdapter package and get the included configuration changes, example code, readme.txt etc, but when updating the functionality to a later release, you only need to update Glav.CacheAdapter.Core, which means no project additions or config changes are made, just an assembly update. Much nicer for larger solutions with multiple project references.

Other changes in this release

  • Support for the SecurityMode.None setting for Appfabric caching (Issue #20)
  • Support for LocalCaching configuration values (Issue #21). For local caching support, you can specify the following in the cache specific data: Note: the DefaultTimeout value specifies an amount of time in seconds.
  • Support for programmatically setting the configuration and initialising the cache (Issue #19). The CacheConfig object is easily accessible:

        //If you want to programmatically alter the configured values for the cache, you can
        // use the commented section below as an example
        CacheConfig config = new CacheConfig();
        config.CacheToUse = CacheTypes.MemoryCache;
        AppServices.SetConfig(config);

        // Alternatively, you can use the commented method below to set the logging implementation,
        // configuration settings, and resolver to use when determining the ICacheProvider
        // implementation.
        AppServices.PreStartInitialise(null, config);

  • Splitting the Glav.CacheAdapter package into 2 packages - Glav.CacheAdapter.Core & Glav.CacheAdapter (Issue #13). Details listed above.
  • Merged in changes from Darren Boon's cache optimisation to ensure data is only added to the cache when the cache is enabled and the data is non null. Involved some code cleanup as this branch had been partially merged prior.
  • Merged changes from https://bitbucket.org/c0dem0nkee/cacheadapter/branch/default to destroy the cache when using local cache on calling the ClearAll() method.

As usual, if you have, or have found, any issues please report them on the issue register here.



CacheAdapter 3.0 Released

Sun, 21 Jul 2013 06:52:31 GMT

I am happy to announce that CacheAdapter Version 3.0 has been released. You can grab the nuget package from here or you can download the source code from here.

For those not familiar with what the CacheAdapter is, you can read my past posts here, here, here, here and here, but basically, you get a nice consistent API around multiple caching engines. Currently CacheAdapter supports in memory cache, ASP.Net web cache, memcached, and Windows Azure Appfabric cache. You get to program against a clean, easy to use API and can choose your caching mechanism using simple configuration. Change between using ASP.Net web cache and a distributed cache such as memcached or Appfabric with no code change, just some config.

Changes in Version 3.0

This latest version incorporates one new major feature, a much requested API addition and some changes to configuration.

Cache Dependencies

CacheAdapter now supports the concept of cache dependencies. This is currently rudimentary support for invalidating other cache items automatically when you invalidate a cache item that is linked to those items. That is, you can specify that one cache item is dependent on other cache items. When a cache item is invalidated, its dependencies are automatically invalidated for you. The diagram below illustrates this. In the scenario below, when ‘ParentCacheKey’ is invalidated/cleared, all its dependent items are also removed for you. If only ‘ChildItem1’ was invalidated, then only ‘SubChild1’ and ‘SubChild2’ would be invalidated for you. This is supported across all cache mechanisms and will include any cache mechanisms subsequently added to the supported list of cache engines. Later in this blog post (see ‘Details of Cache Dependency features’ below), I will detail how to accomplish that using the CacheAdapter API.

Clearing the Cache

Many users have asked for a way to programmatically clear the cache. There is now a ‘ClearAll’ API method on the ICacheAdapter interface which will do just that. Please note that Windows Azure Appfabric cache does not support clearing the cache. This may change in the future; however, the current implementation will attempt to iterate over the regions and clear each region in the cache, but will catch any exceptions during this phase, which Windows Azure Appfabric caching will throw. Local versions of Appfabric should work fine.

Cache Feature Detection - ICacheFeatureSupport

A new interface is available on the ICacheProvider called ICacheFeatureSupport, which is accessed via the FeatureSupport property. This is fairly rudimentary for now but provides an indicator as to whether the cache engine supports clearing the cache. This means you can take alternative actions based on certain features. As an example:

    if (cacheProvider.FeatureSupport.SupportsClearingCacheContents())
    {
        // Do something
    }

Configuration Changes

I felt it was time to clean up the way the CacheAdapter looks for its configuration settings. I wasn't entirely happy with the way it worked, so now the CacheAdapter supports looking for all its configuration values in the section of your configuration file. All the current configuration keys are still there, but if using this approach, you simply prefix the key names with “Cache.”. An example is probably best. Previously you used the configuration section to define configuration values as shown here: memory True



Owin, Katana and getting started

Thu, 04 Apr 2013 22:19:22 GMT

Introduction

This article describes an emerging open source specification, referred to as Owin – an Open Web Interface for .Net – what it is, and how this might be beneficial to the .Net technology stack. It will also provide a brief, concise look at how to get started with Owin and related technologies. In my initial investigations with Owin and how to play with it, I found lots of conflicting documentation and unclear explanations of how to make it work and why. This article is an attempt to better articulate those steps so that you may save yourself lots of time and come to a clearer understanding in a much shorter time than I did. Note: The code for this article can be downloaded from here.

First, What is it?

Just in case you are not aware, a community driven project referred to as Owin is really gaining traction. Owin is a simple specification that describes how components in a HTTP pipeline should communicate. The details of what is in the communication between components is specific to each component, however there are some common elements to each. Owin itself is not a technology, just a specification. I am going to gloss over many details here in order to remain concise, but at its very core, Owin describes one main component, which is the following interface:

    Func<IDictionary<string, object>, Task>

This is a function that accepts a simple dictionary of objects, keyed by a string identifier, and returns a Task. The object in the dictionary will vary depending on what the key is referring to. More often, you will see it referenced like this:

    using AppFunc = Func<IDictionary<string, object>, Task>;

An actual implementation might look like this:

    public Task Invoke(IDictionary<string, object> environment)
    {
        var someObject = environment["some.key"] as SomeObject;
        // etc…
    }

This is essentially how the “environment”, or information about the HTTP context, is passed around.
Looking at the environment argument of this method, you could interrogate it as follows:

    var httpRequestPath = environment["owin.RequestPath"] as string;
    Console.WriteLine("Your Path is: [{0}]", httpRequestPath);

If a HttpRequest was being made to ‘http://localhost:8080/Content/Main.css’, then the output would be:

    Your Path is: [/Content/Main.css]

In addition, while not part of the spec, the IAppBuilder interface is also core to the functioning of an Ow[...]
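To tie the pieces together, here is a small sketch of a complete piece of Owin middleware built around the AppFunc signature above. It assumes the conventional "next component passed into the constructor" pattern; the class name and behaviour are illustrative only.

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;
    using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

    public class RequestPathLoggingMiddleware
    {
        private readonly AppFunc _next;

        public RequestPathLoggingMiddleware(AppFunc next)
        {
            _next = next;
        }

        public async Task Invoke(IDictionary<string, object> environment)
        {
            // Pull a well-known key out of the environment dictionary, as discussed above.
            var requestPath = environment["owin.RequestPath"] as string;
            Console.WriteLine("Your Path is: [{0}]", requestPath);

            // Hand off to the next component in the pipeline.
            await _next(environment);
        }
    }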



Debugging Pain–End to End

Wed, 20 Feb 2013 21:18:47 GMT

We had an issue recently that cost us some time and quite a lot of head scratching. We had made some relatively minor changes to our product and performed a release into staging for testing. We released our main web application as well as our custom built support tool (also a web app). After a little bit of testing from our QA team, a few bugs were uncovered: one where a response to a cancel action seemingly was not actioned, and an issue where a timeout occurred on a few requests. Nothing too huge, and it certainly seemed fixable.

Off to work

The timeouts “seemed” to be data specific and possibly because of our 3rd party web service call being made. They seemed to be occurring only in our support tool; the main web app was not affected. Since the main web app is the priority, I looked at the “cancel” issue first. It seemed that the cancel request was being made to our server (via an ajax call) but never returning from said call. This looked very similar to our issue with the support tool. A little further investigation showed that both the support tool and our main web app were issuing ajax requests to a few action methods (we use ASP.Net MVC 4.5) and never returning. Ever. I tried recycling the application pool in IIS. This worked for a brief period, then requests to a few particular action methods were again not returning. Web pages from other action methods and even other ajax requests were working fine, so at least we knew what surface area we had to look at. Looking at the requests via Chrome, we could see each request in a constant pending state, never satisfied.

We began looking at the server. We instituted some page tracing, looked at event logs and also looked at the Internet Information Server logs. Nothing. Nada. We could see the successful requests come in, but these pending requests were not logged. Fiddler showed they were definitely outgoing, but the web server showed nothing. Using the Internet Information Services management console, we looked at the current worker processes, which is available when clicking on the root node within the IIS Management console. We could see our application pool, and right clicking on this allowed us to view current requests, and there they were, all waiting to be executed, and waiting… and waiting.

What’s the hold up?

So what was causing our requests to get backlogged? We tried going into the code, removing calls to external services and trying to isolate the problem areas. All of this was not reproducible locally, nor in any other environment. Eventually, trying to isolate the cause led us to removing everything from the controller actions apart from a simple Thread.Sleep. While this worked and the problem did not present itself, we were in no way closer, as the cause could still be in any number of code paths.

Take a dump

A colleague suggested using DebugDiag (a free diagnostic tool from Microsoft) to look at memory dumps of the process. So that is what we did. Using DebugDiag, we extracted a memory dump of the process. DebugDiag has some really nice features, one of which is to execute predefined scripts in an attempt to diagnose any issues and present a summary of what was found. It also has a nice wizard based set of steps to get you up and running quickly. We chose to monitor for performance, and also for HTTP response time. We then added the specific URLs we were monitoring.
We also chose what kind of dumps to take, in this case web application pools. We decided on the time frequency (we chose every 10 seconds) and a maximum of 10 full dumps. After that, we set the dump path, named and activated the rule, and we were good to go. With the requests already built up in the q[...]



CacheAdapter minor update to version 2.5.1

Wed, 23 Jan 2013 03:42:57 GMT

For information on previous releases, please see here.

My cache adapter library has been updated to version 2.5.1. This one contains only minor changes which are:

  • Use of the Windows Azure Nuget package instead of the assemblies directly referenced.
  • Use of the ‘Put’ verb instead of ‘Add’ when using AppFabric to prevent errors when calling ‘Add’ more than once.
  • Minor updates to the example code to make it clearer that the code is indeed working as expected.

Thanks to contributor Darren Boon for forking and providing the Azure updates for this release.




Writing an ASP.Net Web based TFS Client

Tue, 27 Nov 2012 01:53:17 GMT

So one of the things I needed to do was write an ASP.Net MVC based application for our senior execs to manage a set of arbitrary attributes against stories, bugs etc, to be able to indicate whether an item was related to Research and Development, and if so, what kind. We are using TFS Azure and don’t have the option of custom templates. I have decided on using a string based field within the template that is not very visible, and which we don’t otherwise use, to hold a small set of custom values which will determine the research and development association. However, this string munging on the field is not very user friendly, so we need a simple tool that can display attributes against items in a simple dropdown list or something similar. Enter a custom web app that accesses our TFS items in Azure. (Note: We are also using Visual Studio 2012.)

Now TFS Azure uses your Live ID, and it is not really possible to easily do this in a server based app where no interaction is available. Even if you capture the Live ID credentials yourself and try to submit them to TFS Azure, it won’t work. Bottom line is that it is not straightforward nor obvious what you have to do. In fact, it is a real pain to find, and there are some answers out there which don’t appear to be answers at all, given they didn’t work in my scenario. So for anyone else who wants to do this, here is a simple breakdown of what you have to do:

  • Go here and get the “TFS Service Credential Viewer”. Install it, run it, connect to your TFS instance in Azure and create a service account. Note the username and password exactly as it presents them to you. This is the magic identity that will allow unattended, programmatic access. Without this step, don’t bother trying to do anything else.
  • In your MVC app, reference the following assemblies from “C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0”:
    Microsoft.TeamFoundation.Client.dll
    Microsoft.TeamFoundation.Common.dll
    Microsoft.TeamFoundation.VersionControl.Client.dll
    Microsoft.TeamFoundation.VersionControl.Common.dll
    Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll
    Microsoft.TeamFoundation.WorkItemTracking.Client.dll
    Microsoft.TeamFoundation.WorkItemTracking.Common.dll
  • If hosting this in Internet Information Server, you will need to enable 32 bit support for the application pool this app runs under.
  • You also have to allow the TFS client assemblies to store a cache of files on your system. If you don’t do this, you will authenticate fine, but then get an exception saying that it is unable to access the cache at some directory path when you query work items. You can set this up by adding the following to your web.config, in the element as shown below:

With all that in place, you can write the following code:

    var token = new Microsoft.TeamFoundation.Client.SimpleWebTokenCredential(
        "{your-service-account-name}", "{your-service-acct-password}");
    var clientCreds = new Microsoft.TeamFoundation.Client.TfsClientCredentials(token);
    var currentCollection = new TfsTeamProjectCollection(
        new Uri("https://{yourdomain}.visualstudio.com/defaultcollection"), clientCreds);
    currentCollection.EnsureAuthenticated();

In the above code, note the URL contains the “defaultcollection” [...]
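Once authenticated, reading work items is done through the WorkItemStore. The following is a hedged sketch of a WIQL query against the collection created above; the project name and work item type are placeholders.

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    class WorkItemReader
    {
        // Lists work items from the authenticated collection using a WIQL query.
        // The project name and work item type are placeholders.
        static void ListStories(TfsTeamProjectCollection currentCollection)
        {
            var store = currentCollection.GetService<WorkItemStore>();

            var items = store.Query(
                "SELECT [System.Id], [System.Title] FROM WorkItems " +
                "WHERE [System.TeamProject] = 'YourProject' AND [System.WorkItemType] = 'User Story'");

            foreach (WorkItem item in items)
            {
                Console.WriteLine("{0}: {1}", item.Id, item.Title);
            }
        }
    }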



CacheAdapter 2.5–Memcached revised

Wed, 16 May 2012 23:14:51 GMT

Note: For more information around the CacheAdapter, see my previous posts here, here, here and here

You may have noticed a number of updates to the CacheAdapter package on nuget as of late. These are all related to performance and stability improvements for the memcached component of the library.

However, it became apparent that I needed more performance from the memcached support to make the library truly useful and perform as well as can be expected.

I started playing with optimising serialisation processes and really optimising the socket connections used to communicate with the memcached instance. As anybody who has worked with sockets before may tell you, you quickly start to look at pooling your connections to ensure you do not have to go about recreating a connection every time you want to communicate, especially if we are storing or retrieving items in a cache as this can be very frequent.

So I started implementing a pooling mechanism to increase performance. I improved it to a reasonable extent, then hit a hurdle: performance could be increased further, but it was becoming harder and more complex to retain stability. I was getting lost trying to solve a problem for which a solution had already existed for a long time.

It was about then that I decided it was best to simply take a dependency on the most excellent Enyim memcached client. It is fast. Really fast, and it blew away everything I had done in terms of performance. So that is the essence of this update: the memcached support in the cache adapter now comes courtesy of Enyim. Enjoy the extra speed that comes with it.

Note: There are no other updates to any of the other caching implementations.




CacheAdapter 2.4 – Bug fixes and minor functional update

Wed, 28 Mar 2012 20:22:54 GMT

Note: If you are unfamiliar with the CacheAdapter library and what it does, you can read all about its awesome ability to utilise memory, Asp.Net Web, Windows Azure AppFabric and memcached caching implementations via a single unified, simple to use API from here and here. The CacheAdapter library is receiving an update to version 2.4 and is currently available on Nuget here.

Update: The CacheAdapter has actually just had a minor revision to 2.4.1. This significantly increases the performance and reliability of the memcached scenario under more extreme loads. General to moderate usage won't see any noticeable difference though.

Bugs

This latest version fixes a bug that is only present in the memcached implementation and is only seen at rare, intermittent times (making it particularly hard to find). The bug is where a cache node would be removed from the farm when errors in deserialisation of cached objects occurred due to serialised data not being read from the stream in its entirety. The code also contains enhancements to better surface serialisation exceptions to aid in the debugging process. This is also specifically targeted at the memcached implementation, and is important when moving from something like the memory or Asp.Net Web caching mechanisms to memcached, where the serialisation rules are not as lenient. There are a few other minor bug fixes, code cleanup and a little refactoring.

Minor feature addition

In addition to this bug fix, many people have asked for a single setting to either enable or disable the cache. In this version, you can disable the cache by setting the IsCacheEnabled flag to false in the application configuration file, something like the example below, where the cache type is memcached, the cache server is localhost:11211 and IsCacheEnabled is set to False (a configuration sketch is included at the end of this post).

Your reasons to use this feature may vary (perhaps some performance testing or problem diagnosis). At any rate, disabling the cache will cause every attempt to retrieve data from the cache to result in a cache miss and return null. If you are using the ICacheProvider with the delegate/Func syntax to populate the cache, this delegate method will get executed every single time. For example, when the cache is disabled, the following delegate/Func code will be executed every time:

var data1 = cacheProvider.Get<SomeData>("cache-key", DateTime.Now.AddHours(1), () =>
{
    // With the cache disabled, this data access code is executed on every attempt to
    // get this data via the CacheProvider.
    var someData = new SomeData() { SomeText = "cache example1", SomeNumber = 1 };
    return someData;
});

One final note: if you access the cache directly via the ICache instance, instead of the higher level ICacheProvider API, you bypass this setting and still access the underlying cache implementation. Only the ICacheProvider instance observes the IsCacheEnabled setting.

Thanks to those individuals who have used this library and provided feedback. If you have any suggestions or ideas, please submit them to the issue register on bitbucket (which is where you can grab all the source code from too). [...]
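
As referenced above, here is a minimal sketch of what the cache configuration might look like. The appSettings key names shown (Cache.CacheToUse, Cache.DistributedCacheServers, Cache.IsCacheEnabled) are my assumptions based on the CacheAdapter's configuration conventions, so check the package documentation for the exact names in your version.

<appSettings>
  <!-- Which cache implementation to use (key names assumed; see the CacheAdapter docs). -->
  <add key="Cache.CacheToUse" value="memcached" />
  <!-- The memcached node(s) to connect to. -->
  <add key="Cache.DistributedCacheServers" value="localhost:11211" />
  <!-- Setting this to False makes every ICacheProvider call a cache miss. -->
  <add key="Cache.IsCacheEnabled" value="False" />
</appSettings>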



ASP.NET Web Api–Request/Response/Usage Logging

Sun, 26 Feb 2012 10:26:00 GMT

For part 1 of this series of blog posts on ASP.Net Web Api, see here.

Introduction

In Part 2 of this blog post series, we deal with the common requirement of logging, or recording, the usage of your Web Api. That is, recording when an Api call was made, what information was passed in via the Api call, and what response was issued to the consumer.

My new Api is awesome – but is it being used?

So you have a shiny new Api, all built on the latest bits from Microsoft, the ASP.Net Web Api that shipped with the Beta of MVC 4. A common need for a lot of Api's is to log usage of each call, to record what came in as part of the request, and to record what response was sent to the consumer. This kind of information is really handy for debugging. Not just for you, but for your customers as well. Being able to backtrack over the history of Api calls to determine the full context of some problem for a consumer can save a lot of time and guesswork. So, how do we do this with the Web Api? Glad you asked.

Determining what kind of injection point to utilise

We have a few options when it comes to determining where to inject our custom classes/code to best intercept incoming and outgoing data. To log all incoming and outgoing data, the most applicable interception point is a System.Net.Http.DelegatingHandler. These classes are message handlers that apply to all messages for all requests and can be chained together to have multiple handlers registered within the message handling pipeline. For example, in addition to Api usage logging, you may want to provide a generic authentication handler that checks for the presence of some authentication key. I could have chosen to use filters, however these are typically more scoped to the action itself. I could potentially use model binders, but these are relatively late in the processing cycle and not generic enough (plus it would take some potentially unintuitive code to make them work as we would want).

Enough with theory, show me some code

So, our initial Api usage logger will inherit from the DelegatingHandler class (in the System.Net.Http namespace) and provide its own implementation.

public class ApiUsageLogger : DelegatingHandler
{
    protected override System.Threading.Tasks.Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
    {
        // your stuff goes here...
    }
}

We only really need to override one method, the 'SendAsync' method, which returns a Task<HttpResponseMessage>. Task objects play heavily in the new Web Api and allow asynchronous processing to be used as the primary processing paradigm, allowing better scalability and processing utilisation. Since everything is asynchronous via the use of Tasks, we need to ensure we play by the right rules and utilise the asynchronous capabilities of the Task class. A more fleshed out version of our method is shown below:

protected override System.Threading.Tasks.Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
{
    // Extract the request logging information
    var requestLoggingInfo = ExtractLoggingInfoFromRequest(request);

    // Execute the request, this does not block
    var response = base.SendAsync(request, cancellationToken);

    // Log the incoming data to the database
    WriteLoggingInfo(requestLoggingInfo);

    // Once the response is processed asynchronously, log the response data
    // to the database
    response.ContinueWith((responseMsg) =>
    {
        // Extract the response logging [...]
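
To round this out, here is a minimal, self-contained sketch of the same DelegatingHandler pattern. It uses Trace output in place of the database logging described above; the class name and the detail captured are illustrative only and are not the implementation from this post.

using System.Diagnostics;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class TraceUsageLogger : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Record the basics of the incoming request before it is processed.
        Trace.TraceInformation("Request: {0} {1}", request.Method, request.RequestUri);

        // Continue down the handler pipeline, then log the response once the task completes.
        return base.SendAsync(request, cancellationToken).ContinueWith(task =>
        {
            var response = task.Result;
            Trace.TraceInformation("Response: {0} for {1}", (int)response.StatusCode, request.RequestUri);
            return response;
        }, cancellationToken);
    }
}

A handler like this is registered the same way as the ApiUsageLogger shown in Part 1, via config.MessageHandlers.Add(...).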



MVC4 and Web Api– make an Api the way you always wanted–Part 1

Sat, 18 Feb 2012 01:33:49 GMT

Update: Part 2 of this series is available here.

ASP.NET MVC is a huge success as a framework and just recently, ASP.NET MVC4 Beta was released. While there are many changes in this release, I want to specifically focus on the WebApi portion of it. In previous flavours of MVC, many people wanted to develop REST Api's and didn't really want to use WCF for this purpose. The team at Microsoft was progressing along with a library called WcfWebApi. This library used a very WCF'esque way of defining an Api. This meant defining an interface and decorating the interface with the appropriate WCF attributes to constitute your Api endpoints. However, a lot of people don't like to use a WCF style of programming and are really comfortable in the MVC world, especially when you can construct similar REST endpoints in MVC with extreme ease. This is exactly what a lot of people who wanted a REST Api did. They simply used ASP.NET MVC to define a route and handled the payload themselves via standard MVC controllers.

What the WcfWebApi did quite well, though, were things like content negotiation (do you want XML or json?), auto help generation, message/payload inspection, transformation, parameter population and a lot of other things. Microsoft have recognised all this and decided to mix it all together: take the good bits of WcfWebApi and the Api development approach of MVC, and create an Api framework to easily expose your REST endpoints while retaining the MVC ease of development. This is the WebApi, and it supersedes the WcfWebApi (which will not continue to be developed).

So how do you make a REST sandwich now? Well, best to start with the code. Firstly, let's define a route for our REST Api. In the Global.asax, we may have something like this:

public static void RegisterApis(HttpConfiguration config)
{
    config.MessageHandlers.Add(new ApiUsageLogger());
    config.Routes.MapHttpRoute("contacts-route", "contacts", new { controller = "contacts" });
}

Ignore the MessageHandler line for now, we will get back to that in a later post. You can see we are defining/mapping a Http route. The first parameter is the unique route name, the second is the route template to use for processing these route requests. This means that when I get a request like http://MyHost/Contacts, this route will get a match. The third parameter specifies the default properties. In this case, I am simply stating that the controller to use is the ContactsController. This is shown below.

public class ContactsController : ApiController
{
    public List<ContactSummary> GetContacts()
    {
        // do stuff....
    }
}

You can see that we have a class that inherits from a new ApiController class. We have a simple controller action that returns a list of ContactSummary objects. This is (while overly simplistic) a fully fledged REST Api that supports content negotiation and as such will respond with json or XML if it is requested by the client. The WebApi fully supports your typical Http verbs such as GET, PUT, POST, DELETE etc., and in the example above, the fact that I have an ApiController action method prefixed with 'Get' means it will support the GET verb. If I had a method prefixed with 'Post' then that action method, by convention, would support the POST verb. You can optionally decorate the methods with [System.Web.Http.HttpGet] or [System.Web.Http.HttpPost] attributes to achieve the same effect.

And OData as well?

Want to support OData? Then simply return an IQueryable. Your Web Api will support querying [...]
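
As a rough sketch of that idea, the controller below is a reworked version of the ContactsController from earlier in this post. The ContactSummary shape and the in-memory list are invented for illustration; the point is that returning IQueryable<T> allows query options supplied on the URL (for example /contacts?$top=1) to be applied to the result, with the exact options supported depending on the Web Api version.

using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

// Hypothetical DTO used only for this example.
public class ContactSummary
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ContactsController : ApiController
{
    // A stand-in data source; a real implementation would query a repository or database.
    private static readonly List<ContactSummary> Contacts = new List<ContactSummary>
    {
        new ContactSummary { Id = 1, Name = "Jill" },
        new ContactSummary { Id = 2, Name = "Joe" }
    };

    // Returning IQueryable<ContactSummary> (rather than List<ContactSummary>) is what
    // opens up query support over this action.
    public IQueryable<ContactSummary> GetContacts()
    {
        return Contacts.AsQueryable();
    }
}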