Glavs Blog

The dotDude of .Net


"Easy Auth" / App Service authentication using multiple providers

Wed, 07 Dec 2016 04:34:00 GMT

App Service authentication is a feature in Microsoft Azure that allows extremely easy setup of authentication using one of the following providers:

- Active Directory
- Google
- Facebook
- Twitter
- Microsoft Account (MSA)

It is often referred to as "Easy Auth" because of how easy it is to set up and integrate into your app. Effectively, no code is required (at least for authentication).

Let's say you have developed a web site and are deploying it to Azure as an App Service. Without doing anything in code, you can enable one of these forms of authentication from within the Azure portal, via the "Authentication/Authorization" option in the list of settings associated with an App Service. Any anonymous requests that come through are automatically caught by the pipeline that Azure injects, and an authentication process with the configured provider is performed. For example, if Google is specified and configured, then when an unauthenticated request comes in, the user is redirected to Google for authentication, then redirected back to the site once this is successful.

This works great but has a few shortcomings:

1. You can only specify one provider. For example, you can use Active Directory, or Google, but not both.
2. Local development. You cannot enable this option locally, so how do you develop locally with security enabled?

Problem 1 - Using multiple providers

While this is not supported out of the box, you can make it work. In order to do this, you will need to do the following:

- Ensure that the "Action to take when request is not authenticated" option in the Azure portal is set to "Allow anonymous requests".
- Ensure your application requires authenticated users with a redirect to a login page. There are a few ways to do this.
As an example, you can include the "Microsoft.Owin.Security" nuget package and place the following code in your Owin startup class:

    public void Configuration(IAppBuilder app)
    {
        app.UseCookieAuthentication(new CookieAuthenticationOptions
        {
            AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
            LoginPath = new PathString("/Login")
        });
    }

- Provide a login page with options for the user about which provider to use when authenticating. This page should not require authentication.

These options should link directly to each provider instead of allowing Azure to automatically redirect. The way to do that is by using the following link format:

    https://{your-host}/.auth/login/{provider}

The available providers are "aad", "google", "facebook" and "microsoftaccount". As an example, for an App Service you have created, the 'Login with Active Directory' button would use the "aad" provider segment of that URL, and the Google one would use "google".

Further to this, we need to include an extra parameter to instruct where the process should redirect to after successful authentication. We can do this with the 'post_login_redirect_uri' parameter. Without it, the process will redirect to a default 'Authentication Successful' page with a link to go back to the site. So the final URL (in the case of Active Directory) combines the login link with the 'post_login_redirect_uri' parameter.

With the two options now linking to each provider's login option, and with a post-login redirect in place, we are good to go.

Problem 2 - Local Development

Obviously, when developing locally, we are not hosting in Azure and so cannot take advantage of the "Easy Auth" functionality within the Azure portal. To get around this, you can provide a further option to log in locally, but one that is only displayed when in local development mode. To do this, you can: Create [...]
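The login-link construction above is easy to get wrong by hand (the redirect path must be URL-encoded). Here is a minimal sketch in Python of building such links; the `/.auth/login/{provider}` format and the `post_login_redirect_uri` parameter are from the post, while the helper name and the `myapp.azurewebsites.net` host are purely illustrative.

```python
from urllib.parse import quote

def easy_auth_login_url(host, provider, redirect_path="/"):
    # provider is one of "aad", "google", "facebook", "microsoftaccount";
    # the redirect path is percent-encoded so it survives as a query value.
    return (f"https://{host}/.auth/login/{provider}"
            f"?post_login_redirect_uri={quote(redirect_path, safe='')}")

print(easy_auth_login_url("myapp.azurewebsites.net", "aad"))
# -> https://myapp.azurewebsites.net/.auth/login/aad?post_login_redirect_uri=%2F
```

Each button on the login page would then link to one of these URLs, one per provider.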

CacheAdapter 4.2.0 Released

Fri, 28 Oct 2016 05:26:00 GMT

The latest version of the cache adapter has now been released and is available via Nuget, or you can download the source code here. Note: For those who don't know what the cache adapter is, you can check out the wiki here, or look at some previous blog posts here, which also have links to other posts.

The nuget packages for the core component and the package with all configuration and code examples are:

- Glav.CacheAdapter.Core - binary only
- Glav.CacheAdapter - binary, configuration examples, code examples.
- Bitbucket code repository - complete source code for the CacheAdapter

What is in this release

This is only a relatively minor release, but it does contain some useful additions, as well as addressing some outstanding issues. The items addressed in this release are:

- Addition of a licence file (Issue #49)
- Fixing a bug where multiple configuration instances are created (Issue #40)
- Fluent configuration (Issue #50)

Fluent Configuration

Previously, configuring the cache adapter in code was a little tedious. Now, however, you can do this:

    using Glav.CacheAdapter.Helpers;

    var provider = CacheConfig.Create()
        .UseMemcachedCache()
        .UsingDistributedServerNode("")
        .BuildCacheProviderWithTraceLogging();

    provider.Get("cacheKey", DateTime.Now.AddHours(1), () =>
        {
            return GetMyDataFromTheStore();
        });

Breaking Change: Normalised configuration

There were previously two ways of expressing configuration in the configuration file. Due to a configuration system overhaul, only one is now supported. Specifically, the dedicated configuration section option has been removed, and only the appSettings method of configuration is supported. So if you previously had a configuration section specifying, say, a cache type of memcached with a node of localhost:11211, that should now be expressed via appSettings instead.

That is about it for the changes. Hope you find them useful. Feel free to let me know any suggestions, or even create a pull request for any features you think are valuable additions. [...]
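The fluent API above works by having each configuration method return the builder itself, so calls can be chained. The library is .NET; below is a minimal language-neutral sketch in Python of that builder pattern, with names loosely mirroring the C# API above (the class and the returned dictionary shape are purely illustrative, not the library's actual types).

```python
class CacheConfigBuilder:
    """Illustrative stand-in for CacheConfig.Create() and its fluent methods."""
    def __init__(self):
        self._cache_type = "memory"
        self._nodes = []
        self._logging = None

    def use_memcached_cache(self):
        self._cache_type = "memcached"
        return self  # returning self is what makes each call chainable

    def using_distributed_server_node(self, node):
        self._nodes.append(node)
        return self

    def build_cache_provider_with_trace_logging(self):
        self._logging = "trace"
        return {"type": self._cache_type, "nodes": self._nodes,
                "logging": self._logging}

provider_config = (CacheConfigBuilder()
                   .use_memcached_cache()
                   .using_distributed_server_node("localhost:11211")
                   .build_cache_provider_with_trace_logging())
print(provider_config)
```

The design choice is that configuration reads as a single declarative sentence, and the terminal `build...` call is the only method that does not return the builder.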

Website performance – know what’s coming

Mon, 29 Aug 2016 08:02:12 GMT

If you live in Australia (and perhaps even outside of Oz), there has been a lot of attention on the Australian Bureau of Statistics (ABS) in regards to the Census 2016 website and its subsequent failure to adequately handle the peak usage times it was supposedly designed for. It was also reported that a Denial of Service attack was launched against the site in order to cause an outage, which obviously worked. One can argue where the fault lay, and no doubt there are blame games being played out right now. IBM was tasked with the delivery of the website, and used a company called "RevolutionIT" to perform load testing. It is my opinion that while RevolutionIT performed the load testing, IBM is the one who should wear the blame. Load testing and performance testing simply provide metrics for a system under given conditions; IBM would need to analyse those metrics to ensure that aspects of the system were performing as expected. Nor is this just a one-off task. Ideally it should be run repeatedly, to ensure changes being applied are having the desired effect.

Typically, a load test run would be executed against a single instance of the website to ascertain the baseline performance for a single unit of infrastructure, with a "close to production" version of the data store it was using. Once this single unit of measure is found, it is a relatively easy exercise to extrapolate how much infrastructure is required to handle a given load. More testing can then be performed with more machines to validate this claim. This is a very simplistic view of the situation, and there are far more variables to consider, but in essence: baseline your performance, then iterate upon that.

Infrastructure is just one part of the picture though. A very important one, of course, but not the only one. Care must be taken to ensure the application is designed and architected in such a way as to make it resilient to failure, in addition to performing well. Take a look at the graph below.
Note that this is not actual traffic on census night, but rather my interpretation of what may have been a factor. The orange bars represent what traffic was expected during the day, with the blue representing what actually occurred. Again, the values are purely fictional, but probably not too far from what happened. At times of the day convenient for the Australian public, people tried to access the site and fill the census in.

A naïve expectation is to think that people will be nice net citizens and plod along, happy to evenly distribute themselves across the day, with mild peaks. A more realistic expectation is akin to driving in heavy traffic. People don't want to go slower and play nice; they want to go faster. See a gap in the traffic? Zoom in, cut others off and speed down the road to your advantage. This is the same as hitting F5 in your browser attempting to get the site to load. Going too slowly? Hit F5 again and again. Your problem is now even worse than estimated, as each person can triple the number of requests being made.

To avoid these situations, you need a good understanding of your customers' usage habits. Know or learn the typical habits of your customers to ensure you get a clear view of how your system will be used. Will it be used first thing in the morning while people are drinking their first coffee? Will there be multiple peak times during morning, lunch and the evening? Perhaps the system will remain relatively unused on most days except Friday to Sunday?

In a performance testing scenario, especially in a situation like the Census where you know in advance you are going to get a lot of sustained traffic at particular times, you need to plan for the worst. If you have some metrics around what the load might be, ensure your systems can handle far more than expected. At the very least, ensure that should you encounter unexpected or extremely heavy traffic, your systems can scale, and can fail with grace. This m[...]
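The baseline-then-extrapolate approach described above can be reduced to simple arithmetic: measure what one node sustains, then size for the expected peak with headroom for bursts (such as the F5-hammering behaviour just described, which can multiply request volume several-fold). A back-of-the-envelope sketch, where all the numbers are made up for illustration:

```python
import math

def instances_needed(baseline_rps_per_node, expected_peak_rps, burst_factor=3.0):
    # baseline_rps_per_node: requests/sec one instance handled in the baseline test.
    # burst_factor: headroom multiplier for refresh storms and underestimated peaks.
    return math.ceil((expected_peak_rps * burst_factor) / baseline_rps_per_node)

# Hypothetical numbers: one node baselines at 500 req/s, peak forecast 4,000 req/s.
print(instances_needed(500, 4000, burst_factor=1.0))  # naive sizing: 8
print(instances_needed(500, 4000))                    # with 3x burst headroom: 24
```

The point is not the exact multiplier but that the naive figure and the worst-case figure differ substantially, and the worst case is the one to provision (or autoscale) for.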

Cache Adapter 4.1 Released

Thu, 03 Mar 2016 07:28:26 GMT

The latest version of the cache adapter has now been released and is available via Nuget, or you can download the source code here. Note: For those who don't know what the cache adapter is, you can check out the wiki here, or look at some previous blog posts here, which also have links to other posts.

The nuget packages for the core component and the package with all configuration and code examples are:

- Glav.CacheAdapter.Core - binary only
- Glav.CacheAdapter - binary, configuration examples, code examples.
- Bitbucket code repository - complete source code for the CacheAdapter

Please note: This release would not have been possible without contributions from the community, most notably Katrash. Your contributions are much appreciated, even if my response and eventual action is somewhat slow.

Updated: I made an error with the Glav.CacheAdapter.Core 4.1.0 package, which included the readme, config transforms etc. This has now been fixed with the Nuget packages 4.1.1. There is no assembly change in these packages, which remain at 4.1.0.

What is in this release

This is only a relatively minor release, but it does contain some useful additions, as well as addressing some outstanding issues. The items addressed in this release are:

- Targeting .Net 4.5
- Async method support
- Configurable logging

Targeting .Net 4.5

The CacheAdapter nuget package now targets only .Net 4.5, allowing the removal of the Microsoft.Bcl packages from the dependency list. These caused package bloat, and their removal was a much requested change from many users. They were only required when targeting versions of .Net prior to 4.5, as the StackExchange.Redis package listed them as dependencies. Glav.CacheAdapter depends upon StackExchange.Redis for its Redis support.

Async Method Support

Async method support is now available for all 'Get' CacheProvider operations. For all 'Get' operations that also implicitly add data to the cache when a cache miss occurs, there are now async equivalents of these methods that return a 'Task'. So if previously you did something like:

    var data = cacheProvider.Get(cacheKey, DateTime.Now.AddDays(1), () =>
    {
        return dataStore.PerformQuery();
    });

you can now use the async form of that method, which returns a task. Using the above example, this would become:

    var data = await cacheProvider.GetAsync(cacheKey, DateTime.Now.AddDays(1), () =>
    {
        return dataStore.PerformQueryAsync();
    });

Configurable logging

You can now disable/enable logging, log only errors, or log informational items (which also includes errors). Previously this was left up to consumers, who had to implement their own ILogging interface to get whatever degree of configurability they wanted. As of this release, you get the ability to perform basic configuration on the included logging component. This is specified in configuration (like everything else). The three levels are: log everything (the default, and how prior versions operated); log only error information; and log nothing.

That is about it for the changes. Hope you find them useful. Feel free to let me know any suggestions, or even create a pull request for any features you think are valuable additions. [...]
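The 'Get'/'GetAsync' pair above both implement cache-aside semantics: return the cached value if present and fresh, otherwise invoke the supplied factory, store its result, and return it. The library is .NET; here is a minimal Python sketch of the async form, with all names illustrative rather than the library's actual types, to show that the factory is awaited only on a cache miss.

```python
import asyncio
import datetime

class AsyncCacheProvider:
    """Illustrative cache-aside sketch: not the Glav.CacheAdapter implementation."""
    def __init__(self):
        self._store = {}  # key -> (absolute_expiry, value)

    async def get_async(self, key, absolute_expiry, async_factory):
        entry = self._store.get(key)
        if entry is not None and entry[0] > datetime.datetime.now():
            return entry[1]                      # fresh hit: factory never runs
        value = await async_factory()            # miss: fetch from the real store
        self._store[key] = (absolute_expiry, value)
        return value

async def demo():
    cache = AsyncCacheProvider()
    calls = []
    async def fetch():
        calls.append(1)          # counts how many times the "data store" is hit
        return "data"
    expiry = datetime.datetime.now() + datetime.timedelta(days=1)
    a = await cache.get_async("key", expiry, fetch)
    b = await cache.get_async("key", expiry, fetch)  # served from cache
    return a, b, len(calls)

print(asyncio.run(demo()))
```

The second call returns the cached value, so the factory runs exactly once.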

Getting Docker running on my windows system

Thu, 10 Dec 2015 23:10:10 GMT

Intro

This post is about some of the issues I had installing the latest Docker Toolbox, and how I went about solving them to finally get Docker working on my Windows system. For those not familiar with Docker and what it is/does, I suggest going here and reading up on it a bit.

For the record, I have a fairly standard Windows 10 laptop, which was upgraded from Windows 8.1: a Gigabyte P34v2 with 16Gb of memory and a 256Gb SSD. Nothing special.

Installing Docker

Twitter kindly informed me of a great blog post by Scott Hanselman, "Brainstorming development workflows with Docker, Kitematic, VirtualBox, Azure, ASP.NET, and Visual Studio", so I decided to follow the steps and give it a shot. I started following the steps involved, although I initially missed the part about disabling Hyper-V. So disable Hyper-V and then reboot. If you are doing Cordova/Ionic development and using some of the emulators accompanying Visual Studio that require Hyper-V, this may be somewhat inconvenient for you.

Everything initially seemed to install fine. Docker installed all of its components, including VirtualBox. The next step is to double click the 'Docker Quickstart terminal' to ensure everything is installed as expected.

Problem 1: The Docker virtual machine will not start

The Docker terminal starts up and begins setting up the environment (creating SSH keys etc.), creating the virtual machine within VirtualBox and starting that machine. However, the virtual machine simply would not start, and the Docker terminal reported the error and stopped.

I loaded up Kitematic, the other utility application the Docker Toolbox installs, to see if that could help. It has an option to delete and re-create the VM, so I did that, but to no avail. The VM gets deleted and recreated, but will not start. I tried uninstalling and re-installing the Docker Toolbox, realised the VM remains in VirtualBox, so deleted that VM manually (it was named 'default'), then un-installed and re-installed again, but unfortunately no go.

I loaded VirtualBox and tried to start the machine manually, but no go. A dialog was shown with a 'Details' button which revealed the following error:

    Failed to open/create the internal network 'HostInterfaceNetworking-VirtualBox Host-Only Ethernet Adapter' (VERR_INTNET_FLT_IF_NOT_FOUND).

Hmm, not that informative, but plonking the error code into google/bing revealed many posts from others having similar issues. It seems the issue was within VirtualBox itself (note: I had version 5.0.10): any virtual machine that had a 'Host-only' network defined simply would not start. When the Docker Toolbox is installed, it creates a virtual machine (typically named 'default') and also sets up a Host-only network adapter to allow the client tools to communicate with the VM. In addition, un-installing and re-installing the Docker tools can create multiple instances of the Host-only adapter. I tried a few variants, but to no avail. It simply would not start.

It seemed to be some networking issue with the VM setup. I had originally thought it was related to bridged connections on my Windows box. I had tweeted as much, and Scott Hanselman suggested I run:

    netcfg -d

which performs a cleanup of your network devices. I did that, and not much changed. I did it again, and it wiped out every network connection I had. Oh yes: wireless, ethernet, all gone. I deleted the adapter from device manager and let Windows rediscover my network devices. Small tip: don't do this. Obviously Scott was trying to be helpful, and I was blindly trying to get things working.

After much searching I stumbled upon this thread, which describes the problem I was having. Essentially, when VirtualBox is installed under Windows 10 (I don't know if this happens with all Windows 10 instances, but it definitely happens to some, and also to some Windows 7 or 8 instances), it uses the NDIS6 network driver. This is the main issue. A resolution is easy enough in the form of ins[...]

CacheAdapter 4.0 released with Redis support.

Thu, 19 Mar 2015 23:31:00 GMT

My 'Glav.CacheAdapter' package has recently had a major release and is now at version 4.0. If you are not familiar with what this package does, you can look at previous posts on the subject here and here. In a nutshell, you can program against a cache interface and, via configuration, switch between ASP.Net web cache, memory cache, Windows Azure Appfabric, memcached, and now redis.

The nuget packages for the core component and the package with all configuration and code examples are:

- Glav.CacheAdapter.Core - binary only
- Glav.CacheAdapter - binary, configuration examples, code examples.
- Bitbucket code repository - complete source code for the CacheAdapter

The major new feature in this release is full support for the Redis data store as a cache engine. Redis is an excellent data store and is used as a cache in many circumstances. Windows Azure has recently adopted redis as its preferred cache mechanism, and is deprecating the Windows Azure Appfabric cache mechanism it previously supported. In addition, there are significant performance improvements when managing cache dependencies, some squashed bugs, and a method addition to the interface. Read on for the details.

Redis support

On-premise and Azure-hosted redis support is the major feature of this release. You can enable it in configuration and instantly be using redis as your cache implementation. Redis has a much richer storage and query mechanism than memcached, and this enables the library to use redis-specific functionality to ensure cache dependency management takes advantage of that enhanced feature set. To use redis, simply change the relevant values in the configuration. Obviously, you also need to point the cache provider to the redis instance to use.
In the example below, I am pointing to my test instance in Windows Azure, and specifying the port number (which happens to be the SSL port used). There are some redis-specific settings we need to set before connecting to an Azure redis instance: whether we are using SSL (in this case, yes), and the password. If you are using a local instance of redis, you may not be required to set the password. For redis in Azure, the password is one of your access keys, which you can get from the Azure portal. Additionally, it is recommended that the 'Abort on connect' flag is set to false. In this example, I have also set the connection timeout to 15 seconds.

Behind the scenes, the cache adapter relies on the StackExchange.Redis client, so any configuration options it supports are supported here. However, do not specify the host in the CacheSpecificData section, as it will be ignored.

Cache Dependency Management

The cache adapter supports a dependency management feature that is available to all supported cache engines. This was previously a generic management component that ensured the feature worked across all cache types. You can read more about the functionality here. With the introduction of redis support, the cache library utilises a redis-specific cache dependency management component, so that the rich feature set of redis is taken advantage of for a better performing cache dependency management solution. Typically, if using dependency management, you may have a line like this in your configuration. If you are using redis cache, the redis-specific cache dependency manager will be used; when using redis, this is equivalent to specifying it explicitly. If using any other cache type, then the gener[...]
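The settings just described (host, SSL, password, abortConnect=false, a 15-second connect timeout) map onto the comma-separated configuration-string format used by the underlying StackExchange.Redis client. A sketch of composing one, in Python for illustration; the helper name is mine, the host and key are placeholders (6380 is commonly the SSL port for Azure redis), and connectTimeout is expressed in milliseconds.

```python
def redis_config_string(host, port, password, use_ssl=True,
                        abort_on_connect=False, connect_timeout_ms=15000):
    # Mirrors StackExchange.Redis's "host:port,option=value,..." format.
    return ",".join([
        f"{host}:{port}",
        f"ssl={str(use_ssl).lower()}",
        f"password={password}",
        f"abortConnect={str(abort_on_connect).lower()}",
        f"connectTimeout={connect_timeout_ms}",   # milliseconds, so 15000 = 15s
    ])

# Placeholder host and access key, not real credentials.
print(redis_config_string("myinstance.redis.cache.windows.net", 6380, "<access-key>"))
```

Setting abortConnect=false is recommended for Azure because it lets the client keep retrying the connection instead of failing on the first unsuccessful attempt.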

CacheAdapter 3.2 Released

Tue, 23 Sep 2014 22:06:00 GMT

I am pleased to announce that my 'Glav.CacheAdapter' package has recently been updated to 3.2 (we skipped 3.1 for a few reasons). Note: For those not familiar with what this package does, you can look at previous posts on the subject here, here, here, here, here and here. Full source code can be downloaded from here. There are a bunch of bugs and issues addressed in this release, primarily related to user feedback and issues raised on the issue register here.

Important note on package structure

One of the issues relates to how the CacheAdapter is now packaged. Previous versions were packaged as a single Glav.CacheAdapter package on nuget. This package contained a readme.txt and example code, which were also included whenever you installed or updated the package. On larger projects, where the package may have been installed into multiple projects, updating all those packages meant that each project would get the readme.txt and example code added again. This was a pain, as it usually meant manually deleting these from all the projects. Now the single package has been broken into two separate packages:

- Glav.CacheAdapter.Core: contains just the Glav.CacheAdapter assembly and references to dependent packages.
- Glav.CacheAdapter: contains a reference to Glav.CacheAdapter.Core, plus the readme.txt, example code and app/web.config additions.

This means you can install the Glav.CacheAdapter package to get the included configuration changes, example code, readme.txt etc., but when updating to a later release you only need to update Glav.CacheAdapter.Core, which makes no project additions or config changes, just an assembly update. Much nicer for larger solutions with multiple project references.
Other changes in this release

- Support for the SecurityMode.None setting for Appfabric caching (Issue #20).
- Support for LocalCaching configuration values (Issue #21). For local caching support, you can specify the relevant values in the cache specific data. Note: the DefaultTimeout value specifies an amount of time in seconds.
- Support for programmatically setting the configuration and initialising the cache (Issue #19). The CacheConfig object is easily accessible:

    // If you want to programmatically alter the configured values for the cache,
    // you can use the section below as an example
    CacheConfig config = new CacheConfig();
    config.CacheToUse = CacheTypes.MemoryCache;
    AppServices.SetConfig(config);

    // Alternatively, you can use the method below to set the logging implementation,
    // configuration settings, and resolver to use when determining the ICacheProvider
    // implementation.
    AppServices.PreStartInitialise(null, config);

- Splitting the Glav.CacheAdapter package into two packages, Glav.CacheAdapter.Core and Glav.CacheAdapter (Issue #13). Details listed above.
- Merged in changes from Darren Boon's cache optimisation to ensure data is only added to the cache when the cache is enabled and the data is non-null. This involved some code cleanup, as this branch had been partially merged previously.
- Merged in changes to destroy the cache when using local cache on calling the ClearAll() method.

As usual, if you have, or have found, any issues, please report them on the issue register here. [...]

CacheAdapter 3.0 Released

Sun, 21 Jul 2013 06:52:31 GMT

I am happy to announce that CacheAdapter Version 3.0 has been released. You can grab the nuget package from here, or you can download the source code from here. For those not familiar with what the CacheAdapter is, you can read my past posts here, here, here, here and here, but basically: you get a nice, consistent API around multiple caching engines. Currently the CacheAdapter supports in memory cache, ASP.Net web cache, memcached, and Windows Azure Appfabric cache. You get to program against a clean, easy to use API and can choose your caching mechanism using simple configuration. Change from using ASP.Net web cache to a distributed cache such as memcached or Appfabric with no code change, just some config.

Changes in Version 3.0

This latest version incorporates one new major feature, a much requested API addition, and some changes to configuration.

Cache Dependencies

The CacheAdapter now supports the concept of cache dependencies. This is currently rudimentary support for automatically invalidating other cache items when you invalidate a cache item they are linked to. That is, you can specify that one cache item is dependent on other cache items; when a cache item is invalidated, its dependencies are automatically invalidated for you. The diagram below illustrates this. In the scenario below, when 'ParentCacheKey' is invalidated/cleared, all its dependent items are also removed for you. If only 'ChildItem1' were invalidated, then only 'SubChild1' and 'SubChild2' would be invalidated. This is supported across all cache mechanisms, and will include any cache mechanisms subsequently added to the supported list of cache engines. Later in this blog post (see 'Details of Cache Dependency features' below), I will detail how to accomplish this using the CacheAdapter API.

Clearing the Cache

Many users have asked for a way to programmatically clear the cache. There is now a 'ClearAll' API method on the ICacheAdapter interface which does just that.
Please note that Windows Azure Appfabric cache does not support clearing the cache. This may change in the future; the current implementation will attempt to iterate over the regions and clear each region in the cache, but will catch any exceptions thrown during this phase, which Windows Azure Appfabric caching will throw. Local versions of Appfabric should work fine.

Cache Feature Detection - ICacheFeatureSupport

A new interface called ICacheFeatureSupport is available on the ICacheProvider, accessed via the FeatureSupport property. This is fairly rudimentary for now, but provides an indicator as to whether the cache engine supports clearing the cache. This means you can take alternative actions based on certain features. As an example:

    if (cacheProvider.FeatureSupport.SupportsClearingCacheContents())
    {
        // Do something
    }

Configuration Changes

I felt it was time to clean up the way the CacheAdapter looks for its configuration settings. I wasn't entirely happy with the way it worked, so the CacheAdapter now supports looking for all its configuration values in the appSettings section of your configuration file. All the current configuration keys are still there, but if using the appSettings approach, you simply prefix the key names with "Cache.". An example is probably best. Previously, you used the dedicated configuration section to define configuration values such as the cache type ("memory") and whether the cache is enabled ("True").
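The parent/child invalidation behaviour described under 'Cache Dependencies' above amounts to a cascading delete over a dependency graph. The library is .NET; here is a small Python sketch of the idea using the exact key names from the scenario above (the class itself is illustrative, not the CacheAdapter implementation).

```python
class DependencyCache:
    """Illustrative sketch of cascading cache-dependency invalidation."""
    def __init__(self):
        self._store = {}
        self._dependents = {}  # parent key -> keys invalidated along with it

    def set(self, key, value, depends_on=None):
        self._store[key] = value
        if depends_on is not None:
            self._dependents.setdefault(depends_on, []).append(key)

    def invalidate(self, key):
        self._store.pop(key, None)
        # Cascade recursively, so dependents of dependents are cleared too.
        for child in self._dependents.pop(key, []):
            self.invalidate(child)

    def contains(self, key):
        return key in self._store

cache = DependencyCache()
cache.set("ParentCacheKey", "p")
cache.set("ChildItem1", "c1", depends_on="ParentCacheKey")
cache.set("SubChild1", "s1", depends_on="ChildItem1")
cache.set("SubChild2", "s2", depends_on="ChildItem1")

cache.invalidate("ChildItem1")  # removes SubChild1/SubChild2, leaves the parent
print(cache.contains("ParentCacheKey"), cache.contains("SubChild1"))
```

Invalidating 'ParentCacheKey' instead would have cleared the whole tree, matching the diagram's description.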

Owin, Katana and getting started

Thu, 04 Apr 2013 22:19:22 GMT

Introduction

This article describes an emerging open source specification referred to as Owin, an Open Web Interface for .Net: what it is, and how it might be beneficial to the .Net technology stack. It will also provide a brief, concise look at how to get started with Owin and related technologies. In my initial investigations of Owin and how to play with it, I found lots of conflicting documentation and unclear explanations of how to make it work and why. This article is an attempt to better articulate those steps, so that you may save yourself lots of time and come to a clearer understanding in a much shorter time than I did.

Note: The code for this article can be downloaded from here.

First, What is it?

Just in case you are not aware, a community driven project referred to as Owin is really gaining traction. Owin is a simple specification that describes how components in a HTTP pipeline should communicate. The details of what is in the communication between components are specific to each component, however there are some common elements. Owin itself is not a technology, just a specification.

I am going to gloss over many details here in order to remain concise, but at its very core, Owin describes one main component, which is the following interface:

    Func<IDictionary<string, object>, Task>

This is a function that accepts a simple dictionary of objects, keyed by a string identifier, and returns a task. The objects in the dictionary will vary depending on what each key refers to. More often, you will see it referenced like this:

    using AppFunc = Func<IDictionary<string, object>, Task>;

An actual implementation might look like this:

    public Task Invoke(IDictionary<string, object> environment)
    {
        var someObject = environment["some.key"] as SomeObject;
        // etc...
    }

This is essentially how the "environment", or information about the HTTP context, is passed around. Looking at the environment argument of this method, you could interrogate it as follows:

    var httpRequestPath = environment["owin.RequestPath"] as string;
    Console.WriteLine("Your Path is: [{0}]", httpRequestPath);

If an HttpRequest was being made to 'http://localhost:8080/Content/Main.css', then the output would be:

    Your Path is: [/Content/Main.css]

In addition, while not part of the spec, the IAppBuilder interface is also core to the functioning of an Owin module (or 'middleware' in Owin speak):

    public interface IAppBuilder
    {
    [...]
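Owin's core AppFunc, Func<IDictionary<string, object>, Task>, is close in spirit to Python's WSGI/ASGI model: an application is an async callable that receives an environment dictionary. A tiny Python sketch of the same idea, reusing the "owin.RequestPath" key from the example above; the "demo.Output" key is my own placeholder, not part of any spec.

```python
import asyncio

async def app(environment):
    # Read the request path from the environment dictionary, mirroring
    # environment["owin.RequestPath"] in the C# sample above.
    path = environment.get("owin.RequestPath", "")
    environment["demo.Output"] = f"Your Path is: [{path}]"

env = {"owin.RequestPath": "/Content/Main.css"}
asyncio.run(app(env))
print(env["demo.Output"])  # -> Your Path is: [/Content/Main.css]
```

Because everything flows through one dictionary-in, task-out signature, middleware components can be composed freely without sharing any concrete types, which is the whole point of the specification.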

Debugging Pain–End to End

Wed, 20 Feb 2013 21:18:47 GMT

We had an issue recently that caused us some time and quite a lot of head scratching. We had made some relatively minor changes to our product and performed a release into staging for testing. We released our main web application as well as our custom built support tool (also a web app). After a little bit of testing from our QA team, a few bugs were uncovered. One where a response to a cancel action seemingly was not actioned, and an issue where a timeout occurred on a few requests. Nothing too huge and certainly seemed fixable. Off to work The timeouts “seemed” to be data specific and possibly because of our 3rd party web service call being made. It seemed to be only occurring in our support tool, the main web app was not affected. Since the main web app is the priority, I looked at the “cancel” issue not working. It seemed that the cancel request was being made to our server (via an ajax call) but never returning from said call. This looked very similar to our issue with the support tool. A little further investigation showed that both the support tool and our main web app were issuing ajax requests to a few action methods (we use ASP.Net MVC 4.5) and never returning. Ever. I tried recycling the application pool in IIS. This worked for a brief period, then requests to a few particular requests to action methods were not returning. Web pages from other action methods and even other ajax requests were working fine so at least we knew what surface area we had to look at. Looking at the requests via Chrome, we could see the request in a constant pending state, never satisfied. We began looking at the server. We instituted some page tracing, looked at event logs and also looked at the Internet Information Server logs. Nothing. Nada. We could see the successful requests come in, but these pending requests were not logged. Fiddler showed they were definitely outgoing, but the web server showed nothing. 
Using the Internet Information Services management console, we looked at the current worker processes, which is available when clicking on the root node within the IIS management console. We could see our application pool, and right clicking on this allowed us to view current requests. And there they were, all waiting to be executed, and waiting….. and waiting.

What's the hold up?

So what was causing our requests to get backlogged? We tried going into the code, removing calls to external services and trying to isolate the problem areas. All of this was not reproducible locally, nor in any other environment. Eventually, trying to isolate the cause led us to removing everything from the controller actions apart from a simple Thread.Sleep. While this worked and the problem did not present, we were in no way closer, as the cause was still in potentially any number of code paths.

Take a dump

A colleague suggested using DebugDiag (a free diagnostic tool from Microsoft) to look at memory dumps of the process. So that is what we did. Using DebugDiag, we extracted a memory dump of the process. DebugDiag has some really nice features, one of which is to execute predefined scripts in an attempt to diagnose any issues and present a summary of what was found. It also has a nice wizard based set of steps to get you up and running quickly. We chose to monitor for performance, and also for HTTP response time. We then added the specific URLs we were monitoring. We also chose what kind of dumps to take, in this case web application pools. We decided on the time frequency (we chose every 10 seconds) and a maximum of 10 full dumps. After that, we set the dump path, named and activated the rule, and we were good to go. With the requests already built up in the queue, and after issuing some more 'pending' requests, we could see some memory dumps being taken. A c[...]

CacheAdapter minor update to version 2.5.1

Wed, 23 Jan 2013 03:42:57 GMT

For information on previous releases, please see here.

My cache adapter library has been updated to version 2.5.1. This one contains only minor changes which are:

  • Use of the Windows Azure Nuget package instead of the assemblies directly referenced.
  • Use of the ‘Put’ verb instead of ‘Add’ when using AppFabric to prevent errors when calling ‘Add’ more than once.
  • Minor updates to the example code to make it clearer that the code is indeed working as expected.
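The Put-versus-Add change matters because AppFabric's DataCache.Add throws if the key already exists, whereas Put adds or replaces. A minimal sketch of the difference, assuming the standard Microsoft.ApplicationServer.Caching client (this is illustrative only, not the library's actual internal code):

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

public static class PutVersusAdd
{
    public static void Store(DataCache cache, string key, object item)
    {
        // cache.Add(key, item) would throw a DataCacheException the second
        // time it is called for the same key. Put adds the item if the key
        // is new, or replaces the existing value, so repeated calls are safe.
        cache.Put(key, item, TimeSpan.FromMinutes(5));
    }
}
```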

Thanks to contributor Darren Boon for forking and providing the Azure updates for this release.

Writing an ASP.Net Web based TFS Client

Tue, 27 Nov 2012 01:53:17 GMT

So one of the things I needed to do was write an ASP.Net MVC based application for our senior execs to manage a set of arbitrary attributes against stories, bugs etc, to be able to attribute whether the item was related to Research and Development, and if so, what kind. We are using TFS Azure and don't have the option of custom templates. I have decided on using a string based field within the template that is not very visible, and which we don't otherwise use, to hold a small set of custom values which will determine the research and development association. However, this string munging on the field is not very user friendly, so we need a simple tool that can display attributes against items in a simple dropdown list or something similar. Enter a custom web app that accesses our TFS items in Azure. (Note: We are also using Visual Studio 2012.)

Now TFS Azure uses your Live ID, and it is not really possible to easily use this in a server based app where no interaction is available. Even if you capture the Live ID credentials yourself and try to submit them to TFS Azure, it won't work. Bottom line is that it is not straightforward nor obvious what you have to do. In fact, it is a real pain to find, and there are some answers out there which don't appear to be answers at all, given they didn't work in my scenario. So for anyone else who wants to do this, here is a simple breakdown of what you have to do:

Go here and get the "TFS Service Credential Viewer". Install it, run it, connect to your TFS instance in Azure and create a service account. Note the username and password exactly as it presents them to you. This is the magic identity that will allow unattended, programmatic access. Without this step, don't bother trying to do anything else.
In your MVC app, reference the following assemblies from "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0":

  • Microsoft.TeamFoundation.Client.dll
  • Microsoft.TeamFoundation.Common.dll
  • Microsoft.TeamFoundation.VersionControl.Client.dll
  • Microsoft.TeamFoundation.VersionControl.Common.dll
  • Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll
  • Microsoft.TeamFoundation.WorkItemTracking.Client.dll
  • Microsoft.TeamFoundation.WorkItemTracking.Common.dll

If hosting this in Internet Information Server, you will need to enable 32 bit support for the application pool this app runs under. You also have to allow the TFS client assemblies to store a cache of files on your system. If you don't do this, you will authenticate fine, but then get an exception saying that it is unable to access the cache at some directory path when you query work items. You can set this up by adding the following to your web.config, in the element as shown below:

With all that in place, you can write the following code:

var token = new Microsoft.TeamFoundation.Client.SimpleWebTokenCredential("{your-service-account-name}", "{your-service-acct-password}");
var clientCreds = new Microsoft.TeamFoundation.Client.TfsClientCredentials(token);
var currentCollection = new TfsTeamProjectCollection(new Uri("https://{yourdomain}/defaultcollection"), clientCreds);
currentCollection.EnsureAuthenticated();

In the above code, note that the URL contains the "defaultcollection" at the end. Obviously replace {yourdomain} with whatever is defined for your TFS in Azure instance[...]
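The web.config addition that gives the TFS client object model somewhere to write its cache is an appSettings entry. The key name below is from memory and the path is an example, so verify both against your environment (the app pool identity needs write access to the directory):

```xml
<configuration>
  <appSettings>
    <!-- Directory where the TFS work item tracking client can persist
         its metadata cache when running inside a web application. -->
    <add key="WorkItemTrackingCacheRoot" value="C:\TfsClientCache" />
  </appSettings>
</configuration>
```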

CacheAdapter 2.5–Memcached revised

Wed, 16 May 2012 23:14:51 GMT

Note: For more information around the CacheAdapter, see my previous posts here, here, here and here

You may have noticed a number of updates to the CacheAdapter package on nuget as of late. These are all related to performance and stability improvements for the memcached component of the library.

However, it became apparent that I needed more performance from the memcached support to make the library truly useful and perform as well as can be expected.

I started playing with optimising serialisation processes and really optimising the socket connections used to communicate with the memcached instance. As anybody who has worked with sockets before may tell you, you quickly start to look at pooling your connections to ensure you do not have to go about recreating a connection every time you want to communicate, especially if we are storing or retrieving items in a cache as this can be very frequent.

So I started implementing a pooling mechanism to increase the performance. I got it improved to a reasonable extent, then hit a hurdle where performance could be increased further but it was becoming harder and more complex to retain stability. In effect, I was getting lost trying to solve a problem for which a solution had already been in place for a long time.

It was about then I decided it best to simply take a dependency on the most excellent Enyim memcached client. It is fast. Really fast, and blew away everything I had done in terms of performance. So that is the essence of this update. The memcached support in the cache adapter comes courtesy of Enyim. Enjoy the extra speed that comes with it.
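The point of the change is that calling code is untouched; the same provider call now runs against the Enyim-backed memcached, or any other configured engine. A rough sketch, assuming the library's ICacheProvider has already been resolved (the namespace is an assumption, and SomeData mirrors the sample type used in these posts):

```csharp
using System;
using Glav.CacheAdapter.Core; // namespace assumed; check the package source

public class SomeData
{
    public string SomeText { get; set; }
    public int SomeNumber { get; set; }
}

public class CacheUsageSketch
{
    private readonly ICacheProvider _cacheProvider;

    public CacheUsageSketch(ICacheProvider cacheProvider)
    {
        _cacheProvider = cacheProvider;
    }

    public SomeData GetData()
    {
        // Identical call whether config selects memory, ASP.NET web cache,
        // AppFabric or the (now Enyim-backed) memcached implementation.
        return _cacheProvider.Get<SomeData>("some-data-key", TimeSpan.FromMinutes(5), () =>
        {
            // Executed only on a cache miss.
            return new SomeData { SomeText = "example", SomeNumber = 1 };
        });
    }
}
```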

Note: There are no other updates to any of the other caching implementations.

CacheAdapter 2.4 – Bug fixes and minor functional update

Wed, 28 Mar 2012 20:22:54 GMT

Note: If you are unfamiliar with the CacheAdapter library and what it does, you can read all about its awesome ability to utilise memory, ASP.Net Web, Windows Azure AppFabric and memcached caching implementations via a single unified, simple to use API from here and here. The CacheAdapter library is receiving an update to version 2.4 and is currently available on Nuget here.

Update: The CacheAdapter has actually just had a minor revision to 2.4.1. This significantly increases the performance and reliability of the memcached scenario under more extreme loads. General to moderate usage won't see any noticeable difference though.

Bugs

This latest version fixes a bug that is only present in the memcached implementation and is only seen at rare, intermittent times (making it particularly hard to find). The bug is where a cache node would be removed from the farm when errors in deserialisation of cached objects occurred due to serialised data not being read from the stream in its entirety. The code also contains enhancements to better surface serialisation exceptions to aid in the debugging process. This is also specifically targeted at the memcached implementation, and is important when moving from something like the memory or ASP.Net Web caching mechanisms to memcached, where the serialisation rules are not as lenient. There are a few other minor bug fixes, some code cleanup and a little refactoring.

Minor feature addition

In addition to these bug fixes, many people have asked for a single setting to either enable or disable the cache. In this version, you can disable the cache by setting the IsCacheEnabled flag to false in the application configuration file. The example configuration specifies the cache type (memcached), the endpoint (localhost:11211) and the IsCacheEnabled value (False). Your reasons to use this feature may vary (perhaps some performance testing or problem diagnosis). At any rate, disabling the cache will cause every attempt to retrieve data from the cache to result in a cache miss, returning null.
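In configuration terms, the new flag looks something like the fragment below. The exact appSettings key names here are an assumption from memory; the ReadMe.txt shipped with the package lists the definitive ones:

```xml
<appSettings>
  <!-- Which cache engine to use: memory, web, AppFabric or memcached (assumed key name) -->
  <add key="Cache.CacheToUse" value="memcached" />
  <!-- Endpoint(s) for the distributed cache (assumed key name) -->
  <add key="Cache.DistributedCacheServers" value="localhost:11211" />
  <!-- The new global switch: false means every ICacheProvider.Get is a cache miss -->
  <add key="Cache.IsCacheEnabled" value="false" />
</appSettings>
```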
If you are using the ICacheProvider with the delegate/Func syntax to populate the cache, the delegate method will get executed every single time. For example, when the cache is disabled, the following delegate/Func code will be executed on every attempt to get this data:

var data1 = cacheProvider.Get<SomeData>("cache-key", DateTime.Now.AddHours(1), () =>
{
    // With the cache disabled, this data access code is executed on every
    // attempt to get this data via the CacheProvider.
    var someData = new SomeData() { SomeText = "cache example1", SomeNumber = 1 };
    return someData;
});

One final note: If you access the cache directly via the ICache instance, instead of the higher level ICacheProvider API, you bypass this setting and still access the underlying cache implementation. Only the ICacheProvider instance observes the IsCacheEnabled setting.

Thanks to those individuals who have used this library and provided feedback. If you have any suggestions or ideas, please submit them to the issue register on bitbucket (which is where you can grab all the source code from too)[...]

ASP.NET Web Api–Request/Response/Usage Logging

Sun, 26 Feb 2012 10:26:00 GMT

For part 1 of this series of blog posts on ASP.Net Web Api, see here.

Introduction

In Part 2 of this blog post series, we deal with the common requirement of logging, or recording the usage of your Web Api. That is, recording when an Api call was made, what information was passed in via the Api call, and what response was issued to the consumer.

My new Api is awesome – but is it being used?

So you have a shiny new Api all built on the latest bits from Microsoft, the ASP.Net Web Api that shipped with the Beta of MVC 4. A common need for a lot of Apis is to log usage of each call: to record what came in as part of the request, and to also record what response was sent to the consumer. This kind of information is really handy for debugging. Not just for you, but also for your customers as well. Being able to backtrack over the history of Api calls to determine the full context of some problem for a consumer can save a lot of time and guesswork. So, how do we do this with the Web Api? Glad you asked.

Determining what kind of injection point to utilise

We have a few options when it comes to determining where to inject our custom classes/code to best intercept incoming and outgoing data. To log all incoming and outgoing data, the most applicable interception point is a System.Net.Http.DelegatingHandler. These classes are message handlers that apply to all messages for all requests and can be chained together to have multiple handlers registered within the message handling pipeline. For example, in addition to Api usage logging, you may want to provide a generic authentication handler that checks for the presence of some authentication key. I could have chosen to use filters, however these are typically more scoped to the action itself. I could potentially use model binders, but these are relatively late in the processing cycle and not generic enough (plus it would take some potentially unintuitive code to make it work as we would want).
Enough with theory, show me some code

So, our initial Api usage logger will inherit from the DelegatingHandler class (in the System.Net.Http namespace) and provide its own implementation:

public class ApiUsageLogger : DelegatingHandler
{
    protected override System.Threading.Tasks.Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
    {
        //your stuff goes here...
    }
}

We only really need to override one method, the 'SendAsync' method, which returns a Task<HttpResponseMessage>. Task objects play heavily in the new Web Api and allow asynchronous processing to be used as the primary processing paradigm, allowing better scalability and processing utilisation. Since everything is asynchronous via the use of Tasks, we need to ensure we play by the right rules and utilise the asynchronous capabilities of the Task class. A more fleshed out version of our method is shown below:

protected override System.Threading.Tasks.Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
{
    // Extract the request logging information
    var requestLoggingInfo = ExtractLoggingInfoFromRequest(request);

    // Execute the request, this does not block
    var response = base.SendAsync(request, cancellationToken);

    // Log the incoming data to the database
    WriteLoggingInfo(requestLoggingInfo);

    // Once the response is processed asynchronously, log the response data
    // to the database
    response.ContinueWith((responseMsg) =>
    {
        // Extract the response logging info then persist the information
        var responseLoggingInfo = ExtractResponseLoggingInfo(requestLogging[...]

MVC4 and Web Api– make an Api the way you always wanted–Part 1

Sat, 18 Feb 2012 01:33:49 GMT

Update: Part 2 of this series is available here. ASP.NET MVC is a huge success as a framework and just recently, ASP.NET MVC4 Beta was released. While there are many changes in this release, I want to specifically focus on the WebApi portion of it. In previous flavours of MVC, many people wanted to develop REST Apis and didn't really want to use WCF for this purpose. The team at Microsoft was progressing along with a library called WcfWebApi. This library used a very WCF'esque way of defining an Api. This meant defining an interface and decorating the interface with the appropriate WCF attributes to constitute your Api endpoints. However, a lot of people don't like to use a WCF style of programming, and are really comfortable in the MVC world, especially when you can construct similar REST endpoints in MVC with extreme ease. This is exactly what a lot of people who wanted a REST Api did: they simply used ASP.NET MVC to define a route and handled the payload themselves via standard MVC controllers. What the WcfWebApi did quite well, though, were things like content negotiation (do you want XML or json?), auto help generation, message/payload inspection, transformation, parameter population and a lot of other things. Microsoft have recognised all this and decided to mix it all together: take the good bits of the WcfWebApi and the Api development approach of MVC, and create an Api framework to easily expose your REST endpoints while retaining the MVC ease of development. This is the WebApi, and it supersedes the WcfWebApi (which will not continue to be developed). So how do you make a REST sandwich now? Well, best to start with the code. Firstly, let's define a route for our REST Api.
In the Global.asax, we may have something like this:

public static void RegisterApis(HttpConfiguration config)
{
    config.MessageHandlers.Add(new ApiUsageLogger());
    config.Routes.MapHttpRoute("contacts-route", "contacts", new { controller = "contacts" });
}

Ignore the MessageHandler line for now, we will get back to that in a later post. You can see we are defining/mapping a Http route. The first parameter is the unique route name, the second is the route template to use for processing these route requests. This means that when I get a request like http://MyHost/Contacts, this route will get a match. The third parameter specifies the default properties. In this case, I am simply stating that the controller to use is the ContactsController. This is shown below.

public class ContactsController : ApiController
{
    public List<ContactSummary> GetContacts()
    {
        // do stuff....
    }
}

You can see that we have a class that inherits from the new ApiController class. We have a simple controller action that returns a list of ContactSummary objects. This is (while overly simplistic) a fully fledged REST Api that supports content negotiation, and as such will respond with json or XML if it is requested by the client. The WebApi fully supports your typical Http verbs such as GET, PUT, POST, DELETE etc, and in the example above, the fact that I have an ApiController action method prefixed with 'Get' means it will support the GET verb. If I had a method prefixed with 'Post' then that action method, by convention, would support the POST verb. You can optionally decorate the methods with [System.Web.Http.HttpGet] and [System.Web.Http.HttpPost] attributes to achieve the same effect. And OData as well? Want to support OData? Then simply return an IQueryable<T>. Your Web Api will support querying via the OData URL conventions. So how do I access this as a consumer? Accessing your shiny new WebApi can n[...]
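The OData remark deserves a small sketch. ContactSummary's shape and the sample data below are assumptions; the relevant part is the IQueryable<T> return type:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

public class ContactSummary
{
    public string Name { get; set; }
}

public class ContactsController : ApiController
{
    // Returning IQueryable<T> rather than List<T> lets the Web Api apply
    // OData query options such as $top, $skip, $filter and $orderby from
    // the URL, e.g. http://MyHost/contacts?$top=10&$orderby=Name
    public IQueryable<ContactSummary> GetContacts()
    {
        var contacts = new List<ContactSummary>
        {
            new ContactSummary { Name = "Jill" },
            new ContactSummary { Name = "Jack" },
        };
        return contacts.AsQueryable();
    }
}
```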

ScriptHelper now a Nuget package, and managing your Javascript

Sat, 21 Jan 2012 03:45:54 GMT

For a while now I have been looking at different ways of managing javascript inclusion in web pages, and also managing the dependencies that each script inclusion requires. Furthermore, when working with ASP.NET MVC and partial views, keeping up with the ever increasing number of dependencies, as well as ensuring that each partial view has the scripts it requires, can be a little challenging. At the very least, it is messy and tedious. Ideally, I'd like every page or partial view to be able to express what scripts it requires and let the framework take care of removing duplication, minification etc. So, after looking at the needs of our application, as well as the needs of others, I developed the ScriptHelper component, which is now available as a Nuget package. I had released an initial version some time ago; details are here. This latest version has the following features:

Support for RequiresScriptsDeferred and RenderDeferred methods to allow you to specify script requirements as many times as you like, wherever you like, and to have these requirements only rendered to the page when the RenderDeferred method is called. This makes it easy to include the RenderDeferred call at the bottom of your master page so all deferred scripts are rendered then. These scripts can be minified and combined at this time as well. So you can do:

@ScriptDependencyExtension.ScriptHelper.RequiresScriptsDeferred("jQuery")

and maybe somewhere else in the page or in another partial view:

@ScriptDependencyExtension.ScriptHelper.RequiresScriptsDeferred("jQuery-validate-unobtrusive")
@ScriptDependencyExtension.ScriptHelper.RequiresScriptsDeferred("SomeOtherScripts")

Then when the:

@ScriptDependencyExtension.ScriptHelper.RenderDeferredScripts()

is called, all the previously deferred scripts are combined, minified and rendered as one single file inclusion.

Support for .Less so you can have variables and functions in your CSS.
No need to use a .less extension, as .Less is automatically invoked, if enabled, the first time it is required. Note: Although the project source is titled as an MVC script helper, it can also be used within Webforms without issue, as it is a simple static method with no reliance on anything from MVC. I just happened to start coding it with an MVC project in mind. Additionally, the helper fully supports CSS and .less CSS semantics and syntax. There are other frameworks that do similar things, but I created this one for a few reasons:

  • The code is pretty small and simple. Very easy to modify and extend to suit your own purposes.
  • It currently uses the Microsoft Ajax minifier. If you don't like it, implement the "IScriptProcessingFilter" interface and replace the current minifier component with your own.
  • I liked the ability to express dependencies explicitly. The number of JS libraries to use grows every day and it can get tricky in large apps to easily see what is required and needed.
  • I didn't find one that easily implemented the deferred loading scenario, took care of duplicates, and handled minification and .less support. Maybe there is one now, but I got started on it anyway. Or maybe there is one, but the implementation looked ugly. Either way, I wasn't satisfied with the current offerings.
  • I like coding stuff.

Download the Nuget package from here. Download the source from here. For full documentation on ScriptHelper (minus the new features here), see this post. A ReadMe.txt file is included in the package with all configuration details. But .Net 4.5 will include a bundling facility to address some of this. Why would I use this? See Scott Gu's blog po[...]

My Code Kata–A Solution Kata

Sun, 13 Nov 2011 03:53:56 GMT

There are many developers and coders out there who like to do code katas to keep their coding ability up to scratch and to practice their skills. I think it is a good idea. While I like the concept, I find them dead boring and of minimal purpose. Yes, they serve to hone your skills, but that's about it. They are often quite abstract, in that they usually focus on a small problem set requiring specific solutions. That is fair enough, as that is how they are designed, but again, I find them quite boring. What I personally like to do is go for something a little larger and a little more fun. It takes a little more time and is not as easily executed as a kata, but it serves the same purpose from a practice perspective and allows me to continue to solve some problems that are not directly part of the initial goal. This means I can cover a broader learning range and have a bit more fun. If I am lucky, sometimes they even end up being useful tools. With that in mind, I thought I'd share my current 'kata'. It is not really a code kata as it is too big; I prefer to think of it as a 'solution kata'. The code is on bitbucket here.

What I wanted to do was create a kind of simplistic virtual world where I can create a player, or a class, stuff it into the world, and see if it survives and can navigate its way to the exit. Requirements were pretty simple:

  • Must be able to define a map to describe the world using simple X,Y co-ordinates. Z co-ordinates as well if you feel like getting clever.
  • Should have the concept of entrances, exits, solid blocks, and potentially other materials (again, if you want to get clever).
  • A coder should be able to easily write a class which will act as an inhabitant of the world.
  • An inhabitant will receive stimulus from the world in the form of the surrounding environment and be able to make a decision on an action, which it passes back to the 'world' for processing.
  • At a minimum, an inhabitant will have sight and speed characteristics which determine how far they can 'see' in the world, and how fast they can move.
  • Coders who write a really bad 'inhabitant' should not adversely affect the rest of the world.
  • Should allow multiple inhabitants in the world.

So that was the solution I set out to build, to act as a practice solution and a little bit of fun. It had some interesting problems to solve and I figured, if it turned out ok, I could potentially use this as a 'developer test' for interviews: ask a potential coder to write a class for an inhabitant, show the coder the map they will navigate, but also mention that we will use their code to navigate a map they have not yet seen that is a little more complex. I have been playing with this solution for a short time now and have it working in basic concept. Below is a screen shot using a very basic console visualiser that shows the map, boundaries, blocks, entrance, exit and players/inhabitants. The yellow asterisks '*' are the players, green 'O' the entrance, purple '^' the exit, maroon/browny '#' are solid blocks. The players can move around at different speeds, knock into each other, and make directional movement decisions based on what they see and who is around them. It has been quite fun to write and it is also quite fun to develop different players to inject into the world. The code below shows a really simple implementation of an inhabitant that can work out what to do based on stimulus from the world. It is pretty simple and just tries to move in some direction if there is nothing blocking the path.

public class TestPlayer:Living[...]
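Since the TestPlayer source above is truncated, here is a self-contained sketch of the same stimulus/decision idea. All of the type and member names here (Direction, WorldStimulus, SimplePlayer) are hypothetical, not the actual types from the bitbucket source:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical shapes, for illustration only.
public enum Direction { None, North, South, East, West }

public class WorldStimulus
{
    // What the world tells an inhabitant each tick: which adjacent
    // directions are blocked by solid material.
    public HashSet<Direction> BlockedDirections { get; } = new HashSet<Direction>();
}

public class SimplePlayer
{
    // Try to keep heading east toward the exit; otherwise take any open path.
    public Direction DecideMove(WorldStimulus stimulus)
    {
        if (!stimulus.BlockedDirections.Contains(Direction.East)) return Direction.East;
        foreach (var d in new[] { Direction.North, Direction.South, Direction.West })
        {
            if (!stimulus.BlockedDirections.Contains(d)) return d;
        }
        return Direction.None; // boxed in, stay put
    }
}
```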

Updates to the CacheAdapter Package

Sun, 21 Aug 2011 03:12:52 GMT

Note: This post was originally going to detail the changes from 2.0 to 2.1, however in between the time of release and this post, I released 2.2, so this post will detail all the changes up to 2.2. In the last update to my CacheAdapter library (you can view it here), I mentioned the support for the memcached cache engine. That brought the version to 2.0. As previously mentioned, you can get this package from here. I have recently updated the version of the library to 2.2, which now includes the following changes:

  • The ability to provide specific configuration data to a particular cache engine without introducing a bunch of configuration tags that are only relevant to a particular cache mechanism: the addition of a CacheSpecificData configuration element.
  • The addition of 2 new, simpler API methods to get data from the cache:
    T Get<T>(DateTime absoluteExpiryDate, Func<T> getData)
    T Get<T>(TimeSpan slidingExpiryWindow, Func<T> getData)
    Notice that there is no cache key required to be specified. It is auto generated from the Func delegate. (Thanks to Corneliu for this great idea.) So you can write really simple code to get data from the cache like so:
    var myData = cacheProvider.Get<SomeData>(DateTime.Now.AddSeconds(2), () =>
    {
        var somedata = new SomeData();
        // populate data from somewhere (db, code etc..)
        return somedata;
    });
  • Simplification of the interface: replacing the GetDataToCacheDelegate<T> delegate with a Func<T>, and removal of the bool addToPerRequestCache = false parameter from the Get methods.

So why the new configuration element? Well, previous versions of this library supported Windows Azure AppFabric, however only in a non-Azure scenario. When Windows AppFabric is used within Azure, a security mode and authentication key are required. This is easily achieved with config, but I wanted a way that did not introduce redundant elements that were not applicable to other cache mechanisms AND that may be useful for other cache mechanisms if they require it.
Basically, I wanted a generic way of providing configuration data to cache mechanisms that could be used by any cache mechanism, and that would not pollute the configuration with lots of items specific to only one mechanism. To that end, this release has introduced a "CacheSpecificData" configuration element, with a value such as the one shown below:

UseSsl=false;SecurityMode=Message;MessageSecurityAuthorizationInfo=your_secure_key_from_azure_dashboard

Currently, this element is only of use to the AppFabric cache mechanism, but as support for more cache types grows, this item will be used more and more. This element is a simple series of key/value pairs separated by a semi-colon. In the example above, the following values are defined:

  • UseSsl = false
  • SecurityMode = Message
  • MessageSecurityAuthorizationInfo = your_secure_key_from_azure_dashboard

This equates to the equivalent configuration for Windows AppFabric in Azure.
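Consuming such a semi-colon separated key/value string is straightforward; a minimal sketch of the parsing (the class and method names here are mine, not the library's actual code):

```csharp
using System;
using System.Collections.Generic;

public static class CacheSpecificDataParser
{
    // Splits "Key1=Value1;Key2=Value2;..." into a dictionary,
    // ignoring empty or malformed entries.
    public static Dictionary<string, string> Parse(string data)
    {
        var result = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
        if (string.IsNullOrEmpty(data)) return result;

        foreach (var pair in data.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries))
        {
            var separatorIndex = pair.IndexOf('=');
            if (separatorIndex <= 0) continue; // no key present, skip
            result[pair.Substring(0, separatorIndex).Trim()] = pair.Substring(separatorIndex + 1).Trim();
        }
        return result;
    }
}
```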

CacheAdapter–V2 Now with memcached support

Mon, 04 Jul 2011 04:00:34 GMT

Previously I blogged about my CacheAdapter project, which is available as a Nuget package and allows you to program against a single interface implementation, but have support for using Memory, ASP.NET Web or Windows AppFabric cache mechanisms via configuration. I am happy to announce that my CacheAdapter now has support for memcached. Version 2 of the Nuget package is available here. Alternatively, all the source code is available here. What this now means is you can write one line of code to get or store an item in the cache, and that code can automatically support using Windows AppFabric, memcached, MemoryCache or ASP.NET web cache. No code change required. I am particularly happy about having memcached support as it means a few things:

  • Free / Open Source: It is a free, well established open source caching engine that is widely used in many applications.
  • Easy: It is easy to set up.
  • Simple and Cheap: It provides an alternative to using Windows AppFabric. AppFabric can be a little tricky to set up sometimes. If you are using Windows Azure, AppFabric is a simple checkbox, BUT you need to pay extra for the privilege of using it based on how much you use the cache service. By contrast, memcached can be installed easily on Azure and requires no extra cost whatsoever.
  • Auto fail over support: Windows AppFabric has some limitations for a relatively small cache farm. AppFabric utilises a "lead host" to co-ordinate small cache farms of 3 or fewer cache servers. If the lead host goes down, they all go down. The memcached implementation has no reliance on any single point of failure, so if one memcached node fails, requests will automatically be redirected to the nodes that are alive. If the dead node comes back alive again, it is re-introduced to the cache pool after about 1 minute or so. At worst, it results in a few cache misses.

Note: This release contains a few namespace changes that may break older versions if you are using the objects directly.
Namely, AppFabric support has been moved from the Glav.CacheAdapter.Distributed namespace to the Glav.CacheAdapter.Distributed.AppFabric namespace. This is to allow differentiation between AppFabric and memcached within the distributed namespace. In addition, at the request of some users, I have added a simple 'Add' method to the ICacheProvider interface for ease of use. The interface now looks like this:

public interface ICacheProvider
{
    T Get<T>(string cacheKey, DateTime absoluteExpiryDate, GetDataToCacheDelegate<T> getData, bool addToPerRequestCache = false) where T : class;
    T Get<T>(string cacheKey, TimeSpan slidingExpiryWindow, GetDataToCacheDelegate<T> getData, bool addToPerRequestCache = false) where T : class;
    void InvalidateCacheItem(string cacheKey);
    void Add(string cacheKey, DateTime absoluteExpiryDate, object dataToAdd);
    void Add(string cacheKey, TimeSpan slidingExpiryWindow, object dataToAdd);
    void AddToPerRequestCache(string cacheKey, object dataToAdd);
}

So that is it. I hope you enjoy using memcached support within the CacheAdapter.[...]