
Scott Forsyth's Blog

Postings on IIS, ASP.NET, SQL Server, Webfarms and general system admin.


“Identity not verified” issue in Chrome

Wed, 26 Nov 2014 01:03:00 GMT

I ran into an interesting issue with a site that I’m involved with. This week we started to receive reports of a warning in Chrome that says “Identity not verified”. This is for a site that has been running happily for quite some time. I’m writing this in November 2014. At first the warning only happened for some users, and only in Chrome, making it more difficult to track down. Furthermore, the warning message in Chrome didn’t give many clues as to the real issue. I theorized that it had something to do with a recent update from Chrome, but at first I wasn’t able to prove that. The warning itself tells you that something is wrong, but not really what it is.

After some research, and eventually suspecting that it was Chrome 39 that introduced the issue, we were able to find the authoritative answer. A blog post from the Chrome team in September provides the story:

That’s why Chrome will start the process of sunsetting SHA-1 (as used in certificate signatures for HTTPS) with Chrome 39 in November. HTTPS sites whose certificate chains use SHA-1 and are valid past 1 January 2017 will no longer appear to be fully trustworthy in Chrome’s user interface.

Our certificate expires September 2017 and is signed and hashed with SHA-1. Well, it’s November, we’ve just received the Chrome 39 update, and we have a SHA-1 certificate that is valid past January 1, 2017. That’s us!

SHA-1 (Secure Hash Algorithm) has been used for years to sign and hash various objects, including SSL certificates. In 2005 it was determined to be insecure. As a result, Microsoft and Mozilla have both announced plans to stop supporting SHA-1 certificates in 2017. Google has announced that it will do the same in a phased way leading up to 2017. And that’s exactly what happened here: Chrome has started by showing warnings for SHA-1 certificates whose expiration date is past January 1st, 2017.
So, those with a SHA-1 certificate purchased a few years in advance should see this warning in Chrome now. I do appreciate Chrome being proactive in pushing for a more secure internet, but I’m not a fan of them dinging those with longer-running certificates well in advance of the 2017 date. I’m currently in the process of asking our certificate authority to replace our certificate, and I assume that they will do so, but those with longer-running certificates also have the most invested in them and the most to lose. They could have waited at least another year before turning on this warning. In any case, it is good to update to SHA-2, and it’s a good end result.

This month is the beginning of when these warnings should start to occur, but you should see more of them over the next couple of years. Longer-running certificates will be hit first, and then, as we get closer to 2017, other certificates will receive the warnings. IE, Firefox, Safari and other browsers may start to issue warnings leading up to 2017.

So, if you have been impacted by this and you’re a site owner, you should contact your certificate authority and ask them to reissue your certificate using SHA-2 instead. If you are a web user and you see this warning, you can contact the site owner to make sure that they are aware of it. The site is no less secure today than it was last month, but Google is starting to bring awareness to the less secure SHA-1 signed certificates. [...]
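If you want to check which hash algorithm signs a certificate, OpenSSL can show it. A minimal sketch (the file paths are placeholders; here a throwaway self-signed certificate is generated just to have something to inspect — recent OpenSSL versions sign with SHA-256 by default):

```shell
# Generate a throwaway self-signed certificate for demonstration (paths are placeholders).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.test" \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem -days 365 2>/dev/null

# Print the signature algorithm. A SHA-1 certificate reports
# "sha1WithRSAEncryption"; a SHA-2 certificate reports e.g. "sha256WithRSAEncryption".
openssl x509 -in /tmp/demo-cert.pem -noout -text | grep -m1 "Signature Algorithm"
```

Running the same `openssl x509` inspection against your real certificate file tells you whether you are affected.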

IIS: Using Windows Authentication with Minimal Permissions Granted to Disk

Mon, 03 Mar 2014 19:37:55 GMT

I had a question asked of me recently regarding Windows auth and NTFS permissions. Here’s the question:

When I run an IIS site using Windows Authentication, is there a way to let the Application Pool account access files on disk instead of the logged-in user? Ideally, the developer (or operations) can list the users in web.config only, without the need to add these users to the file permissions or add them to some AD group that has these permissions.

Unless you work with these permissions often, it may be difficult to understand the situation well, so let me explain this another way. To have a properly secured server, you should use the principle of least privilege, which essentially says that you should grant only what is absolutely required to enable a service to work, and nothing more. If you do this properly then you should have a tight list of permissions on disk for your website. The difficulty comes when you use Windows authentication—rather than anonymous authentication—to grant access to a website, or a part of a website. What if you want to use IIS’s URL Authorization to manage access rather than using NTFS? Keep reading and I’ll explain further. First let’s gain more of an understanding of how IIS security works.

Basic permissions required for anonymous access

When you use anonymous access, a clean setup will implement the following settings:

- Anonymous authentication for the website should be set to use the application pool identity.
- Permissions on disk should be granted to:
  - SYSTEM: Full Control
  - Administrators: Full Control
  - Application pool identity: Read or Modify, depending on your requirements. (It’s useful to use the AppPoolIdentity account if you are only accessing local resources.)
  - Users required for FTP, web publishing or any other access to content on disk.
If you can achieve this (and you should!), you will have permissions that grant access to only the one site, which minimizes the attack surface if another site on the same server is untrusted, or if it is exploited.

Basic permissions required for Windows authentication

However, what if you want to use Windows auth to grant or deny users access to your site based on their Windows accounts? First, you would turn off anonymous authentication so that users are required to authenticate with a Windows account. There are now two options for the authorization part—which is to determine which Windows accounts are allowed and which are not:

1. NTFS: Depend on the NTFS permissions (ACLs) on disk to determine which users have access (e.g. User1 is granted access but User2 isn’t). If you grant a user access on disk then they can access the site. If they do not have access then … well, they don’t have access.
2. URL Authorization: Use IIS and/or ASP.NET’s URL Authorization. By default all users are granted access, but you can change this. For example, you can remove the default Allow All Users entry and grant access to User1, User2, and the Administrators group. These settings are saved to the site’s web.config file, so you can set them manually too, and of course set them at the server level or other places in the IIS configuration hierarchy.

When using the URL Authorization method, you would still need to grant the Windows account (e.g. User1) access on disk, basically meaning that options #1 and #2 are both used simultaneously.

Back to the original question

Let’s get back to the original question. What if you don’t want to have to grant the Windows accounts access on disk (#1 above), but you want to use URL Authorization (#2 above) to authorize Windows accounts’ access to your site? Or, to word it another way, what if you want to use #2 without having to worry about #1 too?

Which users require access to disk?
This is possible, but let me step aside again and briefly explain how access to disk is determined. The website accesses the disk by using the w3wp.exe [...]
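As a sketch of what an IIS URL Authorization section like the one described above might look like in a site’s web.config (the user and domain names here are placeholders, not from the original post):

```xml
<configuration>
  <system.webServer>
    <security>
      <authorization>
        <!-- Remove the default "Allow All Users" entry. -->
        <remove users="*" roles="" verbs="" />
        <!-- Grant specific Windows accounts and the Administrators group. -->
        <add accessType="Allow" users="DOMAIN\User1, DOMAIN\User2" />
        <add accessType="Allow" roles="Administrators" />
      </authorization>
    </security>
  </system.webServer>
</configuration>
```

The same settings can be made in IIS Manager under Authorization Rules, which writes this section for you.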

URL Rewrite vs. Redirect; What’s the difference?

Wed, 29 Jan 2014 16:17:00 GMT

IIS URL Rewrite has five different types of actions: Rewrite, Redirect, Custom Response, Abort Request, and None. And if you have ARR (Application Request Routing) installed, then at the server level you’ll also see Route to Server Farm. The two most common actions are Rewrite and Redirect.

A common question from people who are just starting to work with URL Rewrite is: what is the difference between a rewrite and a redirect? I remember wondering the same thing. Fortunately there are some very clear-cut differences between them.

Simply put, a redirect is a client-side request to have the web browser go to another URL. This means that the URL that you see in the browser will update to the new URL. A rewrite is a server-side rewrite of the URL before it’s fully processed by IIS. This will not change what you see in the browser because the change is hidden from the user.

Let’s take a look at some other differences between them:

Redirect:
- Client-side.
- Changes the URL in the browser address bar.
- Supports the following redirect types: 301 (Permanent), 302 (Found), 303 (See Other), 307 (Temporary).
- Useful for search engine optimization by causing the search engine to update the URL.
- Can redirect to the same site or an unrelated site.

Rewrite:
- Server-side.
- Doesn’t change the URL in the browser address bar.
- Redirect status codes are not applicable.
- Useful for search engines by using a friendly URL to hide a messy URL.
- Generally rewrites to the same site using a relative path, although if you have the ARR module installed you can rewrite to a different site. When you rewrite to a different site, URL Rewrite functions as a reverse proxy.
The page request flow for a redirect is:
1. Browser requests a page.
2. Server responds with a redirect status code.
3. Browser makes a second request to the new URL.
4. Server responds to the new URL.

The page request flow for a rewrite is:
1. Browser requests a page.
2. URL Rewrite rewrites the URL and makes a request (still within IIS) for the updated page.

Fiddler is a great tool to see the back and forth between the browser and server. Tools like Process Monitor and native IIS tools are best for getting under the covers.

Let’s take a look at some further examples. A redirect changes the URL in the browser, as in the following cases:
- Adding a www to the domain name.
- Enforcing trailing slashes, or forcing the URL to lowercase.
- Mapping an old URL to a new URL after a site redesign, and letting search engines know about it.

A rewrite doesn’t change the URL in the browser, but it does change the URL before the request is fully processed by IIS. For example, the URL in the browser can be a friendly URL while the final URL seen by ASP.NET is not as friendly. Or you can use any part of the URL in a useful way by rewriting it; again, the URL in the browser remains the same while the path or query string behind the scenes is changed.

These are just some examples. Hopefully they clarify the difference between a rewrite and a redirect in URL Rewrite for IIS and help you with your URL rewriting.[...]
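A minimal sketch of one rule of each type as they would appear under `<system.webServer>` in a site’s web.config (rule names, patterns, and target URLs here are illustrative, not from the original post):

```xml
<rewrite>
  <rules>
    <!-- Redirect: the browser is sent to the new URL; the address bar changes. -->
    <rule name="Redirect example" stopProcessing="true">
      <match url="^oldpage$" />
      <action type="Redirect" url="newpage" redirectType="Permanent" />
    </rule>
    <!-- Rewrite: IIS serves different content; the address bar does not change. -->
    <rule name="Rewrite example" stopProcessing="true">
      <match url="^article/([0-9]+)$" />
      <action type="Rewrite" url="article.aspx?id={R:1}" />
    </rule>
  </rules>
</rewrite>
```

The `{R:1}` back-reference carries the first capture group from the match pattern into the rewritten URL.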

Master the Developer and IT Pro job interviews like you would a game boss

Mon, 28 Oct 2013 14:39:00 GMT

Software developer and IT Pro job interviews can be a lot of fun when you’re prepared for them, but they can be scary and overwhelming when you’re not. Since this concerns the job where you’ll spend half of your waking weekday hours, the more at ease you are with the interviews, the better you’ll be at landing your ideal job.

Whether you’re a seasoned professional or newer to the field, you will benefit from brushing up on your interview skills.

Ben Weiss from Infusive Solutions showed me a creative resource to help prepare for job interviews. Yes, interviews with an ‘s’. There can be up to four interviews for a single job position (with Human Resources, a Senior Developer, the Software Manager, and the Chief Technology Officer), each of which requires unique preparation and execution.

I don’t have an interview coming up but reading this almost made me want to prepare and see how I would do in an interview today.

What makes this document unique is that it’s a lot of fun: it compares the process to preparing to take on one of the end-of-level bosses in your favorite game. Think Mario, Zelda or Duke Nukem. Each game boss requires different weapons or skills, making the analogy not only fun, but applicable too.

At the risk of getting off topic, have you seen the size comparison of Sci-Fi’s greatest machines and monsters? If not, check it out.

I was impressed with the document because it’s a nice, easy 17-page read, which I felt was the right length: complete as a resource, but not too long. It also has links to other resources.

Ben and the other co-authors are trying to bring awareness to their companies. Ben helps software developers and IT pros with job placement in the Tri-state area. However, he agreed to place a link to the document that doesn’t ask for any of your information. The document is available as a free download without asking for anything in return.

If you have a job interview coming up I encourage you to download the PDF doc and check it out. Save it for later if needed. I believe that it will help you in your preparation in a fun and fully worthwhile way.

You can download it here.

Creating a Reverse Proxy with URL Rewrite for IIS

Thu, 24 Oct 2013 16:18:32 GMT

There are times when you need to reverse proxy through a server. The most common example is when you have an internal web server that isn’t exposed to the internet, and you have a public web server accessible to the internet. If you want to serve up traffic from the internal web server, you can do this through the public web server by creating a tunnel (aka reverse proxy). Essentially, you can front the internal web server with a friendly URL, even hiding custom ports. For example, an internal web server reachable only by its internal host name and port can be made available through a friendly public URL. That URL can be made public, or it can be used for your internal staff and be password protected and/or locked down by IP address.

This is easy to do with URL Rewrite and IIS. You will also need Application Request Routing (ARR) installed, even though for a simple reverse proxy you won’t use most of ARR’s functionality. If you don’t already have URL Rewrite and ARR installed, you can do so easily with the Web Platform Installer.

A lot can be said about reverse proxies and the many different situations and ways to route traffic and handle different URL patterns. However, my goal here is to get you up and going in the easiest way possible. Then you can dig in deeper after you get the base configuration in place. URL Rewrite makes a reverse proxy very easy to set up. Note that the URL Rewrite Add Rules template doesn’t include Reverse Proxy at the server level. That’s not to say that you can’t create a server-level reverse proxy, but the URL Rewrite rules template doesn’t help you with that.

Getting Started

First you must create a website on your public web server that has the public bindings that you need. Alternately, you can use an existing site and route certain traffic using conditions. After you’ve created your site, open up URL Rewrite at the site level.
Using the “Add Rule(s)…” template that is opened from the right-hand Actions pane, create a new Reverse Proxy rule. If you receive a prompt (the first time) that the proxy functionality needs to be enabled, select OK. This is telling you that a proxy can route traffic outside of your web server, which happens to be our goal in this case. Be aware that reverse proxy rules can be dangerous if you open sites from inside your network to the world, so just be aware of what you’re doing and why.

The next and final step of the template asks a few questions. The first textbox asks for the name of the internal web server. This can be any URL, including a subfolder. Don’t include the http or https here; the template assumes that it’s not entered.

You can choose whether to perform SSL offloading or not. If you leave this checked then all requests to the internal server will be over HTTP regardless of the original web request. This can help with performance and SSL bindings if all requests are within a trusted network. If the network path between the two web servers is not completely trusted and safe, then uncheck this.

Next, the template enables you to create an outbound rule. This is used to rewrite links in the page to look like your public domain name rather than the internal domain name. Outbound rules have a lot of CPU overhead because the entire web content needs to be parsed and updated. However, if you need it, then it’s well worth the extra CPU hit on the web server. If you check the “Rewrite the domain names of the links in HTTP responses” checkbox, then the From textbox will be filled in with what you entered for the inbound rule, and you can enter your friendly public URL for the outbound rule. This will essentially replace any reference to the internal host name with your public URL in the link tags on your si[...]
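The template generates an inbound rule in the site’s web.config roughly like the following sketch (the internal host name and port are placeholders; the ARR proxy feature must be enabled for the rewrite to route off-box):

```xml
<rewrite>
  <rules>
    <rule name="ReverseProxyInboundRule1" stopProcessing="true">
      <match url="(.*)" />
      <!-- Forward every request to the internal server, preserving the path. -->
      <action type="Rewrite" url="http://internalserver:8080/{R:1}" />
    </rule>
  </rules>
</rewrite>
```

Because this is an action of type Rewrite rather than Redirect, visitors only ever see the public URL.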

Prepending www to 2nd level domain names

Sat, 21 Sep 2013 14:21:22 GMT

A fairly common request for URL Rewrite is to prepend a www to all 2nd-level domains, regardless of the domain name. An IIS URL Rewrite rule can add the www to domain names without requiring you to create multiple rules, and it can maintain the http or https scheme while doing so. If you want to exclude a particular 2nd-level domain name, simply add a negated third condition for the domain name which you want to exclude: [...]
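A sketch of such a rule follows. This is my own reconstruction rather than necessarily the exact rule from the original post; `trackAllCaptures` requires URL Rewrite 2.0, and the `{CACHE_URL}` condition is what preserves http vs. https:

```xml
<rule name="Prepend www to 2nd level domains" stopProcessing="true">
  <match url=".*" />
  <conditions trackAllCaptures="true">
    <!-- Matches only bare 2nd-level hosts (no subdomain), e.g. example.com -->
    <add input="{HTTP_HOST}" pattern="^([^.]+\.[^.]+)$" />
    <!-- Captures the scheme (http or https) from the full cached URL. -->
    <add input="{CACHE_URL}" pattern="^(.+)://" />
  </conditions>
  <action type="Redirect" url="{C:2}://www.{C:1}/{R:0}" redirectType="Permanent" />
</rule>
```

With `trackAllCaptures` enabled, `{C:1}` is the host from the first condition and `{C:2}` is the scheme from the second.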

Why is the IIS default app pool recycle set to 1740 minutes?

Sat, 06 Apr 2013 15:06:23 GMT

Microsoft IIS Server has what appears to be an odd default for the application pool recycle time. It defaults to 1740 minutes, which is exactly 29 hours. I’ve always been a bit curious where that default came from. If you’re like me, you may have wondered too. Wonder no longer!

While at the MVP Summit this year in Bellevue, WA, I had the privilege again of talking with the IIS team. Wade Hilmo was there too. Somehow a discussion about IIS default settings came up, which included the odd 1740 minutes for the app pool recycle interval. Wade told the story of how the setting came into being, and he granted me permission to share.

As you can imagine, many decisions for the large set of products produced by Microsoft come about after a lot of deliberation and research. Others have a geeky and fun origin. This is one of the latter.

The 1740 story

Back when IIS 6 was being developed—which is the version that introduced application pools—a default needed to be set for the regular time interval at which application pools are automatically recycled. Wade suggested 29 hours for the simple reason that it’s the smallest prime number over 24. He wanted a staggered and non-repeating pattern that doesn’t occur more frequently than once per day. In Wade’s words: “you don’t get a resonate pattern”. The default has been 1740 minutes (29 hours) ever since!

That’s a fun little tidbit on the origin of the 1740. How about in your environment, though? What is a good default?

Practical guidelines

First off, I think 29 hours is a good default. For a situation where you don’t know the environment, which is the case for a default setting, having a non-resonant pattern greater than one day is a good idea. However, since you likely know your environment, it’s best to change this. I recommend setting it to a fixed time like 4:00am if you’re on the East Coast of the US, 1:00am on the West Coast, or whatever makes sense for your audience when you have the least amount of traffic.
Setting it to a fixed time each day during low-traffic hours will minimize the impact and also allow you to troubleshoot more easily if you run into any issues. If you have multiple application pools, it may be wise to stagger them so that you don’t overload the server with a lot of simultaneous recycles. Note that IIS overlaps the app pool when recycling, so there usually isn’t any downtime during a recycle. However, in-memory information (session state, etc.) is lost. See this video if you want to learn more about IIS overlapping app pools.

You may ask whether a fixed recycle is even needed. A daily recycle is just a band-aid to freshen IIS in case there is a slight memory leak or anything else that slowly creeps into the worker process. In theory you don’t need a daily recycle unless you have a known problem. I used to recommend turning it off completely if you don’t need it. However, today I’m leaning more towards setting it to recycle once per day at an off-peak time as a proactive measure. My reasons are, first, that your site should be able to survive a recycle without too much impact, so recycling daily shouldn’t be a concern. Secondly, I’ve found that even well-behaving app pools can eventually have something sneak in over time that impacts the app pool. I’ve seen issues from traffic patterns that cause excessive caching or something odd in the application, and I’ve seen the very rare IIS bug (rare indeed!) that isn’t a problem if recycled daily. Is it a band-aid? Possibly, but if a daily recycle keeps a non-critical issue from bubbling to the top, then I believe that it’s a good proactive measure that saves a lot of troubleshooting effort on something that probably isn’t important to troubleshoot. However, if you think you have a real issue that is being suppressed by recycling then, by all means, turn off the auto-recycling so t[...]
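One way this change could be scripted with appcmd is sketched below. The app pool name and the 4:00am time are placeholders, and I’d verify the property paths against your IIS version before relying on them:

```
REM Clear the 1740-minute interval, then add a fixed daily recycle time.
C:\Windows\System32\inetsrv\appcmd.exe set apppool "MyAppPool" /recycling.periodicRestart.time:00:00:00
C:\Windows\System32\inetsrv\appcmd.exe set apppool "MyAppPool" /+recycling.periodicRestart.schedule.[value='04:00:00']
```

The same settings are available in IIS Manager under the application pool’s Recycling settings.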

Using WebDAV with ARR

Tue, 29 Jan 2013 17:33:32 GMT

Application Request Routing (ARR) is a great solution for load balancing and other proxying needs. I’m a big fan and have written often about it.

I had someone ask me this week about WebDAV support for ARR. With his ARR setup, he noted that WebDAV creation, uploads, and downloads worked well, but renaming and moving files did not. He aptly noticed in the IIS logs that the source IP address showed as being from the ARR server rather than from the original user.

Here’s what the IIS log recorded:

2013-01-28 10:36:20 MOVE /Documents/test 80 User Microsoft-WebDAV-MiniRedir/6.1.7601 400 1 0 31

The IP Address is an IP address on the ARR server. And yes, the real IP address has been replaced with a made-up IP to protect the innocent.
To be honest, I haven’t tested WebDAV with ARR myself, but the issue sounded like the proxy-in-the-middle issue. I suggested installing ARRHelper on the web servers, and sure enough, that took care of it. While I didn’t set up a full repro myself, he confirmed that it worked for him after the ARRHelper install, so I feel quite confident in saying that WebDAV will work with ARR as long as ARRHelper is installed.

So if you’re working with WebDAV and ARR and it isn’t working for you, it’s likely that ARRHelper needs to be installed on the web servers.

You can find ARRHelper here, and if interested, I have a short video explaining what it is and why it’s important.

Handling MVC paths with dots in the path

Wed, 16 Jan 2013 14:37:42 GMT

A friend of mine asked me recently how to handle a situation with a dot (.) in the path for an MVC project. The MVC routes didn’t work for a path whose last segment contained a dot, and the standard IIS 404 handler served up the page instead. However, the same URL with a trailing slash did work. The only difference is the trailing slash. For anyone who runs into the same situation, here’s the reason and the solution.

What causes this inconsistency

The issue with the first path is that IIS can’t tell if the path is a file or a folder. In fact, it looks so much like a file (with an extension of .0, say) that that’s what IIS assumes it is. However, the second path is fine because the trailing slash makes it obvious that it’s a folder. We can’t just assign a wildcard handler to MVC in IIS for all file types, because that would break static files like .css and .png, which need to be processed as static content. Note that this would not be an issue if the dot were in a different part of the path. It’s only an issue if the dot is in the last segment.

So how do we resolve it? As I mentioned, we can’t simply set a wildcard handler, because it would break other necessary static files. So we have about three options:

1. You could always change the paths in your application. If you’re early enough in the development cycle, that may be an option for you.
2. Add file handlers for all extension types that you may have. For example, add a handler for .0, another for .1, etc.
3. Use a URL rewriter like IIS URL Rewrite to watch for a particular pattern and append the trailing slash.

Let’s look at the second two options. I’ll mention up front that the URL Rewrite solution is probably the best bet, although I’ll cover both in case you prefer the handler method.

Http Handler Solution

You can handle specific extensions by adding a handler for each possible extension that you’ll support.
This maps to System.Web.UI.PageHandlerFactory so that ASP.NET MVC has a handle on it and you can create an MVC route for it. This would apply to ASP.NET, PHP, or other frameworks too.

C:\Windows\System32\inetsrv\appcmd.exe set config "Sitename" -section:system.webServer/handlers /+"[name='.0-PageHandlerFactory-Integrated-4.0',path='*.0',verb='GET,HEAD,POST,DEBUG',type='System.Web.UI.PageHandlerFactory',preCondition='integratedMode']" /commit:apphost

C:\Windows\System32\inetsrv\appcmd.exe set config "Sitename" -section:system.webServer/handlers /+"[name='.1-PageHandlerFactory-Integrated-4.0',path='*.1',verb='GET,HEAD,POST,DEBUG',type='System.Web.UI.PageHandlerFactory',preCondition='integratedMode']" /commit:apphost

Run these appcmd commands from the command prompt to create the handlers. Make sure to update “Sitename” with your own site name, or leave it off to make it a server-wide change. And you can update ‘*.0’ and ‘*.1’ to your extensions. If you do create the site-level rule, make sure to save your web.config back to your source control so that you don’t overwrite it on your next site update.

IIS URL Rewrite Solution

Probably the best solution, and the one that my friend used in this case, is to use URL Rewrite to add a trailing slash when needed. The advantage of this is that you can use a more general pattern to redirect the URL rather than a bunch of handlers for each specific extension. This assumes that you have IIS 7.0 or greater and that you have URL Rewrite installed. If you’re not familiar with URL Rewrite, check out the URL Rewrite articles on my blog (start with this one). Note: If you want, you can skip this section and jump right to the next section for an easier way to do this using URL Rew[...]
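A sketch of what such a trailing-slash rule might look like (the `release` segment and the version-number pattern are hypothetical stand-ins for the post’s elided path; adjust the regex to your own routes):

```xml
<rule name="Append trailing slash for dotted MVC segment" stopProcessing="true">
  <!-- Match URLs like release/1.0 whose last segment contains a dot. -->
  <match url="^(release/\d+\.\d+)$" />
  <!-- Rewrite to release/1.0/ so IIS treats it as a folder and MVC routing fires. -->
  <action type="Rewrite" url="{R:1}/" />
</rule>
```

Because this is a Rewrite rather than a Redirect, the URL in the browser stays dot-terminated while IIS processes the slash-terminated form.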

Windows 8 / IIS 8 Concurrent Requests Limit

Tue, 13 Nov 2012 22:14:00 GMT

IIS 8 on Windows Server 2012 doesn’t have any fixed concurrent request limit, apart from whatever limit would be reached when resources are maxed.

However, the client version of IIS 8, which is on Windows 8, does have a concurrent connection request limitation to limit high traffic production uses on a client edition of Windows.

Starting with IIS 7 (Windows Vista), the behavior changed from previous versions.  In previous client versions of IIS, excess requests would throw a 403.9 error message (Access Forbidden: Too many users are connected).  Instead, Windows Vista, 7 and 8 queue excess requests so that they are handled gracefully, although there is a maximum number of requests that will be processed simultaneously.

Thomas Deml provided a concurrent request chart for Windows Vista many years ago, but I have been unable to find an equivalent chart for Windows 8 so I asked Wade Hilmo from the IIS team what the limits are.  Since this is controlled not by the IIS team itself but rather from the Windows licensing team, he asked around and found the authoritative answer, which I’ll provide below.

Windows 8 – IIS 8 Concurrent Requests Limit

Windows 8 (Basic edition): 3
Windows 8 Professional, Enterprise: 10
Windows RT: N/A, since IIS does not run on Windows RT

Windows 7 – IIS 7.5 Concurrent Requests Limit

Windows 7 Home Starter: 1
Windows 7 Basic: 1
Windows 7 Premium: 3
Windows 7 Ultimate, Professional, Enterprise: 10

Windows Vista – IIS 7 Concurrent Requests Limit

Windows Vista Home Basic (IIS process activation and HTTP processing only): 3
Windows Vista Home Premium: 3
Windows Vista Ultimate, Professional, Enterprise: 10

Windows Server 2003, Windows Server 2008, Windows Server 2008 R2 and Windows Server 2012 allow an unlimited number of simultaneous requests.

URL Rewrite – Protocol (http/https) in the Action

Wed, 07 Nov 2012 14:04:02 GMT

IIS URL Rewrite supports server variables for pretty much every part of the URL and HTTP header. However, there is one commonly used value that isn’t readily available: the protocol—HTTP or HTTPS. You can easily check whether a page request uses HTTP or HTTPS, but that only works in the conditions part of the rule. There isn’t a variable available to dynamically set the protocol in the action part of the rule. What I wish for is a variable like {HTTP_PROTOCOL} with a value of ‘HTTP’ or ‘HTTPS’. There is a server variable called {HTTPS}, but its values of ‘on’ and ‘off’ aren’t practical in the action. You can also use {SERVER_PORT} or {SERVER_PORT_SECURE}, but again, they aren’t useful in the action.

Let me illustrate. A rule that redirects traffic to a hard-coded http:// URL forces the request to HTTP even if the original request was for HTTPS.

Interestingly enough, I planned to blog about this topic this week when I noticed in my Twitter feed yesterday that Jeff Graves, a former colleague of mine, just wrote an excellent blog post about this very topic. He beat me to the punch by just a couple of days. However, I figured I would still write my blog post on it. While his solution is an excellent one, I personally handle this another way most of the time. Plus, it’s a commonly asked question that isn’t documented well enough on the web yet, so having another article on the web won’t hurt.

I can think of four different ways to handle this, and depending on your situation you may lean towards any of the four. Don’t let the choices overwhelm you, though. Let’s keep it simple: Option 1 is what I use most of the time, Option 2 is what Jeff proposed and is the safest option, and Options 3 and 4 only need to be considered if you have a more unique situation. All four options will work for most situations.
Option 1 – CACHE_URL, single rule

There is a server variable that has the protocol in it: {CACHE_URL}. This server variable contains the entire URL string, including the scheme. All we need to do is extract the HTTP or HTTPS and we’ll be set. This tends to be my preferred way to handle this situation. Indeed, Jeff did briefly mention this in his blog post:

… you could use a condition on the CACHE_URL variable and a back reference in the rewritten URL. The problem there is that you then need to match all of the conditions which could be a problem if your rule depends on a logical “or” match for conditions.

Thus the problem. If you have multiple conditions set to “Match Any” rather than “Match All”, then this option won’t work. However, I find that 95% of all rules that I write use “Match All”, and therefore, being the lazy administrator that I am, I like this simple solution that only requires adding a single condition to a rule. The caveat is that if you use “Match Any” then you must consider one of the next two options.

Enough with the preamble. Here’s how it works. Add a condition that checks {CACHE_URL} with a pattern of “^(.+)://”. Now you have a back-reference to the part before the ://, which is our treasured HTTP or HTTPS. In URL Rewrite 2.0[...]
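A sketch of a complete rule using this technique (the www.example.com target is a placeholder; {C:1} carries ‘http’ or ‘https’ from the condition’s capture):

```xml
<rule name="Redirect keeping protocol" stopProcessing="true">
  <match url="^(.*)$" />
  <conditions>
    <!-- CACHE_URL contains the full URL, so this captures the scheme. -->
    <add input="{CACHE_URL}" pattern="^(.+)://" />
  </conditions>
  <action type="Redirect" url="{C:1}://www.example.com/{R:1}" redirectType="Permanent" />
</rule>
```

An HTTPS request stays on HTTPS after the redirect, and an HTTP request stays on HTTP.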

Reading a memory.dmp or other .dmp file

Wed, 18 Jul 2012 13:40:00 GMT

While the dreaded Blue Screen of Death (BSOD) occurs less frequently with newer versions of Windows than it did in years past, there are still times when the BSOD reveals itself.  I just ran into four BSOD’s on two Windows Server 2012 machines and I had the ‘opportunity’ to analyze a memory.dmp file today, so I thought I would post quick instructions on how to get a handy summary of the memory dump. I’ve had this ”I Found a Fix” debugging page bookmarked for years and I’ve used it many times, so I need to give full credit to ifoundafix for their helpful steps.  The only change I have below is to include updated paths. It’s possible to debug remotely, and you may have requirements to do that.  My quick instructions here are for local debugging.  The debugging tools are very stable and if you install just what you need then they are small and a quick install, so running this on a production machine is generally safe, but you must make that decision for your particular environment. This can be accomplished with 7 easy steps: Step 1. Obtain and install the debugging tools.  The links do change over time, but the following link is currently an exhaustive page which includes Windows Server 2012 and Windows 8 Consumer debugger tools, Windows 7, Vista, XP and Windows Server 2003. Debugging Tools Windows All you need to install is the “Install Debugging Tools for Windows as a Standalone Component (from Windows SDK)” and during the install only select "Debugging Tools for Windows".  Everything else is used for more advanced troubleshooting or development, and isn’t needed here.  Today I followed the link to “Install Debugging Tools for Windows as a Standalone Component (from Windows SDK)” although for a different OS you may need to follow a different link. Step 2. From an elevated command prompt navigate to the debugging folder. For me with the latest tools on Windows Server 2012 it was at C:\Program Files (x86)\Windows Kits\8.0\Debuggers\x64\.  
You can specify the path during the install.Step 3. Type the following: kd –z C:\Windows\memory.dmp (or the path to your .dmp file) Step 4. Type the following: .logopen c:\debuglog.txt Step 5. Type the following: .sympath srv*c:\symbols* Step 6. Type the following: .reload;!analyze -v;r;kv;lmnt;.logclose;q Step 7. Review the results by opening c:\debuglog.txt in your favorite text editor.  Searching for PROCESS_NAME: will show which process had the fault.  You can use the process name and other information from the dump to find clues and find answers in a web search.  Usually the fault is with a hardware drivers of some sort, but there are many things that can cause crashes so the actual analyzing of the dump may take some research. Often times a driver update will fix the issue.  If the summary information doesn’t offer enough information then you’ll need to dig further into the debugging tools or open a CSS case with Microsoft.  The steps above will provide you with a summary mostly-human-readable report from the dump.  There is much more information available in the memory dump although it gets exponentially more difficult to track down the details the further you get into windows debugging. Hopefully these quick steps are helpful for you as you troubleshoot the unwelcome BSOD.[...]
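For convenience, steps 3 through 6 can be collapsed into a single invocation using kd’s -c switch, which runs a semicolon-separated list of debugger commands at startup. This is a sketch of my own, not from the original steps; it assumes the default dump path, and it appends the Microsoft public symbol server URL (http://msdl.microsoft.com/download/symbols) to the symbol path:

```
kd -z C:\Windows\memory.dmp -c ".logopen c:\debuglog.txt; .sympath srv*c:\symbols*http://msdl.microsoft.com/download/symbols; .reload; !analyze -v; r; kv; lmnt; .logclose; q"
```

Afterwards, review c:\debuglog.txt just as in step 7.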

Last Day With OrcsWeb

Sat, 19 May 2012 03:29:20 GMT

It’s hard to believe that it’s been 10 years since my first day at OrcsWeb. Today is my last official day, but I’ll still be close by. I have a number of ties here, including being a customer through Vaasnet.

So much has changed in this time. Ten years ago I began working for OrcsWeb from Canada. Nine years ago I moved my family down here to North Carolina and assumed the role of Director of Technology.  I was able to be a part of the company as it grew in staff, servers, customers, and reputation. I feel honored to be a part of OrcsWeb during these exciting years.

During my time at OrcsWeb I have been given opportunities to attend conferences, meet and become friends with top technical experts in the field, write articles, co-author two books, and speak at conferences and code camps. It was through OrcsWeb that I was given opportunities to be active in the community, to become a Microsoft MVP and an ASPInsider.

I’m grateful to Brad and Karla Kingsley who have always treated me like more than an employee. They have always encouraged me to grow and to pursue my dreams.

I’m thankful to Jeff Graves, who has been accommodating to my evolving schedule and less-than-full-time availability. And in terms of technical smarts, Jeff tops the list! I’m also thankful for the rest of the team at OrcsWeb, who are experts in the field and with whom it’s always been a privilege to work.

Moving forward, I have two main focuses. I’ll be able to spend more time on Vaasnet (a company I co-founded with Jeff Widmer) to see the company position itself further in the market and to strengthen both the product and the brand. Additionally, I’m working on a part-time basis with Dynamicweb, an established CMS and eCommerce company in Europe that is just moving into the US market. Dynamicweb has a strong product already and I’m excited to work with the leadership team in the US. Expect to see more of Dynamicweb in the coming months and years.

I just want to reiterate a big thanks to OrcsWeb for helping write such an important chapter in my life. And it’s with excitement that I look forward to the next chapter of my life.

Introducing Testing Domain - localtest.me

Mon, 14 May 2012 15:59:54 GMT

Save this URL, memorize it, write it on a sticky note, tweet it, tell your colleagues about it: localtest.me


  • *.localtest.me (every subdomain) resolves to 127.0.0.1

If you do any testing on your local system you’ve probably created hosts file entries (c:\windows\system32\drivers\etc\hosts) for different testing domains and had them point back to 127.0.0.1. This works great but it requires just a bit of extra effort.
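Such hosts file entries typically look like this (the site names here are arbitrary examples):

```
127.0.0.1    site1.local
127.0.0.1    site2.local
```

Each entry has to be added, and later cleaned up, by hand on every machine you test from — that is the extra effort this trick removes.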

This trick is so obvious, so simple, and yet so powerful. I wouldn’t be surprised if there are other domain names like this out there, but I haven’t run across them yet, so I just ordered the localtest.me domain name, which I’ll keep available for the internet community to use.

Here’s how it works. The entire localtest.me domain name—and all wildcard entries—point to 127.0.0.1. So without any changes to your hosts file you can immediately start testing with a local URL.


You name it, just use any *.localtest.me URL that you dream up and it will work for testing on your local system.

This was inspired by a trick that Imar Spaanjaars introduced me to. He created a loopback wildcard URL with his company domain name.  I took this one step further and ordered a domain name just for this purpose.

I would have liked to order or but those domain names were taken. So to help you remember, just remember that it’s ‘localtest’ and not ‘localhost’, and it’s ‘.me’ rather than ‘.com’.

I can’t track usage since the domain name resolves to 127.0.0.1 and never passes through my servers, so this is just a public tool which I’ll give to the community. I hope it gets used. And, since I can’t really use the domain name to explain itself, please spread the word and tell others about it.

Some examples on how to use it would include:

  • Creating websites on your dev machine (e.g. site1.localtest.me, site2.localtest.me).
  • Great for URL Rewrite (IIS) or mod_rewrite (Apache) testing (e.g. rewrite.localtest.me, www.localtest.me).
  • Any testing on your local system where a friendly URL would be useful.
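To confirm the wildcard resolution, here is a quick check from PowerShell. This is a sketch that assumes internet DNS access; Resolve-DnsName ships with Windows 8 / Server 2012 and later, and the host name is an arbitrary example:

```powershell
# Any name under localtest.me should come back as the loopback address 127.0.0.1
Resolve-DnsName myapp.localtest.me
```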

I hope you enjoy!

Google and Geo-location, CDNs, DNS Load Balancing-Week 50

Fri, 20 Apr 2012 01:24:18 GMT

You can find this week’s video here.

This week’s video answers two Q&A questions from viewers: DNS load balancing, and then some discussion and a walkthrough of using Application Request Routing (ARR) for a Content Delivery Network (CDN).

There’s a growing movement towards Content Delivery Networks (CDNs), fronting web farms and geographically dispersing websites. This week I continue with Q&As from viewers, taking questions on DNS load balancing and CDNs.

Question 1:

I would love to see some clever DNS load balancing (not sure what capability windows is offering). Flesik

Question 2a:

I would love to see an end-2-end CDN redundant network setup (dns balancing, ARR nodes, parent nodes etc). Flesik

Question 2b:

I would be interested in seeing a series on building an ecdn or ecn using ARR and what the best practices would be to scale it out geographically.

It seems ARR is sold that way... really not sold but talked about. I have tried to put my theory out and try it but I just don't know the best way to route my clients to their designated locations. Can you help out with an awesome weblog maybe? Adam


The following URL is the one I mentioned in the video:

In this week’s video we look at DNS load balancing and geo-location issues that Google faces by using DNS to determine a user’s location. We also take a look at using Microsoft Application Request Routing (ARR) to create a CDN.

This is week 50 of a 52 week series for the web pro. You can view past and future weeks here:

You can find this week’s video here.

What’s new in IIS8, Perf, Indexing Service-Week 49

Tue, 27 Mar 2012 13:37:00 GMT

You can find this week’s video here.

This week I'm taking Q&A from viewers, starting with what's new in IIS 8, a question on enable32BitAppOnWin64, performance settings for ASP.NET, the ARR Helper, and Indexing Service.

This week we look at five topics.

We take a look at the new features in IIS8. Last week Internet Information Services (IIS) 8 Beta was released to the public. This week's video touches on the upcoming features in the next version of IIS. Here’s a link to the blog post which was mentioned in the video
Question 1:

In a number of places (,, I've seen that enable32BitAppOnWin64 is recommended for performance reasons. I'm guessing it has to do with memory usage... but I never could find a detailed explanation on why this is recommended (even Microsoft books are vague on this topic - they just say - do it, but provide no reason why it should be done). Do you have any insight into this? (Predrag Tomasevic)

Question 2:

Do you have any recommendations on modifying aspnet.config and machine.config to deliver better performance when it comes to "high number of concurrent connections"? I've implemented recommendations for modifying machine.config from this article ( - ASP.NET Process Configuration Optimization section)... but I would gladly listen to more recommendations if you have them. (Predrag Tomasevic)

Question 3:

Could you share more of your experience with the ARR Helper? I'm specifically interested in configuring the ARR Helper (for example, how to accept the X-Forwarded-For header only from certain IPs (proxies you trust)). (Predrag Tomasevic)

Question 4:

What is the replacement for indexing service to use in coding web search pages on a Windows 2008R2 server? (Susan Williams)

Here’s the link that was mentioned:

This is now week 49 of a 52 week series for the web pro. You can view past and future weeks here:

You can find this week’s video here.

What’s New in IIS 8

Thu, 01 Mar 2012 20:07:00 GMT

With the beta release of Windows Server 8 today, Internet Information Services (IIS) 8 is available to the public for testing and even production workload testing. Many system administrators have been anxious to kick the tires and find out which features are coming. I’ll include a high-level overview of what we will see in the upcoming version of IIS. The focus with this release of IIS 8 is on the large-scale hoster. There are substantial performance improvements to handle thousands of sites on a single server farm—with ease. Everything that I mention below is available for download and usage today.

Forgive me if there are typos. I’m writing this while at the MVP Summit in Seattle while trying to listen to another session at the same time. Thanks to the IIS team who gave detailed demos on this yesterday and gave me permission to talk about it.

Real CPU Throttling

Previous versions of IIS have CPU throttling, but it doesn’t do what most of us want. When a site reaches the CPU threshold, the site is turned off for a period of time before it is allowed to run again. This protects the other sites on the server, but it isn’t a welcome action for the site in question since the site breaks rather than just slowing down. Finally, in IIS 8 there are kernel-level changes to support real CPU throttling. There are two new actions for sites that reach the CPU threshold: Throttle and Throttle under load. If you used WSRM to achieve this in the past, you no longer need to do so, and the functionality is improved over what is available with WSRM.

The Throttle feature will keep the CPU for a particular worker process at the level specified. Throttling isn’t applied to just the primary worker process; it also includes all child processes, if they happen to exist. The Throttle under load feature will allow a site to use all possible CPU if it’s available, while throttling the worker process if the server is under load.
The throttling is based on the user and not specifically on the application pool. This means that if you use a dedicated user on more than one app pool, then it throttles all app pools sharing the same user identity. Note that the application pool identity user is unique, so if you use the app pool identity user—which is common—then each app pool will be throttled individually. This is a welcome new feature and is nicely implemented.

SSL Scalability

Unless you deal with large-scale site hosting with many SSL certificates, you may not have realized that there is room for improvement in this area. Previous versions of IIS have limited secure site density. Each SSL site requires its own IP address, and after adding a few SSL sites, startup performance becomes slow and the memory demand is high. Every certificate is loaded into memory on the first visit to an SSL site, which creates a large memory footprint and a long delay on the first load. In IIS 8 the SSL certificate count easily scales to thousands of secure sites per machine with almost instantaneous first loads. Only the certificate that is needed is loaded, and it unloads after a configurable idle period. Additionally, enumerating or loading huge numbers of certificates is substantially improved.

SNI / SSL Host Header Support

Using host headers and a shared IP address with an SSL certificate has always been problematic. IIS 8 now offers Server Name Indication (SNI) support, which allows many SSL sites to share the same IP. SNI is a fairly new feature ([...]
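As a rough sketch of how the new actions surface in configuration, here is what the cpu settings of an application pool could look like in applicationHost.config. The pool name and limit value are placeholder examples of my own; per the IIS schema, the limit attribute is expressed in 1/1000ths of a percent of CPU:

```xml
<applicationPools>
  <add name="ExamplePool">
    <!-- Keep this pool's worker processes at or below 30% CPU -->
    <cpu limit="30000" action="Throttle" />
    <!-- Alternative: only throttle while the server is under load
    <cpu limit="30000" action="ThrottleUnderLoad" />
    -->
  </add>
</applicationPools>
```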

IIS FTP Troubleshooting-Week 48

Tue, 21 Feb 2012 15:58:57 GMT

You can find this week’s video here.

This lesson covers ways to troubleshoot IIS FTP. When it works, it works well, but if you run into issues getting an FTP account working it can sometimes be difficult to resolve. This video will help you understand some helpful tricks and it will walk you through ways to isolate and resolve the issue.

Over the last five weeks we’ve been looking at IIS FTP. See the list below to jump to a specific FTP topic.  This week we explore some troubleshooting techniques and review the following FTP connectivity stack.

  • DNS Resolution/Network Connectivity
  • Firewall Access (Passive/Active / Secure?)
  • IIS Bindings
  • Authentication
  • Authorization
  • Isolation Mode / File paths
  • NTFS Permissions
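The first couple of layers of that stack can be checked quickly from PowerShell before digging into IIS itself. This is a sketch; ftp.example.com is a placeholder host name, and both cmdlets require Windows 8 / Server 2012 or later:

```powershell
# DNS resolution / network connectivity
Resolve-DnsName ftp.example.com

# Firewall access: is the FTP control channel (port 21) reachable?
Test-NetConnection ftp.example.com -Port 21
```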

There were two external resources which I referenced. They are:

This is now week 48 of a 52 week series for the web pro, and it is the final week of a 5-week mini-series on IIS FTP. The five weeks include:

You can find this week’s video here.

FTP Firewall Settings, Active vs. Passive, and FTPS Explicit vs. Implicit-Week 47

Mon, 13 Feb 2012 14:28:48 GMT

You can find this week’s video here.

Have you ever wondered what FTP Active mode or Passive mode means? Do you have a good understanding of the FTP data channel or control channel? It can be difficult to fully understand FTP, which firewall ports to enable, and how to navigate the two communication channels. This lesson will hopefully clear up these questions and more.

This week’s video lesson takes a deep dive into FTP Active vs. Passive modes. As part of this you’ll get a chance to see the various modes in action, see what the traffic looks like in Wireshark, see exact firewall rules, learn about stateful FTP, find out about Explicit FTPS and Implicit FTPS, and learn about the FTP data channel and control channels.

This week's video lesson is the 4th of a 5-week mini-series on IIS FTP. The five weeks include:

  • Week 1: IIS FTP Basics
  • Week 2: IIS FTP and IIS Manager Users
  • Week 3: IIS FTP and User Isolation
  • Week 4: IIS FTP Firewall settings, Active vs. Passive
  • Week 5: IIS FTP Troubleshooting plus FTP Host Headers

This is now week 47 of a 52 week series for the web pro, and the 4th of a 5 week mini-series on IIS FTP. You can view past and future weeks here:

You can find this week’s video here.

Flush IIS HTTP and FTP Logs to Disk

Sat, 04 Feb 2012 04:11:13 GMT

Today I wanted to find a way to flush the IIS FTP logs on demand. The logs for IIS FTP flush to disk every 6 minutes, and the HTTP logs every 1 minute (or 64 KB). This can make troubleshooting difficult when you don’t have immediate access to the latest log data.

After looking everywhere I could think of, from search engine searches to perusing the IIS schema files, I figured I had better go to the source and ask Robert McMurray. Sure enough, Robert had the answer and even wrote a blog post in response to my question with code examples for four scripting/programming languages (C#, VB.NET, JavaScript, VBScript).

There is not a netsh or appcmd solution though, so the scripting or programming options are the way to do it. Actually, you can also flush the logs by restarting the Microsoft FTP Service (ftpsvc) but, as you would assume, it will impact currently active FTP sessions.

This blog post serves three purposes:

  • It’s a reference pointing to Robert’s examples
  • I’ll include how to do the same for the HTTP logs
  • I’ll provide a PowerShell example which I based on Robert’s examples

1. The reference is mentioned above already, but to give me something useful to write in this paragraph, I’ll include it again: Programmatically Flushing FTP Logs.

2. For HTTP there is a method to flush the logs using netsh.
netsh http flush logbuffer

This will immediately flush the HTTP logs for all sites.

3. The FTP logs can be done from PowerShell too. Here’s a script which is the PowerShell equivalent of Robert’s examples. Just update $siteName, or pass it as a parameter to the script.

```powershell
Param($siteName = "Default Web Site")

# Load the Microsoft.Web.Administration (MWA) ServerManager
[System.Reflection.Assembly]::LoadFrom("C:\windows\system32\inetsrv\Microsoft.Web.Administration.dll") | Out-Null
$serverManager = New-Object Microsoft.Web.Administration.ServerManager

$config = $serverManager.GetApplicationHostConfiguration()

# Get the sites collection
$sitesSection = $config.GetSection("system.applicationHost/sites")
$sitesCollection = $sitesSection.GetCollection()

# Find the site
foreach ($item in $sitesCollection) {
    if ($item.Attributes.Item("Name").Value -eq $siteName) {
        $site = $item
    }
}

# Validation
if ($site -eq $null) {
    Write-Host "Site '$siteName' not found"
    return
}

# Flush the logs
$ftpServer = $site.ChildElements.Item("ftpServer")
if (!($ftpServer.ChildElements.Count)) {
    Write-Host "Site '$siteName' does not have FTP bindings set"
    return
}
$ftpServer.Methods.Item("FlushLog").CreateInstance().Execute()
```

I hop[...]