Thu, 14 Feb 2008 14:22:00 GMT
Recently, I've been doing a lot of work with streaming under an IIS-hosted WCF service (using the BasicHttpBinding). I ran into a rather strange issue that appears to be an oversight deep inside the internals of WCF. Streaming is typically enabled on a binding when you anticipate sending large objects across the wire and don't want to buffer up the entire message prior to sending. When streaming mode is enabled, the headers of the message are buffered while the body is exposed as a stream and sent across the wire. When you host your service inside a console app or Windows service, everything works as expected (the body is indeed streamed, with no buffering at all). Now comes the issue...
When you host your service under IIS, whether or not you enable streaming, your service will buffer the entire message prior to sending it. The reason appears to be that WCF does not set Response.BufferOutput to false (the default is true) when streaming is enabled on a service. In my opinion this is an oversight that could be rectified in the framework code. The good news is that there is a way around the issue:
Since we want to set Response.BufferOutput to false, we need to get at the HttpContext. The flexibility of WCF comes to our aid here with the ability to enable ASP.NET compatibility mode. There are three changes we need to make to the service to work around this issue:
1) Add the following attribute to your service class: [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
2) Enable compatibility mode in web.config by setting aspNetCompatibilityEnabled="true" on the serviceHostingEnvironment element.
3) Inside the default constructor for your service, or wherever you want to set the BufferOutput property, add this code:
HttpContext httpContext = HttpContext.Current;
if (httpContext != null)
{
    httpContext.Response.BufferOutput = false;
}
What we've done here is get access to the HttpContext and manually set the flag. It isn't pretty, but it works around the problem.
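For completeness, here's a sketch of the relevant configuration. The binding name and message size are illustrative, not part of the original post:

```xml
<system.serviceModel>
  <!-- Required for AspNetCompatibilityRequirementsMode.Allowed to take effect -->
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
  <bindings>
    <basicHttpBinding>
      <!-- Streamed transfer mode; raise maxReceivedMessageSize for large payloads -->
      <binding name="streamedBinding"
               transferMode="Streamed"
               maxReceivedMessageSize="67108864" />
    </basicHttpBinding>
  </bindings>
</system.serviceModel>
```

Note that transferMode can also be StreamedRequest or StreamedResponse if you only need streaming in one direction.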
Thu, 21 Jun 2007 23:19:00 GMT
I learned yet another binding configuration attribute today that plays a rather important role if you have a service that could accept a large number of concurrent calls. By concurrent, I mean dozens of calls that come in at the same time. The standard throttles in this scenario are of course important: maxConnections on the binding and the various service model throttles (maxConcurrentCalls, maxConcurrentSessions, maxConcurrentInstances). There is, however, one attribute you may not be aware of that, if not set correctly, will cause your clients to throw a rather nondescript exception like:
"The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:00:59.4383964'."
After spending the better part of a day changing settings and staring at trace logs, I posted an example application demonstrating this behavior in the MSDN forums. In the end, the solution to the exception was setting the listenBacklog attribute on the binding. What does it do? From MSDN: "ListenBacklog is a socket-level property that describes the number of 'pending accept' requests to be queued. Ensure that the underlying socket queue is not exceeded by the maximum number of concurrent connections." So, ensure that you have this set greater than or equal to maxConcurrentCalls.
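As a sketch of what that looks like in config (the names and values here are illustrative; the binding shown is netTcpBinding, since listenBacklog is a socket-level setting):

```xml
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <!-- listenBacklog should be >= maxConcurrentCalls below -->
      <binding name="highLoadBinding"
               listenBacklog="200"
               maxConnections="200" />
    </netTcpBinding>
  </bindings>
  <behaviors>
    <serviceBehaviors>
      <behavior name="highLoadThrottle">
        <serviceThrottling maxConcurrentCalls="200"
                           maxConcurrentSessions="200"
                           maxConcurrentInstances="200" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```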
Hope that helps someone else staring at a trace log that doesn't indicate any problem with the service config. Hopefully at some point a more descriptive exception will be thrown when you hit this binding throttle.
Tue, 02 May 2006 22:24:00 GMT
While working on a decent-sized Windows Forms app in C#, I've seen a sporadic issue crop up that is incredibly difficult to debug. The form simply has a ListView on it that is populated from a worker thread. The calls are all done with Invoke, on the UI thread. But every once in a while (it's sporadic), .NET throws this:
Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object.
at System.Windows.Forms.ListViewItemCollection.get_Item(Int32 displayIndex)
at System.Windows.Forms.ListView.OnHandleDestroyed(EventArgs e)
at System.Windows.Forms.Control.WmDestroy(Message& m)
at System.Windows.Forms.Control.WndProc(Message& m)
at System.Windows.Forms.ListView.WndProc(Message& m)
at System.Windows.Forms.ControlNativeWindow.OnMessage(Message& m)
at System.Windows.Forms.ControlNativeWindow.WndProc(Message& m)
at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
at System.Windows.Forms.ComponentManager.System.Windows.Forms.UnsafeNativeMethods+IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData)
at System.Windows.Forms.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
at System.Windows.Forms.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
at System.Windows.Forms.Application.RunDialog(Form form)
at System.Windows.Forms.Form.ShowDialog(IWin32Window owner)
I can't figure out where this is coming from; there's no reference to anything in the app, it's all in the framework.
Anyone seen this?
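The stack trace suggests the ListView's handle is being destroyed (the form closing?) while the worker thread is still posting updates. As a purely hypothetical defensive pattern, assuming that race is the cause (the method and its checks are my own sketch, not anything from the app in question):

```csharp
// Sketch: marshal a ListView update to the UI thread defensively,
// skipping the update if the control is being torn down.
private void AddItemSafe(ListView listView, string text)
{
    if (listView.InvokeRequired)
    {
        // Re-check the handle before invoking; the form may be mid-close.
        if (listView.IsHandleCreated && !listView.IsDisposed)
        {
            listView.BeginInvoke((MethodInvoker)delegate
            {
                // Check again on the UI thread; disposal may have raced us.
                if (!listView.IsDisposed)
                    listView.Items.Add(new ListViewItem(text));
            });
        }
    }
    else
    {
        listView.Items.Add(new ListViewItem(text));
    }
}
```

This doesn't explain the framework-internal NullReferenceException, but it may narrow down whether the crash correlates with handle destruction.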
Thu, 18 Aug 2005 18:01:00 GMT
So, with Microsoft's help, we finally figured out the problem a customer was having. As posted below, the customer had random errors involving a temp file generated by .NET (confirmed by Microsoft). We ended up using Filemon to log disk access while the error occurred, and the culprit was FreeTextBox. We don't use FreeTextBox (although it is a great editor), but somehow it was in the bin directory on the client's server. After we removed that DLL the problem went away. If we had been using it, the solution would probably have been to add it to the GAC. If you ever run into this, use the method above: it's more than likely permissions, and Filemon from Sysinternals will show you pretty quickly what's going on.
Thu, 21 Jul 2005 14:15:00 GMT
We've recently had a customer contact us about an obscure error having to do with a dependency/assembly not being found. It looks like it's a temp file produced by .NET during compilation that for some reason isn't found during or afterwards. I've dug around and found a couple of proposed solutions to this issue, neither of which worked for the client. One was to disable the Indexing Service so that no read locks would be taken on the temp directory the .NET framework uses, and the other was to register the DLLs in the GAC. Problem is, the DLL or assembly it's talking about doesn't exist in the application; it's a temp file. The error is below, and if anyone has any ideas I'm all ears.
File or assembly name nwyqbbug, or one of its dependencies, was not found.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.IO.FileNotFoundException: File or assembly name nwyqbbug, or one of its dependencies, was not found.
Assembly Load Trace: The following information can be helpful to determine why the assembly 'nwyqbbug' could not be loaded.
=== Pre-bind state information ===
LOG: DisplayName = nwyqbbug, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null
LOG: Appbase = file:///D:/wwwroot/Forums/Web/Forum
LOG: Initial PrivatePath = bin
Calling assembly : omgp_cyn, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null.
LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind).
LOG: Post-policy reference: nwyqbbug, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null
LOG: Attempting download of new URL file:///C:/WINDOWS/Microsoft.NET/Framework/v1.1.4322/Temporary ASP.NET Files/web_forum/235e9bde/1539afa3/nwyqbbug.DLL.
LOG: Attempting download of new URL file:///C:/WINDOWS/Microsoft.NET/Framework/v1.1.4322/Temporary ASP.NET Files/web_forum/235e9bde/1539afa3/nwyqbbug/nwyqbbug.DLL.
Thu, 16 Jun 2005 13:56:00 GMT
Joel over at Joel on Software threw out some interesting commentary about how life at the FogBugz palace is, compared to the dismal rags over at Microsoft *laugh*. Scoble responded today with a colorful synopsis of just how bad life is at Microsoft. I don't know about Joel's developers, but at home I sit in front of a 22" Apple Cinema Display and use a dual 3.6GHz Xeon with 2GB of memory, a 15K SCSI drive, and a Quadro FX 3400.
On a side note, how does bug tracking software get billed as "painless project management"? I've always wondered how you could classify a great bug tracking system as a project management system. They may intertwine, but I don't see how you can micro-manage a project through bugs. There is a lot more to delivering software than stamping out bugs...
Tue, 10 May 2005 16:09:00 GMT
Thu, 28 Apr 2005 14:49:00 GMT
I've been doing some research on CSS architecture/standards and have come up rather dry. After spending a few hours hammering Google with various keywords, I can't find a single entity publishing guidelines or standards for CSS development. By that I mean CSS architecture as it pertains to UI development, not the standards of the specification itself. For example, is it best to use ID declarations for objects, or classes, etc.? Has anyone seen or read anything that talks about UI architecture using CSS?
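To illustrate the kind of architectural question I mean, here's a hypothetical sketch of the two approaches (selector names are made up for the example):

```css
/* An ID selector targets exactly one element per page:
   appropriate for unique page structure? */
#site-header {
    background: #336;
    color: #fff;
}

/* A class selector can be reused on any number of elements:
   appropriate for repeated presentation? */
.warning {
    color: #c00;
    font-weight: bold;
}
```

Whether structure belongs in IDs and skinning in classes (or some other split entirely) is exactly the kind of guidance I can't find published anywhere.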
Thu, 21 Apr 2005 18:45:00 GMT
It's been a ton of last-minute benchmarking, but well worth it. Dual core is set to make huge tracks in the server market, with impressive scaling. Uniprocessor server configurations may well rise in popularity, and dual processor database servers will essentially perform like a 4P system. Intel is still setting their dual core server launch for Q1 2006. In the review we had a chance to benchmark Intel's latest quad Xeon servers based on their new E8500 chipset; the system we had required 208V, a bit unusual in our experience but apparently doable at most NOCs, so we had to wire a 240V circuit in the lab. The article covers both desktop and server performance, with workstation coverage on the way. The server content is near the beginning and covers web & SQL performance.
Tue, 19 Apr 2005 20:04:00 GMT
After digging a bit deeper this week into the built-in Membership/Roles functionality, it seems that although it is quite extensive, there is a weakness in using it for a more robust security model. Maybe I've missed something, but here goes:
Let's say a sample application has roles entitled Manager and Employee. Within my application I can now say: if the user is in role X, show/do this. Now, let's say you wanted to attach attributes to the "Manager" role. The "Manager" role can do the following fictitious tasks: Create Users, Delete Users, Update Website, Add Document; essentially you're creating a group with various permissions.
From what I see there is no more depth beyond the Role...
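To make the gap concrete, here's a sketch. The first check is the real ASP.NET 2.0 Roles API; the second is the kind of call I'd want, which (as far as I can tell) does not exist in the framework:

```csharp
// What the built-in API gives you: a flat role check.
if (Roles.IsUserInRole("Manager"))
{
    // ... allowed to do everything "Manager" implies, all or nothing
}

// What I'm after (hypothetical API, NOT part of the framework):
// permissions attached to a role.
//
// if (RoleHasPermission("Manager", "DeleteUsers"))
// {
//     // ... allowed to perform just this task
// }
```

As it stands, you'd have to model each fine-grained permission as its own role, or roll your own permission store alongside the provider.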
Tue, 19 Apr 2005 15:16:00 GMT
After installing Beta 2 yesterday, I fired up our application and started working through some code. After verifying some layout code in IE, I launched Firefox and pointed it at the URL for the built-in web server within Visual Studio. To my surprise, I was greeted with "HTTP Error 403 - Forbidden". Ruh?
Previous copies of Visual Studio didn't behave in this manner, so I dug a bit into the Website options within Visual Studio to find that Microsoft has now enabled NTLM authentication by default for the built-in web server. Why? I'm developing on my local PC; why would I want to lock my local web server down with NTLM? Granted, the option should be there, but disabled by default. Anyway, for those wanting to disable it, just go to Website --> Start Options and uncheck NTLM Authentication.
Mon, 18 Apr 2005 13:25:00 GMT
It was fairly clear Macromedia was looking to be acquired; it was just a matter of time. Adobe is a bit of a surprise, since the two companies have been at each other's throats over the years. Judging by the article, it is an acquisition for Flash, and maybe Flex. Hard to say what the future of ColdFusion and some of their other products will be, but the article makes no mention of them. Thoughts?
Wed, 30 Mar 2005 20:34:00 GMT
When CSS1 first reared its ugly head in 1996, it was probably one of the most controversial changes in Web UI history. Over the years we've all come to appreciate, to some extent, the intended purpose of CSS. In most cases it does what was intended and does it well. But even after 9 years we're still up to our ears in browser incompatibilities and, until CSS 2 is fully adopted everywhere, stuck with an unintuitive way of creating table-style layouts in CSS. We all know how well floating divs work in CSS 1 *cough*. How can it take 9 years to replace (whatever you want to call it) a simple tag-based language (HTML)?
All too often I've run into the "Purist" movement and been given the proverbial speech "Don't use