
Miscellaneous Debris

Avner Kashtan's Frustrations and Exultations


When Not-True Doesn’t Equal False

Sun, 21 Feb 2016 06:36:23 GMT

There is a strangely common, recurring bug I’ve seen again and again in most projects I’ve worked on in recent years. I call it the “Not True != False” bug. It usually crops up when passing query parameters to a search service of some sort: you want to search for entities that match a certain condition, and you end up not being able not to search by that condition.

Confusing? Let’s back up for a second.

[Continued on Bits and Pieces, the new location of my blog]

Attaching on Startup

Thu, 23 Jan 2014 07:21:35 GMT

Here’s a neat trick I stumbled onto that can make life a lot easier – on our development workstations, certainly, and even in a pinch on staging and testing machines: how many times have you wished you could attach to a process immediately when it launches?

If we’re running a simple EXE, we can launch it from the debugger directly, but if we’re trying to debug a Windows Service, a scheduled task or a sub-process that is launched automatically, it isn’t that simple. We can try to press CTRL-ALT-P as quickly as possible, but that will almost always miss the very beginning. We can add a call to System.Diagnostics.Debugger.Launch() to our Application_Startup() function (or equivalent), but that pollutes production code with debugging statements, and it isn’t really something we can send over to a staging or QA machine.

I was all set to write a tool to monitor new processes being launched and latch onto them when I discovered that there’s no need – Windows provides me with this facility automatically! All we need to do is create a new registry key under HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Image File Execution Options and name it after the EXE we want to attach to. Under this key, we create two values: a DWORD named PauseOnStartup with a value of 1, and a string named Debugger, with a value of vsjitdebugger.exe. Here’s a sample .REG file:
Code Snippet
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\MyApplication.exe]
"PauseOnStartup"=dword:00000001
"Debugger"="vsjitdebugger.exe"
  Now, when MyApplication.exe launches, it will automatically launch the familiar “Choose the debugger to use” dialog, and we can point it to our solution of choice.

Not-So-Lazy Static Constructors in C# 4.0

Mon, 25 Feb 2013 09:55:04 GMT

A coworker pointed me towards an interesting blog post by Jon Skeet about the changed behavior of static constructors in C# 4.0 (yes, I know, it’s been a few years now, but I never ran into it). It seems that C# 4.0 tries to be lazier when initializing static fields. So if I have a side-effect-free static method that doesn’t touch any members, calling it will not bother initializing the static fields. From Skeet’s post:

class Lazy
{
    private static int x = Log();

    private static int Log()
    {
        Console.WriteLine("Type initialized");
        return 0;
    }

    public static void StaticMethod()
    {
        Console.WriteLine("In static method");
    }
}

Calling StaticMethod() will print “In static method”, but will not print “Type initialized”! This was a bit worrying. Jon Skeet said it shouldn’t impact existing code, but we were not so sure. Imagine a class whose static constructor does some unmanaged initialization work (creates a Performance Counter, for instance) while the static method writes to the counter. This could potentially cause a hard-to-find exception, since our expectation was that static constructors can be relied upon to always be called first. So we ran a quick test (and by “we” I mostly mean Amir), and it seems that this behavior isn’t as problematic as we thought. Look at this code, adding a static constructor to the class above:

class Lazy
{
    private static int x = Log();

    private static int Log()
    {
        Console.WriteLine("Type initialized");
        return 0;
    }

    static Lazy()
    {
        Console.WriteLine("In static constructor");
    }

    public static void StaticMethod()
    {
        Console.WriteLine("In static method");
    }
}

The only difference is that we have an explicit static constructor, rather than the implicit one that initializes the field.

Unlike the first test case, here calling StaticMethod() did call the static constructor, and we could see “In static constructor” printed before “In static method”. The compiler is smart enough to see that we have an explicit constructor defined, so that means we want it to be called, and it will be called. This was reassuring. But wait, there’s more! It seems that once type initialization was triggered by the presence of the explicit static constructor, it went all the way. Even the x field was initialized, the Log() method was called, and “Type initialized” was printed to the console, even before the static constructor. This is the behavior I was used to, where static field initializations are added to the beginning of the .cctor.

To summarize, the new lazy type-initialization behavior in C# 4.0 is interesting, since it allows static classes that contain only side-effect-free methods (for instance, classes containing popular Extension Methods) to avoid expensive and unnecessary initializations. But it was designed smartly enough to recognize when initialization is explicitly desired, and to be a bit less lazy in that case. (And thanks again to Igal Tabachnik and Amir Zuker)
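Under the hood, the distinction the compiler makes is visible in metadata: a type with only field initializers is marked with the beforefieldinit flag, which permits the relaxed, lazy initialization, while an explicit static constructor suppresses the flag and restores the strict before-first-access guarantee. Here is a small sketch of my own (not from Skeet’s post) that inspects the flag via reflection:

```csharp
using System;
using System.Reflection;

class ImplicitInit
{
    // Field initializer only - the compiler emits beforefieldinit.
    public static int X = 42;
}

class ExplicitCctor
{
    public static int X = 42;

    // An explicit static constructor suppresses beforefieldinit.
    static ExplicitCctor() { }
}

static class BeforeFieldInitDemo
{
    public static bool IsBeforeFieldInit(Type t) =>
        (t.Attributes & TypeAttributes.BeforeFieldInit) != 0;

    static void Main()
    {
        Console.WriteLine(IsBeforeFieldInit(typeof(ImplicitInit)));  // True
        Console.WriteLine(IsBeforeFieldInit(typeof(ExplicitCctor))); // False
    }
}
```

Checking the flag this way is a quick sanity test when you are unsure whether a given type will be initialized lazily.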

Do you still need VB.NET for programming against Office?

Tue, 29 Jan 2013 20:37:17 GMT

It’s been a few years since I worked with Visual Basic .NET, and I was sure that by now, in 2013, the “which language is better” arguments were over. Both are languages with similar capabilities, and the choice between them is mostly a stylistic preference. But every now and then I still see a question in forums or on Stack Overflow about the advantages of one language over the other, and specifically about VB’s COM Interop capabilities. Is it really the more convenient language for that?

The situation before .NET 4.0

Before .NET 4.0 (which was released, let’s remember, almost 3 years ago), the answer was “yes”, though a somewhat strained “yes”. VB came from the world of late binding, and had no problem passing a component only some of the parameters it needs, taking care of the defaults for the rest behind the scenes. This advantage was especially noticeable when working with Office objects, which contain methods that take five, ten, sometimes sixteen optional parameters, when in most cases we only want to pass one or two of them, if any. The difference between C# and VB in such a case is the difference between this code in C#:

myWordDocument.SaveAs(ref fileName, ref missingValue,
                 ref missingValue, ref missingValue,
                 ref missingValue, ref missingValue,
                 ref missingValue, ref missingValue,
                 ref missingValue, ref missingValue,
                 ref missingValue, ref missingValue,
                 ref missingValue, ref missingValue,
                 ref missingValue, ref missingValue);

and this in VB:

MyWordDocument.SaveAs(FileName: fileName)

which is, you have to admit, not a small difference. On the other hand, not every COM interface works this way. To tell the truth, in all my years I think I’ve run into this style only in the Office libraries, and not in any other component, whether Microsoft’s (like the DirectShow components I worked with recently) or other companies’. In most cases, COM Interop in C# is just as simple as in VB.

.NET 4.0 – C# gets some attention too

This small advantage of VB became even less relevant as of C# 4.0 and Visual Studio 2010, for two reasons. The first is the magic word added to the language: dynamic. When we declare an object as dynamic, we give up much of the static type checking the compiler does for us, and let the late-binding mechanism (grafted onto C# via the DLR, the Dynamic Language Runtime built to support dynamic languages like Python or Ruby) do the work for us at runtime. Since there is late binding, we can ignore the redundant parameters and let the DLR figure out what to wire where. You can find a general explanation of using dynamic in this article, and a more specific explanation of using it for COM Interop in this article. But if we were considering moving to VB just to work with Office components, C# 4.0 lets us make life easier without diving into the murky world of dynamic languages. We still can’t ignore ref parameters in methods of classes, but we can [...]
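The point the feed cuts off above is C# 4.0’s other relevant feature: named and optional parameters. A minimal sketch of my own (FakeDocument.SaveAs is a stand-in for the Office method, not the real Word API):

```csharp
using System;

class FakeDocument
{
    // Hypothetical stand-in for Word's SaveAs: most parameters are optional,
    // so callers no longer need to spell out every "missing" argument.
    public string SaveAs(string fileName = "Untitled.docx",
                         bool readOnly = false,
                         string password = null)
    {
        return $"{fileName} (readOnly={readOnly})";
    }
}

static class NamedParamsDemo
{
    static void Main()
    {
        var doc = new FakeDocument();
        // C# 4.0: pass only the parameter we care about, by name - VB style.
        Console.WriteLine(doc.SaveAs(fileName: "Report.docx"));
    }
}
```

With optional parameters on the interop side (and the “omit ref” relaxation C# 4.0 added for COM calls), the Office call shrinks to roughly the one-liner VB always had.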

Remote Debugging through fire, snow or fog

Thu, 19 Jul 2012 10:23:10 GMT

Remote Debugging in Visual Studio is a wonderful feature, especially during the later stages of testing and deployment, and even (if all else fails) in production, but getting it to work is rarely smooth. Everything is fine if both the computer running VS and the one running the application are in the same domain, but when they aren’t, things start to break, fast.

So for those stuck debugging remote applications in different domains, here’s a quick guide to ease the pain.

  1. On the remote machine, install the Visual Studio Remote Debugging package, which comes with the version of Visual Studio you’re using.
  2. On the remote machine, create a new local user account. Give it the exact same name and password that you use on your development machine.
    If you’re developing when logged into MYCORPDOMAIN\MyUserName, with password MYPASS123, you will have to create a local user REMOTEMACHINE\MyUserName with the same password. It doesn’t matter that the domains are different and there’s no trust relationship and so forth. Just have them be the same username and password.
  3. Give REMOTEMACHINE\MyUserName permissions. Administrator permissions are the safest; you should be able to get it to work with a more restricted group, but I haven’t checked that yet.
  4. Run the application you want to debug using the MyUserName credentials. In Windows 7, this means using Shift-Rightclick and choosing Run As Different User. For Vista, there’s a shell extension to enable it.
  5. Run the Remote Debugging program, again under MyUserName credentials, same as above. The Remote Debugger will start with a new session named MyUserName@REMOTEMACHINE.
  6. Copy the remote debugging session name from the remote machine (you can copy it via the Tools –> Options menu).
  7. In your development machine, open Visual Studio, and go to Debug –> Attach to Process.
  8. In the Attach to Process screen, paste the remote debugging session name into the Qualifier textbox (the second one from the top).
  9. Voila! You are now debugging remotely!

ArcGIS–Getting the Legend Labels out

Tue, 27 Mar 2012 08:18:19 GMT

Working with ESRI’s ArcGIS package, especially the WPF API, can be confusing. There’s the REST API, the SOAP APIs, and the WPF classes themselves, which expose some web service calls and information, but not everything. With all that, it can be hard to find specific features between the different options. Some functionality is handed to you on a silver platter, while some is maddeningly hard to implement.

Today, for instance, I was working on adding a Legend control to my map-based WPF application, to explain the different symbols that can appear on the map. ESRI’s own map-editing tools show the legend one way [screenshot lost in the feed], but the Legend control supplied out of the box by ESRI looks different [screenshot lost in the feed]: very pretty, but unfortunately missing the option to display the names of the fields that make up the symbology. Luckily, the WPF controls have a lot of templating/extensibility points that allow you to specify the layout of each field [XAML template omitted in the feed], but that only replicates the same built-in behavior. I could now add any additional fields I liked, but unfortunately, I couldn’t find them as part of the Layer, GraphicsLayer or FeatureLayer definitions. This is where ESRI’s lack of organization is noticeable, since I can see this data easily when accessing the ArcGIS Server’s web interface, but I had no idea how to find it in the built-in classes. Is it part of Layer? Of LayerInfo? Of the LayerDefinition class that exists only in the SOAP service? As it turns out, neither. Since these fields are used by the symbol renderer to determine which symbol to draw, they’re actually part of the layer’s Renderer.

Since I already had a MyFeatureLayer class derived from FeatureLayer that added extra functionality, I could just add this property to it:

public string LegendFields
{
    get
    {
        if (this.Renderer is UniqueValueRenderer)
        {
            return (this.Renderer as UniqueValueRenderer).Field;
        }
        else if (this.Renderer is UniqueValueMultipleFieldsRenderer)
        {
            var renderer = this.Renderer as UniqueValueMultipleFieldsRenderer;
            return string.Join(renderer.FieldDelimiter, renderer.Fields);
        }
        else return null;
    }
}

For my scenario, all of my layers used symbology derived from a single field or, as in the examples above, from several of them. The renderer even kindly supplied me with the comma to separate the fields with. Now it was a simple matter to template the Legend control – assuming that it was bound to a collection of MyFeatureLayer [XAML omitted in the feed] – and get the look I wanted: the list of fields below the layer name, indented.[...]

The Case of the Unexpected Expected Exception

Thu, 01 Mar 2012 19:20:00 GMT

“NUnit is being problematic again”, they told me when I came to visit the project. “When running unattended it’s not catching assertions properly and the test comes up green, but when stepping through in the debugger, it works fine.” It’s nice when a passing test is acknowledged as a bad thing – at least when you don’t expect it to pass. In this case, though, the fault wasn’t really with NUnit.

  [Test]
  [ExpectedException]
  public void DoTheTest()
  {
      _myComponent.RunMethod();
      Assert.IsFalse(_myComponent.EverythingIsFine);
  }

It’s simple, I figured. Either the method throws an exception, or at the very least the EverythingIsFine property won’t be set to true, so the assert will catch the problem. But in their case, no exception was thrown and everything wasn’t fine, yet the Assert call wasn’t raising a red flag – unless they stepped through, in which case it did. What’s going on?

The basic problem is that to many developers, NUnit is a kind of magic. You write a self-contained little bit of code, the [Test] method, but you don’t call it yourself, you don’t get a feel for the whole execution flow. The result – developers don’t exercise the same sort of judgement they do on their own application code.

The root of the problem here is that the [ExpectedException] attribute told NUnit to pass the test if an exception – any exception – is thrown. NUnit’s assertion utilities, however, use exceptions as the mechanism for failing tests: when an assertion fails, it throws an exception – in NUnit’s case an AssertionException, and for various mock frameworks an ExpectationException. It doesn’t matter which – it’s these exceptions that make the test fail, not some behind-the-scenes magic. Because the test had an open-ended [ExpectedException] attribute, these exceptions were caught, fulfilling the expectation, and NUnit was happy.

What can we do to avoid this?

  • Be explicit. Don’t try to catch ALL exceptions with [ExpectedException]. If you’re expecting an exception, you’re probably expecting a specific exception. Specify it.
  • Be aware of how your tools work. If NUnit works by throwing an exception, don’t wrap it with a try/catch. Your tests are C# code too, as is the plumbing to enable it. It plays by the same rules.
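The swallowing mechanism is easy to reproduce without NUnit at all. The sketch below is my own stand-in for NUnit’s machinery (AssertionException, MiniAssert and the harness are illustrative names, not NUnit’s real implementation); it shows why an open-ended “expect any exception” wrapper turns a failed assertion into a passing test:

```csharp
using System;

// Stand-in for the exception an assertion library throws on failure.
class AssertionException : Exception
{
    public AssertionException(string msg) : base(msg) { }
}

static class MiniAssert
{
    public static void IsFalse(bool condition)
    {
        if (condition) throw new AssertionException("Expected false, was true");
    }
}

static class OpenEndedDemo
{
    // Simulates a test body run under an open-ended [ExpectedException]:
    // ANY exception - including a failed assertion - counts as a pass.
    public static bool RunWithOpenEndedExpectedException(Action testBody)
    {
        try { testBody(); return false; }   // no exception thrown: test fails
        catch (Exception) { return true; }  // any exception: test "passes"
    }

    static void Main()
    {
        bool everythingIsFine = true; // the buggy state the assert should flag
        bool passed = RunWithOpenEndedExpectedException(
            () => MiniAssert.IsFalse(everythingIsFine));
        Console.WriteLine(passed); // True - the failed assertion was swallowed
    }
}
```

Specifying the expected exception type (or scoping the expectation to the one call that should throw) breaks this false-pass, because an AssertionException no longer satisfies the expectation.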

Open Source Vigilantes?

Sun, 26 Feb 2012 10:37:20 GMT

Moq, Callbacks and Out parameters: a particularly tricky edge case

Fri, 07 Oct 2011 00:16:03 GMT

In my current project we’re using Moq as our mocking framework for unit testing. It’s a very nice package – very simple, very intuitive for someone who has his head wrapped around basic mocking concepts – but tonight I ran into an annoying limitation that, while rare, impeded my tests considerably. Consider the following code:

public interface IOut
{
    void DoOut(out string outval);
}

[Test]
public void TestMethod()
{
    var mock = new Mock<IOut>();
    string outVal = "";
    mock.Setup(o => o.DoOut(out outVal))
        .Callback(theValue => DoSomethingWithTheValue(theValue));
}

What this code SHOULD do is create a mock object for the IOut interface that, when called with an out parameter, both assigns a value to it and does something with the value in the Callback. Unfortunately, this does not work. The reason is that the Callback method has many overloads, but they are all over variants of the Action<> delegate – Action<T> with one parameter, Action<T1, T2, T3> with three, and so on – and none of these delegates has a signature with an out parameter. The result, at runtime, is an error to the effect of “Invalid callback parameters on object ISetup”. Note the mismatch – the Setup method referred to a string& (a ref/out param), while the Callback inferred an Action<string> delegate, which expects a regular string param.

So what CAN we do? The first option is to submit a patch to the Moq project. It’s open-source, and a solution might be appreciated, especially since the project’s lead, Daniel Cazzulino, pretty much acknowledged that this is not supported right now. I might do that later, but right now it’s the middle of the night and I just want my tests to pass.
So I managed to hack together this workaround, which does the trick:

public static class MoqExtension
{
    public delegate void OutAction<TOut>(out TOut outVal);

    public static IReturnsThrows<TMock, TReturn> OutCallback<TMock, TReturn, TOut>(
        this ICallback<TMock, TReturn> mock,
        OutAction<TOut> action)
        where TMock : class
    {
        mock.GetType()
            .Assembly.GetType("Moq.MethodCall")
            .InvokeMember("SetCallbackWithArguments",
                BindingFlags.InvokeMethod
                | BindingFlags.NonPublic
                | BindingFlags.Instance,
                null, mock, new object[] { action });
        return mock as IReturnsThrows<TMock, TReturn>;
    }
}

We’ve created a new delegate, OutAction<TOut>, that has out TOut as its parameter, and we’ve added a new extension method to Moq, OutCallback, which we will use instead of Callback. What this extension method does is use brute-force Reflection to hack into Moq’s internals and call the SetCallbackWithArguments method, which registers the Callback with Moq. Yes, it even works. I’m as surprised as you are. Notes and caveats:
1) This code works for the very specific void Method(out T) signature. You will have to tailor it to the specific methods you want to mock. To make it generic, we’d have to create a bucket-load of OutAction overloads, with different combinations of out and regular parameters.
2) This is very hacky and very fragile. If Moq’s inner implementation changes in a future version (the one I’m using is 4.0.10827.0), it will break.
3) I don[...]

WPF: Snooping Attached Properties

Thu, 01 Sep 2011 11:28:20 GMT

When developing WPF applications, Snoop is a wonderful tool that can let us see our visual tree at runtime, and find errors in data binding that are otherwise hard to track down in debugging. However, Snoop has an annoying limitation – it can’t show us data-bound Attached Properties. Let’s say we have the following XAML:

Saving Your Settings

Tue, 10 Feb 2009 14:38:51 GMT

Tomorrow I am bidding my laptop farewell, and will soon buy me a new one. This results in what I like to call Settings Anxiety. I spend months tuning various aspects of my computer to my liking, and starting afresh with a new machine is wearisome. There are solutions to this, of course. I could make a full settings backup using Windows Easy Transfer or some such tool to store my settings and registry information and reload them on my new computer. I am, however, leery of it. First of all, I don’t trust WET to back up everything, especially 3rd-party settings and application data. Secondly, I don’t want to copy all the cruft that’s accumulated in my user settings. One advantage of leaving the old computer is starting fresh. So my compromise is to do a set of manual exports for various pieces of software, to carry with me to the new computer, whenever it arrives. I’m going to list the various programs I use that I bothered to customize, and detail the steps needed to do a manual export. This way I can come back to it the next time I reformat.

1. Outlook 2007

I use Outlook to connect to an Exchange server and to Gmail via IMAP. Both of these use OST files for local offline caching, but the primary storage is on the server, so there is no real need for me to back this up. Unfortunately, there’s no way to export my Outlook profile’s entire Accounts list to a file and reimport it later. It means I always have to go to Gmail’s help page to remember the port numbers they use for encrypted IMAP and SMTP, but it’s not too much of a hassle.

2. Visual Studio 2008

Two important customizations here – color scheme (I like a grey background) and keyboard shortcuts (single-click build-current-project ftw!). Both are easily exported using Tools –> Import and Export Settings –> Export selected environment settings.

3. Google Chrome

Bookmarks, cookies and stored passwords – things I really don’t feel like retyping again and again after I got my Chrome to memorize them for me. Basically, with all major browsers importing favorites and settings from one another, you only really need to back up one browser’s settings. Unfortunately, Google haven’t quite gotten around to implementing Export Settings in Chrome – like a lot of other things. To back up my settings, I am copying the C:\Users\username\AppData\Local\Google\Chrome\User Data folder. There’s a lot of useless junk there – the cache, Google Gears data – but it should contain all the interesting bits too.

4. RSS Bandit

RSS Bandit’s “Export Feeds” option will probably export the feed definitions, but not the actual text of the feeds that I already have, offline, in my reader. It does support exporting the list of items in a feed, including read/unread state, though. But a better solution, assuming I am unbothered by space or time, is to use RSS Bandit’s “Remote Storage” option to save the entire feed database to a ZIP file that can be “downloaded” into a new installation. Under Tools –> Options –> Remote Storage set the destination (I use a file share), then do Tools –> Upload Feeds. When recovering, I’ve found that RSS Bandit doesn’t always succeed in reading the ZIP file – I got a “file was closed” error. What I did was simply open the ZIP file and copy all the XML files into the %AppData%\RssBandit folder, et voila.

5. Windows Live Writer

Another useful tool that wasn’t given an Export Settings option. There are two things to back up here:
1. My blog settings – I have 5 different blogs I manage through WLW. Here I had to go back to the Registry. Funny how it seems so old-fashioned. Export all settings from HKEY_CURRENT_USER\Software\Microsoft\Windows Live\Writer.
2. Blog drafts and recently-written posts are stored in My Do[...]

Tales from the Unmanaged Side – System.String –> char* (pt. 2)

Tue, 13 Jan 2009 15:04:17 GMT

A while ago, I posted an entry about marshalling a managed System.String into an unmanaged C string, specifically a plain char*. The solution I suggested, back in 2007, involved calling the Marshal::StringToHGlobalAnsi method to allocate memory and copy the string data into it, and then casting the HGlobal pointer to a char*. It seems that in Visual Studio 2008 a new way of doing it was added, as part of the new Marshalling Library. This library provides a whole set of conversions between System.String and popular unmanaged string representations, like char*, wchar_t*, BSTR, CStringT and others I am even less familiar with. The smarter string representations, like std::string, have their own destructors, so I can carelessly let them drop out of scope without worrying about leaks. The more primitive ones, like char*, need to be explicitly released, which is why the Marshalling Library contains a new (managed) class called marshal_context that gives me exactly this explicit release. Let’s compare my old code:

const char* unmanagedString = NULL;
try
{
    String^ managedString = gcnew String("managed string");
    // Note the double cast.
    unmanagedString = (char*)(void*)Marshal::StringToHGlobalAnsi(managedString);
}
finally
{
    // Don't forget to release. Note the ugly casts again.
    Marshal::FreeHGlobal((IntPtr)(void*)unmanagedString);
}

with the new:

marshal_context^ context = gcnew marshal_context();
String^ managedString = gcnew String("managed string");
const char* unmanagedString = context->marshal_as<const char*>(managedString);

Much shorter, I’m sure you’ll agree. And neater – no need for all the icky, icky casting between different pointer types. And most importantly, I don’t have to explicitly release the char* – the marshal_context class keeps a reference to all the strings that were marshalled through it, and when it goes out of scope its destructor makes sure to release them all. Very efficient, all in all.[...]

Static Constructor throws the same exception again and again

Tue, 16 Dec 2008 10:17:39 GMT

Here’s a little gotcha I ran into today: if code in a class’s static constructor throws an exception, we get a TypeInitializationException with the original exception as the InnerException – so far, nothing new. However, if we keep calling methods on that type, we’ll keep receiving TypeInitializationExceptions. If it cannot be initialized, it cannot be called, and every time we try, we’ll receive the exact same exception. The static ctor, however, will not be called again. What appears to happen is that the CLR caches the TypeInitializationException object itself, InnerException included, and rethrows it whenever the type is accessed. What are the ramifications? Well, we received an OutOfMemoryException in our static ctor, but the outer exception was caught and the call retried. So we got an OutOfMemoryException again, even though the memory problem was behind us, which sent us down the wrong track of looking for persistent memory problems. Theoretically, it could also be a leak – the inner exception holding some sort of reference that is never released – but that’s an edge case. Here’s some code to illustrate the problem. The output clearly shows that while we wait a second between calls, the inner exception contains the time of the original exception, not of any subsequent call. Debugging also shows that the static ctor is called only once.
class Program
{
    static void Main(string[] args)
    {
        for (int i = 0; i < 5; i++)
        {
            try
            {
                Thread.Sleep(1000);
                StaticClass.Method();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.InnerException.Message);
            }
        }

        Console.ReadLine();
    }
}

static class StaticClass
{
    static StaticClass()
    {
        throw new Exception("Exception thrown at " + DateTime.Now.ToString());
    }

    public static void Method()
    { }
}
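The caching can also be checked directly, without watching the console. This sketch of my own compares the InnerException across two accesses to a broken type; per the behavior described above, the second access rethrows the cached exception, so the timestamped message is identical both times:

```csharp
using System;

static class Throws
{
    // A static constructor that always fails, with a timestamp in the message
    // so a re-run of the constructor would produce a different message.
    static Throws()
    {
        throw new Exception("Initialization failed at " + DateTime.Now.Ticks);
    }

    public static void Method() { }
}

static class CachedExceptionDemo
{
    public static Exception Capture()
    {
        try { Throws.Method(); return null; }
        catch (TypeInitializationException ex) { return ex.InnerException; }
    }

    static void Main()
    {
        Exception first = Capture();
        Exception second = Capture();
        // The CLR rethrows the cached exception, original timestamp included.
        Console.WriteLine(first.Message == second.Message); // True
    }
}
```

If the static constructor were re-run, the Ticks value in the two messages would differ; they don’t, because the constructor runs only once.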

Upgrading all projects to .NET 3.5

Sun, 25 May 2008 08:56:14 GMT

A simple macro to change the Target Framework for a project from .NET 2.0 to .NET 3.5, hacked together in a few minutes. This will fail for C++/CLI projects, and possibly VB.NET projects (haven't checked). Works fine for regular C# projects, as well as web projects:


For Each proj As Project In DTE.Solution.Projects
    Try
        proj.Properties.Item("TargetFramework").Value = 196613
        Debug.Print("Upgraded {0} to 3.5", proj.Name)
    Catch ex As Exception
        Debug.Print("Failed to upgrade {0} to 3.5", proj.Name)
    End Try
Next proj

Why 196613, you ask? Well, when I see such a number my first instinct is to feed it into calc.exe and switch it to hexadecimal. And I was right: 196613 in decimal converts to 0x00030005 - You can see the framework version hiding in there. Major version in the high word, minor in the low word. The previous TargetFramework number was 131072 - 0x00020000, or 2.0.


(Nitpickers might point out that I could simply set it to &H30005 rather than messing with the obscure decimal number. They would be correct – but I got to this number through the debugger, so that's how it will stay.)
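The word-packing described above is easy to sketch in code (my own illustration, not part of the original macro):

```csharp
using System;

static class TargetFrameworkDecoder
{
    // The TargetFramework property packs the version as 0xMMMMmmmm:
    // major version in the high word, minor version in the low word.
    public static (int Major, int Minor) Decode(int targetFramework) =>
        (targetFramework >> 16, targetFramework & 0xFFFF);

    static void Main()
    {
        Console.WriteLine(TargetFrameworkDecoder.Decode(196613)); // (3, 5) - 0x00030005
        Console.WriteLine(TargetFrameworkDecoder.Decode(131072)); // (2, 0) - 0x00020000
    }
}
```

Going the other way, (major << 16) | minor reproduces the magic numbers: (3 << 16) | 5 is exactly 196613.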

Sneaking past the SharePoint Usage Log

Wed, 16 Apr 2008 15:13:16 GMT

I'm writing a piece of code that regularly polls a MOSS server and checks several files. This can happen quite a few times a day, and this totally skews my Usage Reports for my sites.

Does anyone know of a way to tell MOSS to ignore calls made by a specific user, a specific IP (my code runs on a dedicated server), a specific UserAgent or whatever?

Any signs of life for the SharePoint Extensions for Visual Studio?

Wed, 23 Jan 2008 19:57:31 GMT

Has anything been released - or, in fact, talked about - since August's release of the v1.1 CTP?

Haven't done any web-part development in a while and wanted to get back in the game. I last used the v1.0 extensions, and was surprised that nothing much has changed in that field except for the CTP release, and even that can't be downloaded - I just get a broken link.

Seeking advice - strong names and config files

Thu, 20 Dec 2007 11:48:49 GMT

I'll use my blog for a bit of fishing for advice and guidance on an issue that's been bugging me.

We've been moving towards using strong names on all of our assemblies. The benefits are obvious, and it's a must before we deploy to clients out in the wild.

The problem is that we have several different processes running, each with its own app.config or web.config file. These config files contain references to custom configuration sections, whether they're application configuration, Enterprise Library extensions or whatnot. Seeing as my DLLs are signed, I have to use the fully qualified assembly name in all my references. This means that in a nightly build scenario where my version number is bumped continuously, I have to change 5-6 references in 5-6 configuration files with every build.

Doing this kind of string manipulation on a large scale scares me, since it can break, or we can miss something. I've tried using config directives for this, but they require a specific version to point to as well.


I'm sure I'm not the first person to encounter this problem. What solutions do you use to work around it? Scripts as part of the automated installation? Moving all configuration sections to a separate assembly whose version stays static? What's the least painful way to manage this?
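For what it's worth, one partial mitigation is a bindingRedirect in each config file: since oldVersion accepts a range, a nightly build only needs to bump the single newVersion value per config file, instead of fixing up every fully qualified reference. A sketch (the assembly name, token and version numbers here are purely illustrative):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Hypothetical assembly holding the custom configuration sections -->
        <assemblyIdentity name="MyCompany.Configuration"
                          publicKeyToken="32ab4ba45e0a69a1"
                          culture="neutral" />
        <!-- Any older build is redirected to the current nightly version -->
        <bindingRedirect oldVersion="1.0.0.0-1.0.999.0"
                         newVersion="1.0.1000.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

This still leaves one version number to update per build, so it doesn't eliminate the problem, only narrows the surface area for string-manipulation mistakes.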

System.Diagnostics.EventLogEntry and the non-equal Equals.

Sun, 25 Nov 2007 18:12:21 GMT

Just a quick heads-up in case you're stumped with this problem, or just passing by:

The System.Diagnostics.EventLogEntry class implements the Equals method to check whether two entries are identical, even if they're not the same instance. However, contrary to best practice, it does NOT overload operator==, so these two bits of code will behave differently:

EventLog myEventLog = new EventLog("Application");
EventLogEntry entry1 = myEventLog.Entries[0];
EventLogEntry entry2 = myEventLog.Entries[0];

bool correct = entry1.Equals(entry2);
bool incorrect = entry1 == entry2;

After running this code, correct will be true while incorrect will be false.

Good to know, if you're reading event logs in your code.
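The same trap can be reproduced with any class that overrides Equals without also overloading operator== - the LogLine class below is made up purely for illustration, but it behaves just like EventLogEntry does:

```csharp
using System;

class LogLine
{
    public string Text;

    // Value-based Equals, comparing contents rather than references.
    public override bool Equals(object obj) =>
        obj is LogLine other && other.Text == Text;

    public override int GetHashCode() => Text?.GetHashCode() ?? 0;

    // operator== is deliberately NOT overloaded, so == falls back
    // to the default reference comparison.
}

class Program
{
    static void Main()
    {
        LogLine a = new LogLine { Text = "Service started" };
        LogLine b = new LogLine { Text = "Service started" };

        Console.WriteLine(a.Equals(b)); // True  - compares contents
        Console.WriteLine(a == b);      // False - compares references
    }
}
```

Overloading == to match Equals (or using Equals consistently) avoids the surprise.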

Displaying live log file contents

Wed, 21 Nov 2007 14:12:54 GMT

I'm writing some benchmarking code, which involves a Console application calling a COM+ hosted process and measuring performance. I want to constantly display results on my active console, but since some of my code is running out-of-process, I can't really write directly to the console from all parts of the system. Not to mention the fact that I want it logged to a file as well.

So I cobbled together a quick Log class that does two things. First, it writes to a shared log file, keeping no locks so several processes can access it (I do serialize access to the WriteLine method itself, though). I don't mind the overhead of opening/closing the file every time, since this isn't production code.

The second method is the interesting one - it monitors the log file and returns every new line that is appended to it. If no lines are available, it blocks until one is written. The fun part was using the yield return keyword, which I've been looking for an excuse to use for quite a while now.

Note that there are many places this code can go wrong or should be improved - for one, there is no way to stop it short of stopping the application. I bring this as the basic idea, and it can be cleaned up and improved later:

public static IEnumerable<string> ReadNextLineOrBlock()
{
    // Open the file without locking it, so other processes can keep writing.
    using (FileStream logFile = new FileStream(Filename, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    using (StreamReader reader = new StreamReader(logFile))
    {
        while (true)
        {
            // Read the next line, or poll again shortly if none is available yet.
            string line = reader.ReadLine();
            if (!string.IsNullOrEmpty(line))
            {
                yield return line;
            }
            else
            {
                Thread.Sleep(100);
            }
        }
    }
}

This method can now be called from a worker thread:

private static void StartListenerThread()
{
    Thread t = new Thread(delegate()
    {
        while (true)
        {
            foreach (string line in Log.ReadNextLineOrBlock())
            {
                // I added some formatting, too.
                if (line.Contains("Total"))
                    Console.ForegroundColor = ConsoleColor.Green;

                Console.WriteLine(line);

                Console.ForegroundColor = ConsoleColor.White;
            }
        }
    });

    t.Start();
}

Another minor C#/C++ difference - adding existing files

Sun, 28 Oct 2007 14:35:24 GMT

Another minor detail that bit me for a few minutes today. I'm posting this so I'll see it and remember, and possibly lodge it in other people's consciousness.

Using Visual Studio 2005, adding an existing .cs file to a C# project will cause a copy of that file to be created in the project directory. If I want to link to the original file, I need to explicitly choose Add As Link from the dropdown on the Add button.

C++ projects, however, have the opposite behavior. Adding an existing item in a different folder will add a link to the original file in the original location. To create a copy, manually copy it to the project dir.

That is all.