
Peter Johnson's Blog

ASP.NET and other technologies


Updating dictionary keys and GetHashCode

Thu, 16 Nov 2017 23:26:00 GMT

I just came across an interesting problem looking over a coworker's code. He was calling external code that returned a populated Dictionary. The key of the dictionary is a custom class with simple int properties that implements IEquatable, kind of like a composite key, but only part of the key was populated. He needed all components of the key populated, so he looped through its items and populated those keys.

Later, he attempted to find a value in the dictionary calling Contains/ContainsKey, and it never found it. The keys appeared to match in the debugger; each property of the key class instances was equal. He called GetHashCode in the debugger, and the hash codes matched between what was in the dictionary and the Contains parameter he was looking for. When he replaced the Contains call with a call to Any using the == operator, it found it. But then when he attempted to get the value using the indexer, it couldn't find it. Why could the == operator (and Equals method) find the key, but the Contains method and indexer could not?

As we talked through it and saw in Reference Source how the Dictionary methods were implemented, I realized the problem. When the external code initially populated the Dictionary, the Dictionary internally got the hash code for each key, and used that to store the value in the correct bucket. But when he changed a component of the key, the Dictionary had no way to know the hash code (and thus the bucket it was in) could have changed as well. Even though calling GetHashCode on the updated key returns a matching value, the hash code the Dictionary stores internally is out of date, and doesn't match the key sought. The == operator works because it always looks at the current values of the key's properties, while Contains and the indexer calculate the hash code for the parameter key only, not the key in the dictionary. In this case, it's easier & cleaner to just create a new dictionary with the fully populated keys, so I suggested he change that, and then everything worked as expected.
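A minimal sketch of the failure mode (the key class and property names here are illustrative, not the coworker's actual code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical composite key with equality based on all components
class CompositeKey : IEquatable<CompositeKey>
{
    public int PartA { get; set; }
    public int PartB { get; set; }

    public bool Equals(CompositeKey other) =>
        other != null && PartA == other.PartA && PartB == other.PartB;
    public override bool Equals(object obj) => Equals(obj as CompositeKey);
    public override int GetHashCode() => (PartA * 397) ^ PartB;
}

class Program
{
    static void Main()
    {
        var key = new CompositeKey { PartA = 1 };  // only partly populated
        var dict = new Dictionary<CompositeKey, string> { { key, "value" } };

        key.PartB = 2;  // mutating the key changes its hash code, but the
                        // dictionary still has the entry filed under the old one

        var sought = new CompositeKey { PartA = 1, PartB = 2 };
        Console.WriteLine(key.GetHashCode() == sought.GetHashCode()); // True
        Console.WriteLine(dict.ContainsKey(sought));                  // False
        Console.WriteLine(dict.Keys.Any(k => k.Equals(sought)));      // True
    }
}
```

Rebuilding the dictionary after the keys are fully populated (`dict.ToDictionary(p => p.Key, p => p.Value)` on a fresh copy of the keys, for instance) recomputes every hash code and makes the lookups consistent again.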

That's the approach I tend to use anyway unless there are specific resource, performance, or functional reasons to do otherwise, but it's interesting & fun to come across problems like this occasionally that force you to really dig into .NET code you take for granted & see how it's working under the covers. Kudos to Microsoft's .NET team for its direction over the last few years toward open source, and for making tools like Reference Source available for quick lookups rather than digging into ildasm or 3rd party tools. Were this project on .NET Core, we could look at the code in GitHub directly.

tl;dr When you have a dictionary with composite keys, don't update those keys after adding them to the dictionary or you're asking for trouble.

Struct vs. class

Mon, 09 Oct 2017 15:33:00 GMT

Structs seem to be largely ignored in .NET. Everyone uses them (Int32 is pretty common!), but rarely pays attention to the fact that they're structs instead of classes, and rarely needs to. I've worked on several projects where I don't think a single custom struct was defined. A good .NET/C# programmer should understand the differences between classes and structs as they consume them, but when creating a new type, most folks just create a class without a second thought (a class is usually the right answer anyway).

I came across a case where a new type was created, mostly for use in custom database access code, to represent a composite database key. My first thought was that a struct was more appropriate; conceptually, it's a single "value" that uniquely identifies the data, it would be created and consumed and compared with all properties assigned, and is small and not expected to grow. Microsoft has guidance on when to create a struct or a class, and it met all the requirements but one:

  • It has an instance size under 16 bytes.

This key had 3 ints & 2 int-based enums, for a total of 20 bytes. That's close enough to 16, right? That's not a hard-and-fast rule, is it? Some research brought me to analysis & testing results from Stack Overflow user Göran Andersson, who found the compiler treats structs differently when they cross the 16 byte threshold (at least in the .NET 4.0 timeframe).
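For reference, a sketch of such a key (the field names and enums here are made up) and a quick way to check its size:

```csharp
using System;
using System.Runtime.InteropServices;

enum Region { None }   // enums are int-backed by default
enum Status { None }

// Hypothetical composite database key: 3 ints + 2 int-based enums = 20 bytes
struct CompositeDbKey
{
    public int CompanyId;
    public int DepartmentId;
    public int RecordId;
    public Region Region;
    public Status Status;
}

class Program
{
    static void Main()
    {
        // Prints 20--just over the 16-byte guideline
        Console.WriteLine(Marshal.SizeOf(typeof(CompositeDbKey)));
    }
}
```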

It helps to understand the docs didn't just pick 16 bytes as an arbitrary breaking point. I'd be curious to know if versions of the compiler(s) released in the 7 years since have changed this behavior--one user claims it's grown to 20 bytes for x86 and 24 bytes for x64 as of 2014--but those performance tests will have to wait for now.

Blockchain basics & how it could change everything

Sat, 12 Aug 2017 18:07:00 GMT

One topic I recently dove into in my continuing tech education is blockchain.

It was a little rough getting off the ground. Like many, I had a vague idea what Bitcoin is, that it's supposed to be anonymous & secure, but didn't really understand how or why that worked. I watched a couple videos on it that glossed over the basics & left me with more questions than answers.

I recently found two resources that helped a ton with the basics, and helped me to understand why people call blockchain the biggest thing since the invention of the Internet.

If you listen to software development podcasts, you should listen to SE-Radio. They cover a broad range of technologies & stay topical. They had an episode (1 hour) last month on blockchain, where developer Kishore Bhatia talks with Kieren James-Lubin at BlockApps. In addition to the basics--what's a block? what is mining?--it really clarified where "blockchain" and Bitcoin stop & Ethereum starts, & why we should care about the blockchain implementation Ethereum.

TED Talks has a few good blockchain talks. I found this one by Don Tapscott (20 minutes) very helpful. Everything I've seen on blockchain drops at least a couple suggestions that get me excited about how revolutionary this could be; this TED Talk spent maybe half the video on that & it was jaw-dropping. You can also search for "ted talks blockchain" & find some TEDx Talks on YouTube, though I haven't watched these yet.

I believe all software developers, if not everyone, owe it to themselves to spend at least a few minutes understanding this technology & seeing how it could change the world, before it does.

Office 365 Outlook - goodbye to conversation view

Sat, 06 May 2017 03:38:02 GMT

A couple years ago, one of my email addresses was transitioned from the classic Outlook Web Access to Office 365. I appreciated the cleaner & more modern looking interface, but noticed a few steps backward that made me scratch my head. Things that seemed easy & intuitive in the old OWA, and in Hotmail, were harder or even impossible in Office 365 Outlook.

I'm not a fan of conversation view. That and the reading pane are the first things to go when I use a new Outlook installation. I spent time looking for the setting to turn off conversation view, to no avail. Settings -> Display Settings -> Message List, or Conversations, seemed like perfectly reasonable places to put such a setting, yet there was none to be found. Aghast, I figured Microsoft had just left it out of this relatively young product, and put up with it for a couple years. Reading threads became an adventure: wondering who had said what, who replied inline to the previous message and/or added text above the quoted reply, and having the whole thread marked as read when I only had the time (or Internet connection) to read one or two messages.

Today, the nightmare ended, as I finally figured out the right way to ask Google how to turn it off.

Microsoft's Office support site shows one way, but because they like to break links and because that refers to some other version than what I'm using, I took a screenshot of where the magic setting was for me, hiding in plain sight. Forget the Settings menu, and look for the Filter drop down menu at the far right above the list of messages--er, conversations.


Happily, the mobile web site (which has even fewer settings to adjust) reflects this change as well.

Unsigned integral types - ushort, uint, and ulong

Mon, 12 Oct 2015 18:44:09 GMT

If you've dug into the .NET documentation much, you've probably seen the framework has unsigned versions of the integral types we use all the time, and C# considers them built-in types with language keywords to easily use them--ushort, uint, and ulong. If you haven't, you may be unfamiliar with them--in 14 years of C# development, I could probably count on a hand or two the number of times I've seen them in production code.

I'm adding a new property to a data object, a count which will never exceed 1000, so I figured ushort was a fitting choice, since it requires less memory & ensures a negative value will never be set. But I took the opportunity to research a bit why we don't see those more often, especially in .NET Framework types, like its Count properties.

I was surprised to learn these unsigned types are not CLS-compliant; the docs even recommend not using them if you can avoid it. That plus consistency with the .NET Framework outweighs the benefit of the runtime restricting negative values from getting assigned.

Furthermore, even using a short might be overkill on modern processors designed for 32-bit pointers, arithmetic, etc. If shorts get promoted to ints implicitly, it makes sense to use ints explicitly for standalone properties like this; trying to save a few bytes of storage could backfire if it gets allocated later anyway.
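A quick sketch of both issues--the CLS-compliance warning and the implicit promotion to int (the type and property names are mine):

```csharp
using System;

[assembly: CLSCompliant(true)]

public class ItemCount
{
    // With the assembly marked CLS-compliant, this property draws compiler
    // warning CS3003: the type of 'ItemCount.Count' is not CLS-compliant
    public ushort Count { get; set; }
}

class Program
{
    static void Main()
    {
        ushort a = 500, b = 400;
        // ushort arithmetic is promoted to int, so assigning the result
        // back to a ushort requires an explicit cast anyway
        ushort total = (ushort)(a + b);
        Console.WriteLine(total); // 900
    }
}
```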

Updating bound WPF control without tabbing out

Fri, 28 Aug 2015 22:21:00 GMT

I've had an opportunity to do WPF development professionally (as opposed to hobby-tier) for the first time over the last year or so, & while the improvements over WinForms are staggering (and as a lifelong web developer, also familiar), there's plenty that's not intuitive.

One thing that bears a little explanation is this scenario, which I expect would be fairly common.


I have a data entry form following the MVP pattern (that's Model-View-Presenter, very similar to MVC in the web world), with text boxes & other controls bound to a C# object. It has a toolbar with a Save button & other buttons on it. The user enters some text, clicks Save, & it's sent to the server to be saved. That's it.


When I entered data into the text boxes, then clicked Save, it ignored my changes to the last text box I typed into. If I tabbed out of that text box, then clicked Save, it saved fine, but what user is going to think to do that? My application should work the way the user would expect; entering data into a form & saving it shouldn't require training specific to one application.

Fortunately, I'm not the only one that had this problem. Like in the question, I knew I could solve this by adding UpdateSourceTrigger=PropertyChanged to my text box's binding, but I didn't want to do that--that didn't play well with the event handler with some fairly complex validation logic I had for one of the text boxes, nor the formatting for another. I didn't want to choose between text formatting & validation working correctly, and save working intuitively.

The accepted answer states this happens because the Save button doesn't take focus away from the text box, but only when the Save button is in a toolbar. When I took it out of the toolbar, it received focus & everything worked great. But the toolbar, and thus the button within it, has a different FocusManager.FocusScope, so the last text box retains focus even though the Save button gets focused as well. I considered changing this behavior, but again, if this is the standard users are used to for WPF apps, I can understand the benefits & wouldn't want to mess with that streamlined experience.


The answer lists several ways to fix the behavior. The one I chose, which felt the least like a hack & worked for this user control, was to manually force the update. When the save button is clicked, I cast Keyboard.FocusedElement to TextBox, call TextBox.GetBindingExpression(TextBox.TextProperty).UpdateSource() (with all the requisite null checks, of course), then retrieve the data from the bound object to send to the server. It worked like a charm, and was easy to centralize in my base control class, though at first glance it looks like it'd be specific to each type of control you want to have this behavior--TextBox, ComboBox, etc. But in my application, there would only be a handful anyway, so that's just a different code path to get the BindingExpression for each--three lines of code for each control type.
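Pieced together, the fix reads something like this (a sketch with simplified names, not my exact handler):

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Input;

public partial class MyEntryControl
{
    private void SaveButton_Click(object sender, RoutedEventArgs e)
    {
        // A toolbar button doesn't steal focus from the text box, so push
        // the focused text box's value into the bound object manually first.
        var textBox = Keyboard.FocusedElement as TextBox;
        BindingExpression binding =
            textBox?.GetBindingExpression(TextBox.TextProperty);
        binding?.UpdateSource();

        // ...now the bound model is current; read it and send it to the server.
    }
}
```

For other control types (ComboBox, etc.), the same pattern applies with a different cast and dependency property, e.g. ComboBox and Selector.SelectedItemProperty.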

MVC's Html.DropDownList and "There is no ViewData item of type 'IEnumerable&lt;SelectListItem&gt;' that has the key '...'"

Fri, 26 Oct 2012 16:30:00 GMT

ASP.NET MVC's HtmlHelper extension methods take out a lot of the HTML-by-hand drudgery to which MVC re-introduced us former WebForms programmers. Another thing to which MVC re-introduced us is poor documentation, after the excellent documentation for most of the rest of ASP.NET and the .NET Framework which I now realize I'd taken for granted.

I'd come to regard using HtmlHelper methods instead of writing HTML by hand as a best practice. When I upgraded a project from MVC 3 to MVC 4, several hidden fields with boolean values broke, because MVC 3 called ToString() on those values implicitly, and MVC 4 threw an exception until you called ToString() explicitly. Fields that used HtmlHelper weren't affected. I then went through dozens of views and manually replaced hidden inputs that had been coded by hand with Html.Hidden calls.

So for a dropdown list I was rendering on the initial page as empty, then populating via JavaScript after an AJAX call, I tried to use an HtmlHelper method:

@Html.DropDownList("myDropdown")
which threw an exception:

System.InvalidOperationException: There is no ViewData item of type 'IEnumerable&lt;SelectListItem&gt;' that has the key 'myDropdown'.

That's funny--I made no indication I wanted to use ViewData. Why was it looking there? Just render an empty select list for me. When I populated the list with items, it worked, but I didn't want to do that:

@Html.DropDownList("myDropdown", new List&lt;SelectListItem&gt;() { new SelectListItem() { Text = "", Value = "" } })

I removed this dummy item in JavaScript after the AJAX call, so this worked fine, but I shouldn't have to give it a list with a dummy item when what I really want is an empty select.

A bit of research with JetBrains dotPeek (helpfully recommended by Scott Hanselman) revealed the problem. Html.DropDownList requires some sort of data to render or it throws an error. The documentation hints at this but doesn't make it very clear. Behind the scenes, it checks if you've provided the DropDownList method any data. If you haven't, it looks in ViewData. If it's not there, you get the exception above.
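Given that lookup order, two ways to get an empty dropdown without the dummy-item hack might be (sketches, untested; the key name matches the example above):

```razor
@* Option 1: let the helper find an empty list in ViewData under the key *@
@{ ViewData["myDropdown"] = new List<SelectListItem>(); }
@Html.DropDownList("myDropdown")

@* Option 2: skip the helper and write the empty select by hand *@
<select id="myDropdown" name="myDropdown"></select>
```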

In my case, the helper wasn't doing much for me anyway, so I reverted to writing the HTML by hand (I ain't scared), and amended my best practice: When an HTML control has an associated HtmlHelper method and you're populating that control with data on the initial view, use the HtmlHelper method instead of writing by hand.

How to create a new WCF/MVC/jQuery application from scratch

Sat, 22 Sep 2012 05:18:00 GMT

As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality for existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated, and the technologies to do so have come a long way and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights as to tie together a lot of information in one place, to make it easy to create a new site from scratch.

Architecture

One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access--for example, the DataContract classes the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code--a bit cleaner and more flexible an SOA implementation.

Consuming the service

Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which does much, but both are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS.
VS has also provided an Add Service Reference command since the .NET 1.x ASMX days, which I've never really liked; it creates several .cs/.disco/etc. files, some of which contain hardcoded URL's, and adds duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner: it outputs one C# file (containing several proxy classes) and a config file with settings, it's easier to use to regenerate the proxy classes when the service changes, and you can then maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List&lt;T&gt; for collections, and modified the output files' names and .NET namespace, ending up with a command like:

svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc

I took the generated MyService.cs file and dropped it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but it only needed to be done once, as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way. If you just copy & paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, and it will get unwieldy if you add more services in the future, each with its own custom binding. Really, since the only mandatory settings are the endpoint's ABC's (address, binding, and contract), you can get away with specifying just the endpoint element with those three attributes.
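A trimmed client config along those lines might look like the following (the address matches the svcutil example above; the contract name is a placeholder for your own service interface):

```xml
<system.serviceModel>
  <client>
    <endpoint address="http://localhost:59999/MyService.svc"
              binding="basicHttpBinding"
              contract="MySite.Web.ServiceProxies.IMyService" />
  </client>
</system.serviceModel>
```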

Oracle 64-bit assembly throws BadImageFormatException when running unit tests

Fri, 21 Sep 2012 16:04:00 GMT

We recently upgraded to the 64-bit Oracle client. Since then, Visual Studio 2010 unit tests that hit the database (I know, unit tests shouldn't hit the database--they're not perfect) all fail with this error message:

Test method MyProject.Test.SomeTest threw exception:
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.BadImageFormatException: Could not load file or assembly 'Oracle.DataAccess, Version=, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format.

I resolved this by changing the test settings to run tests in 64-bit. From the Test menu, go to Edit Test Settings, and pick your settings file. Go to Hosts, and change the "Run tests in 32 bit or 64 bit process" dropdown to "Run tests in 64 bit process on 64 bit machine". Now your tests should run.

This fix makes me a little nervous. Visual Studio 2010 and earlier seem to change that file for no apparent reason, add more settings files, etc. If you're not paying attention, you could have TestSettings1.testsettings through TestSettings99.testsettings sitting there and never notice the difference. So it's worth making a note of how to change it in case you have to redo it, and being vigilant about files VS tries to add.

I'm not entirely clear on why this was even a problem. Isn't that the point of an MSIL assembly, that it's not specific to the hardware it runs on? An IL disassembler can open the Oracle.DataAccess.dll in question, and in its Runtime property, I see the value "v4.0.30319 / x64". So I guess the assembly was specifically built to target 64-bit platforms only, possibly due to a 64-bit-specific difference in the external Oracle client upon which it depends. Most other assemblies, especially in the .NET Framework, list "msil", and a couple list "x86". So I guess this is another entry in the long list of ways Oracle refuses to play nice with Windows and .NET.

If this doesn't solve your problem, you can read others' research into this error, and where to change the same test setting in Visual Studio 2012.

MVC 4 and the App_Start folder

Fri, 07 Sep 2012 21:04:00 GMT

I've been delving into ASP.NET MVC 4 a little since its release last month. One thing I was chomping at the bit to explore was its bundling and minification functionality, for which I'd previously used Cassette, and been fairly happy with it. MVC 4's functionality seems very similar to Cassette's; the latter's CassetteConfiguration class matches the former's BundleConfig class, specified in a new directory called App_Start.

At first glance, this seems like another special ASP.NET folder, like App_Data, App_GlobalResources, App_LocalResources, and App_Browsers. But Visual Studio 2010's lack of knowledge about it (no Solution Explorer option to add the folder, nor a fancy icon for it) made me suspicious. I found the MVC 4 project template has five classes there--AuthConfig, BundleConfig, FilterConfig, RouteConfig, and WebApiConfig. Each of these is called explicitly in Global.asax's Application_Start method. Why create separate classes, each with a single static method? Maybe they anticipate a lot more code being added there for large applications, but for small ones, it seems like overkill. (And they seem hastily implemented--some declared as static and some not, in the base namespace instead of an App_Start/AppStart one.) Even for a large application I work on with a substantial amount of code in Global.asax.cs, a RouteConfig might be warranted, but the other classes would remain tiny.
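For context, the MVC 4 template's Global.asax.cs wires these up roughly like so (reproduced from memory of the template, so treat the exact signatures as approximate):

```csharp
using System.Web.Http;
using System.Web.Mvc;
using System.Web.Optimization;
using System.Web.Routing;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();

        // One call per App_Start class, each class wrapping a single method
        WebApiConfig.Register(GlobalConfiguration.Configuration);
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);
        BundleConfig.RegisterBundles(BundleTable.Bundles);
        AuthConfig.RegisterAuth();
    }
}
```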

More importantly, it appears App_Start has no special magic like the other folders--it's just convention. I found it first described in the MVC 3 timeframe by Microsoft architect David Ebbo, for the benefit of NuGet and WebActivator; apparently some packages will add their own classes to that directory as well. One of the first appears to be Ninject, as most mentions of that folder mention it, and there's not much information elsewhere about this new folder.

Unity throws SynchronizationLockException while debugging

Fri, 13 Apr 2012 23:43:00 GMT

I've found Unity to be a great resource for writing unit-testable code, and tests targeting it. Sadly, not all those unit tests work perfectly the first time (TDD notwithstanding), and sometimes it's not even immediately apparent why they're failing. So I use Visual Studio's debugger. I then see SynchronizationLockExceptions thrown by Unity calls, when I never did while running the code without debugging. I hit F5 to continue past these distractions, the line that had the exception appears to have completed normally, and I continue on to what I was trying to debug in the first place.

In settings where Unity isn't used extensively, this is just one amongst a handful of annoyances in a tool (Visual Studio) that overall makes my work life much, much easier and more enjoyable. But in larger projects, it can be maddening. Finally it bugged me enough that it was worth researching.

Amongst the first and most helpful Google results were, of course, some at Stack Overflow. The first couple answers were extensive but seemed a bit more involved than I could pull off at this stage in the product's lifecycle. A bit more digging showed that the Microsoft team knows about this bug but hasn't prioritized it into any released build yet. SO users jaster and alex-g proposed workarounds that relieved my pain--just go to Debug|Exceptions..., find the SynchronizationLockException, and uncheck it. As others warned, this will skip over SynchronizationLockExceptions in your code that you want to catch, but that wasn't a concern for me in this case. Thanks, guys; I've used that dialog before, but it's been so long I'd forgotten about it.

Now if I could just do the same for Microsoft.CSharp.RuntimeBinder.RuntimeBinderException... Until then, F5 it is.

MVC's IgnoreRoute syntax

Thu, 11 Nov 2010 22:15:00 GMT

I've had an excuse to mess around with custom route-ignoring code in ASP.NET MVC, and I'm surprised how poorly the IgnoreRoute extension method on RouteCollection (technically on RouteCollectionExtensions; there are also RouteCollection.Add and RouteCollection.Ignore, the latter added in .NET 4) is documented, both in the official docs by Microsoft and by the various bloggers and forum participants who have been using it, some for years.

We all know these work. The first is in Microsoft code; it and the second are about the only examples out there:

routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.Ignore("{*allaspx}", new {allaspx=@".*\.aspx(/.*)?"});

I understand the first ignores .axd requests regardless of what comes after the .axd part, and the second uses a regex custom constraint for more control over what's blocked. But why is pathInfo a special name that we don't have to define? What other special names could we use without defining? What does .* mean in a regex, if that really is a regex? . is exactly one instance, and * is zero or more instances, so putting them together doesn't make sense to me.

Phil Haack provides this equally syntactically confusing example to prevent favicon.ico requests from going through routing:

routes.IgnoreRoute("{*favicon}", new {favicon=@"(.*/)?favicon.ico(/.*)?"});

I also tried these in my code:


The first blocked URL's that ended in /myroute and /myroute/whatever, but not /myroute/whatever?stuff=23. The second blocked all three of those, but not /somethingelse?stuff=myroute. Why does this work without putting the constant "myroute" in {} like the resource.axd example above? Is resource really a constant in that example, or just a placeholder for which we could have used any name? Do string constants need curly brace delimiters in some cases and not others? An example I found on Steve Smith's blog and on others shows the same thing:


This prevents any requests to the standard Content folder from going through routing. Why the curly braces?

Just to keep things interesting, when I tried to type my IgnoreRoute call above, I accidentally left out the slash:


which threw "System.ArgumentException: A path segment that contains more than one section, such as a literal section or a parameter, cannot contain a catch-all parameter." OK, so no slash means more than one section, and including a slash means it's only one section?

Has anyone else had more luck using this method, or found any way to go about it other than trial and error?

SQL Server 2005/2008's TRY/CATCH and constraint error handling

Mon, 04 Oct 2010 19:05:00 GMT

I was thrilled that T-SQL finally got the TRY/CATCH construct that many object-oriented languages have had for ages. I had been writing error handling code like this:

BEGIN TRANSACTION TransactionName
...
-- Core of the script - 2 lines of error handling for every line of DDL code
ALTER TABLE dbo.MyChildTable DROP CONSTRAINT FK_MyChildTable_MyParentTableID
IF (@@ERROR <> 0)
    GOTO RollbackAndQuit
...
COMMIT TRANSACTION TransactionName
GOTO EndScript

-- Centralized error handling for the whole script
RollbackAndQuit:
    ROLLBACK TRANSACTION TransactionName
    RAISERROR('Error doing stuff on table MyChildTable.', 16, 1)
EndScript:
GO

...which gets pretty ugly when you have a script that does 5-10 or more such operations and has to check for an error after every one. With TRY/CATCH, the above becomes:

BEGIN TRANSACTION TransactionName;
BEGIN TRY
...
-- Core of the script - no additional error handling code per line of DDL code
ALTER TABLE dbo.MyChildTable DROP CONSTRAINT FK_MyChildTable_MyParentTableID;
...
COMMIT TRANSACTION TransactionName;
END TRY
-- Centralized error handling for the whole script
BEGIN CATCH
    ROLLBACK TRANSACTION TransactionName;
    DECLARE @ErrorMessage NVARCHAR(4000) = 'Error creating table dbo.MyChildTable. Original error, line [' + CONVERT(VARCHAR(5), ERROR_LINE()) + ']: ' + ERROR_MESSAGE();
    DECLARE @ErrorSeverity INT = ERROR_SEVERITY();
    DECLARE @ErrorState INT = CASE ERROR_STATE() WHEN 0 THEN 1 ELSE ERROR_STATE() END;
    RAISERROR (@ErrorMessage, @ErrorSeverity, @ErrorState);
END CATCH;
GO

Much cleaner in the body of the script; we have to do more work in the CATCH (wouldn't it be nice if T-SQL had a THROW statement like C# to rethrow the exact same error that was caught?), but the core of the script that makes DDL changes is cleaner and more readable.
The only serious downside I've found so far is when dropping a constraint, and you get a message like this without TRY/CATCH:

Msg 3728, Level 16, State 1, Line 1
'FK_MyChildTable_MyParentTableID' is not a constraint.
Msg 3727, Level 16, State 0, Line 1
Could not drop constraint. See previous errors.

(In this case I'd check for the constraint before trying to drop it; this is just for illustration. One I've seen more often is trying to drop a primary key and recreate it on a parent table when there are foreign keys on child tables that reference the parent.) TRY/CATCH shortens the above error to only "Could not drop constraint. See previous errors." with no previous errors shown. Now, some research will reveal why it couldn't drop the constraint, but TRY/CATCH is supposed to make error handling easier and more straightforward, not more obscure. The "See previous errors" line has always struck me as a lazy error message--I bet there's a story amongst seasoned SQL Server developers at Microsoft as to why this throws two error messages instead of one--so I imagine the real problem is in that dual error message more than the TRY/CATCH construct itself as it's implemented in T-SQL. If anyone has a slick way to get that initial Msg 3728 part of the error, I'm all ears; I've seen several folks ask this question and not one answer yet. [...]

Windows Workflow Foundation (WF) and things I wish were more intuitive

Wed, 26 May 2010 21:29:00 GMT

I've started using Windows Workflow Foundation, and so far I've run into a few things that aren't incredibly obvious. Microsoft did a good job of providing a ton of samples, which is handy because you need them to get anywhere with WF. The docs are thin, so I've been bouncing between samples and downloadable labs to figure out how to implement various activities in a workflow.

Code separation or not? You can create a workflow and activity in Visual Studio with or without code separation, i.e. just a .cs "Component"-style object with a Designer.cs file, or a .xoml XML markup file with code behind (beside?) it. Absent any obvious advantage to one or the other, I used code separation for workflows and any complex custom activities, and no code separation for custom activities that just inherit from the Activity class and thus don't have anything special in the designer. So far, so good.

Workflow Activity Library project type - What's the point of this separate project type? So far I don't see much advantage to keeping your custom activities in a separate project. I prefer to have as few projects as needed (and no fewer). The designer's Toolbox window seems to find your custom activities just fine no matter where they are, and the debugging experience doesn't seem to be any different.

Designer Properties - This is about the designer, and not specific to WF, but it has nevertheless hindered me a lot more in WF than in Windows Forms or elsewhere. The Properties window does a good job of showing you property values when you hover the mouse over them, but it doesn't do the same to show you a control's type. So if I named all my activities "x1" and "x2" instead of helpful self-documenting names like "listenForStatusUpdate", I could see enough of the type to determine what it is, but with any longer names, all I get of the type is "System.Workflow.Act" or "System.Workflow.Compone". Even hitting the dropdown doesn't expand any wider, the way the debugger's quick watch "smart tag" popups do when you scroll through members. The only way I've found around this in VS 2008 is to widen the Properties dialog, losing precious designer real estate, then shrink it back down when I'm done to see what I was doing. Really?

WF Designer - This is about the designer, and I believe it is specific to WF. I should be able to edit the XML in a .xoml file, or drag and drop using the designer. With WPF (at least in VS 2010 Ultimate), these are side by side, and changes to one instantly update the other. With WF, I have to right-click the .xoml file, choose Open With, and pick XML Editor to edit the text. It looks like this is one place where WF didn't get the same attention WPF got during .NET Fx 3.0 development.

Service - In the WF world, this is simply a class that talks to the workflow about things outside the workflow. It's not to be confused with how the term "service" is used in every other context I've seen in the Windows and .NET world, i.e. an executable that waits for events or requests from a client and services them (Windows service, web service, WCF service, etc.).

ListenActivity - Such a great concept, yet so unintuitive. It seems you need at least two branches (EventDrivenActivity instances): one for your positive condition and one for a timeout. The positive condition has a HandleExternalEventActivity, and the timeout has a DelayActivity followed by however you want to handle the delay, e.g. a ThrowActivity. The timeout is simple enough; wiring up the HandleExternalEventActivity is where things get fun. You need to create a service (see above), and an interface for that [...]
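To make that wiring a bit more concrete, here's a minimal sketch of the pieces the HandleExternalEventActivity branch needs. The names (IStatusUpdateService, StatusUpdateEventArgs, and so on) are my own placeholders, not from the post; the [ExternalDataExchange] attribute and the ExternalDataEventArgs base class are the actual WF types from System.Workflow.Activities:

```csharp
using System;
using System.Workflow.Activities;

// Event args raised to the workflow; must be serializable and derive
// from ExternalDataEventArgs, which carries the workflow instance ID.
[Serializable]
public class StatusUpdateEventArgs : ExternalDataEventArgs
{
    private readonly string status;

    public StatusUpdateEventArgs(Guid instanceId, string status)
        : base(instanceId)
    {
        this.status = status;
    }

    public string Status
    {
        get { return status; }
    }
}

// The interface the HandleExternalEventActivity binds to.
[ExternalDataExchange]
public interface IStatusUpdateService
{
    event EventHandler<StatusUpdateEventArgs> StatusUpdated;
}

// The "service" in the WF sense: host code calls RaiseStatusUpdated,
// which fires the event that unblocks the listening workflow.
public class StatusUpdateService : IStatusUpdateService
{
    public event EventHandler<StatusUpdateEventArgs> StatusUpdated;

    public void RaiseStatusUpdated(Guid workflowInstanceId, string status)
    {
        EventHandler<StatusUpdateEventArgs> handler = StatusUpdated;
        if (handler != null)
        {
            handler(null, new StatusUpdateEventArgs(workflowInstanceId, status));
        }
    }
}
```

The HandleExternalEventActivity's InterfaceType then points at IStatusUpdateService and its EventName at "StatusUpdated", and an instance of StatusUpdateService gets registered with the runtime's ExternalDataExchangeService.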

Migrating from VS 2005 to VS 2008

Wed, 02 Dec 2009 17:03:00 GMT

I recently helped migrate a ton of code from Visual Studio 2005 to 2008, and from .NET 2.0 to 3.5. Most of it went very smoothly; the conversion touches every .sln, .csproj, and .Designer.cs file, and puts a bunch of junk in Web.configs, but it rarely encountered errors.

One thing I didn't expect: even for a project running in VS 2008 but targeting .NET Framework 2.0, VS 2008 still uses the v3.5 C# compiler, which behaves a bit differently than the 2.0 compiler even when targeting the 2.0 Framework. One piece of code used an internal custom EventArgs class that was consumed via a public delegate. This code compiled fine using the 2.0 C# compiler, but the 3.5 compiler threw this error:

error CS0059: Inconsistent accessibility: parameter type 'MyApp.Namespace.MyEventArgs' is less accessible than delegate 'MyApp.Namespace.MyEventHandler'

It's a goofy situation, the error makes perfect sense, and it was easy to correct (I made both internal), but I expected VS 2008 to use the compiler matching the targeted .NET Framework version. I wouldn't have expected any compilation errors the code didn't have before conversion, at least not until I changed the targeted Framework version.

Another funny error happened around code analysis. Code analysis ran fine in VS 2005, but in VS 2008, it threw this error (a compilation error, not a code analysis warning):

Running Code Analysis...
C:\Program Files\Microsoft Visual Studio 9.0\Team Tools\Static Analysis Tools\FxCop\FxCopCmd.exe /outputCulture:1033 /out:"bin\Debug\MyApp.Namespace.MyProject.dll.CodeAnalysisLog.xml" /file:"bin\Debug\MyApp.Namespace.MyProject.dll" /directory:"C:\MyStuff\MyApp.Namespace.MyProject\bin\Debug" /directory:"c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727" /directory:"..\..\..\Lib" /rule:"C:\Program Files\Microsoft Visual Studio 9.0\Team Tools\Static Analysis Tools\FxCop\Rules" /ruleid:-Microsoft.Design#CA1012 /ruleid:-Microsoft.Design#CA2210 ... /searchgac /ignoreinvalidtargets /forceoutput /successfile /ignoregeneratedcode /saveMessagesToReport:Active /targetframeworkversion:v2.0 /timeout:120
MSBUILD : error : Invalid settings passed to CodeAnalysis task. See output window for details.
Code Analysis Complete -- 1 error(s), 0 warning(s)
Done building project "MyApp.Namespace.MyProject.csproj" -- FAILED.

I especially like the "See output window for details," which 1. screams of a Visual Studio hack as it is, and 2. doesn't actually give me any more details in this particular case, though Google tells me that other people do get more information in the output window.

I noticed Debug and Release modes both had code analysis enabled (I think switching Framework versions swapped them on me and I accidentally enabled it in Release mode), and Release mode wasn't erroring out, but Debug was. I looked at the differences in the .csproj file and in the FxCopCmd.exe calls, and the key seemed to be the /ruleid parameters, of which there were a ton in Debug but not Release. Presumably this is because I had disabled some of the rules in the project properties, so I tried enabling them all. The number of /ruleid params went down, but it still gave the same error. The Code Analysis tab in project properties looked the same between Debug and Release. Finally I unloaded the project, edited the .csproj file (I'm glad I found out how to do this within VS, instead of exiting VS and editing it in Notepad), and removed this line, which was present in the Debug PropertyGroup element but not the Release one:

-Microsoft.Design#CA1012;-Microsoft.Design#CA2210...

Code analysis then ran successfully. I imagine this solution isn'[...]
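The CS0059 accessibility mismatch described above is easy to reproduce. This is a hypothetical minimal version of the situation (the real types lived in the application's own namespaces):

```csharp
using System;

// An internal type exposed through a public delegate's signature.
internal class MyEventArgs : EventArgs { }

// This declaration is the shape that triggered the error -- a public
// delegate taking an internal parameter type:
//
//   public delegate void MyEventHandler(object sender, MyEventArgs e);
//   // error CS0059: Inconsistent accessibility: parameter type
//   // 'MyEventArgs' is less accessible than delegate 'MyEventHandler'
//
// The fix from the post: make both internal so the accessibility matches.
internal delegate void MyEventHandler(object sender, MyEventArgs e);
```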

Windows 7 rocks!

Wed, 02 Dec 2009 00:19:00 GMT

I bought my current PC almost three years ago. I've had my own PC for 15 years or so, and, aside from my first desktop and a laptop I only use when traveling, that was the only time I've bought a whole PC, rather than buying parts and assembling my own (a "Frankenputer," as former coworkers affectionately referred to them). Like many of my colleagues who work in Microsoft technologies, I looked into buying a Dell, and they had a fine deal, and more importantly, they had finally started selling AMD processors, which I can proudly say without qualification is the only CPU in any computer I've owned. I configured one with a dual-core, 64-bit processor, and all sorts of new technologies I'd never heard of but that were (and appear to still be) the latest and greatest. ("What's SATA? We use PCI for video again?" I asked myself.)

Windows Vista had RTM'd and was weeks from retail availability, and my PC included a deal to upgrade once Microsoft let Dell send upgrade discs. My PC had a rather small (160 GB, I think) hard drive, which I intended to replace with a 500 GB or so once Vista came out, installing it there fresh instead of trying to upgrade Windows--Windows upgrades have never worked well for me, whereas fresh installs are fine. Then I heard all the complaining about Vista, and decided to hold off. I ran low on space before Vista SP1 came out, so I got that second hard drive anyway and kept my photos there. From then on, Windows XP Professional worked "well enough," so I stuck with it.

Things got bad a couple months before Windows 7 came out. First, Norton AntiVirus misbehaved. To be fair, the program was about 6-7 years old; I kept it around because it seemed to work well enough, I got it free as a student, and virus definition upgrades were free. Then I noticed the dates on the definitions went from a few days ago to the middle of 1999. The date still changed every week, and it still found upgrades, so I'm guessing it was just a bug in how it displayed the definitions date, but it still made me nervous, as did the prospect of uninstalling the old virus scanner and installing a new one.

Roxio was the next to act up. After Microsoft paved the way with Windows Update, suddenly every software manufacturer was convinced their product was just as important to check at least every week for updates, and the updates just as urgent. Eventually I got Apple to quit bugging me to install Bonjour and Safari, but I couldn't get Roxio (or, perhaps more accurately, the InstallShield Update Manager that came with Roxio) to quit prompting me to check for updates on the zero products I had told it to check. I googled and finally found a tool on InstallShield's support site that I could use to uninstall that piece of it without uninstalling Roxio. That was a mistake. It stopped prompting me, but it added about 3 minutes from when Windows comes up after I start my PC until my computer was usable, and in the meantime, Norton was disabled, Windows Firewall was disabled, and programs wouldn't start.

Add to this a nagging problem where my SD/CompactFlash card reader intermittently thinks it's USB 1.x, the ugly way Windows Search was grafted onto Windows XP, and the fact that XP (and earlier versions of Windows--not sure about 7 yet) just slows down after a couple years, and I knew it was time to upgrade once Windows 7 came out. The more I learned about Windows 7 (and, to be fair, much of it was new in Vista and largely unchanged in 7, but I'd barely ever used Vista), the more I liked it. The way search worked much faster, more efficiently, and was integrated into everything, even the Start Menu (no more reorganizing each program's 20 or so icons so I could find the ones [...]

TFS deleted files still show up in Source Control Explorer

Wed, 02 Dec 2009 00:02:00 GMT

One problem I've had in Team Foundation Server since Visual Studio 2005 and still in VS 2008 is when items are deleted by someone else, they still show up in Source Control Explorer, with a gray folder with a red X icon, even with "Show deleted items in the Source Control Explorer" unchecked in VS's Options dialog. Sometimes getting latest of the parent clears things up, but other times it doesn't, even with Get Specific Version with both Overwrite boxes checked to force a get. In this case, the only option I've found is to delete my workspace and recreate it, which means checking in everything beforehand, and getting latest of my working branches afterwards. It's a pain, but as specified here and approved by a Microsoft employee, that may be your only option until it's fixed--fingers crossed for VS 2010. (We won't get into the other things for which my fingers have been crossed since I first used TFS in 2005, things that VSS did just fine, such as rollback, check in changes and keep checked out, and search.)

Anyone have any better solutions? Deleting and recreating your workspace seems a bit drastic.
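For what it's worth, the delete-and-recreate dance can at least be scripted with tf.exe. This is a rough sketch with placeholder names (MyWorkspace, $/MyProject), not a tested recipe, so check the switches against your tf.exe version before relying on it:

```shell
REM Check in (or shelve) everything first -- deleting the workspace
REM discards pending changes.
tf checkin /recursive

REM Delete the stale workspace, then recreate it with the same name.
tf workspace /delete MyWorkspace
tf workspace /new MyWorkspace

REM Get latest into the fresh workspace.
tf get $/MyProject /recursive
```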

VS 2008 and .NET 3.5 Beta 2 released, with Go Live

Thu, 26 Jul 2007 21:18:00 GMT

It's official! In one of the first of a few dozen posts you'll read about it, Scott Guthrie announces Visual Studio 2008 and the .NET Framework 3.5 Beta 2 have been released.

Microsoft Sandcastle

Tue, 03 Jul 2007 16:27:00 GMT

In working with my company's offshore developers, I was tasked with providing them documentation on a set of class libraries we use in our applications. In the .NET 1.0/1.1 time frame, we used NDoc (which, sadly, passed away last year) to turn the XML comments output by the C# compiler into CHM help files. After a bit of googling and a false start, I discovered Sandcastle, which Microsoft uses to build the .NET Framework documentation itself. I also discovered from the Sandcastle blog that it takes a whole mess of manual steps to use, which appeared daunting at first glance, and, being a programmer, I was looking for an easier (lazier) way.

From the official Sandcastle download page, I found the Sandcastle Wiki, and from there, an NDoc-like, Visual Studio-like GUI for it creatively titled Sandcastle Help File Builder. Setting up a SHFB project and getting the documentation to compile, and then to look/behave almost exactly like I envisioned, was simple at this point.

Overall, I'm pretty impressed how easy it was to discover this and find resources to use it--a lot easier than it used to be to fill a component/tool need that Microsoft claims to address (some of the data access pieces in the Visual InterDev 6.0 time frame come to mind). Really, the hardest part was getting the Google terms right!

Again, you can download Sandcastle here. The latest version is the June 2007 CTP (Community Technology Preview, which you probably already know means it's pre-release), released a couple weeks ago.

HTTP modules - subdirectories and private variables

Fri, 02 Mar 2007 21:29:00 GMT

I recently finished (for now--there's always more to do) one of the more complex HTTP modules I've worked on. I have an application first written in the ASP.NET 1.0 beta 2 time frame that's since been upgraded to 1.0, 1.1, and now 2.0. It had a lot of custom authentication and error handling code in global.asax, and for general architecture and server management purposes, I wanted to move this code into separate HTTP modules. I ran into a couple gotchas I wanted to document.

Lesson 1: You can't disable an HTTP module for a subdirectory. I wanted to remove the HTTP module for one subdirectory using the configuration element, and while it let me put it in my web.config fine and never threw an error a la "It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS.", when I ran it, it went into the module's code as if that config section wasn't there.

Scott Guthrie explained to me why: "HttpModules are application specific. When an application starts up, a number of HttpApplication's get configured (one for each logical thread that will execute in the application), and HttpModules are created and assigned to them. That is why you need to configure them at the application root (or higher) level in config. This enables much better pooling of resources." He pointed out that other HTTP modules in ASP.NET handle this with custom configuration sections, which is more work than I was looking at doing for this release. An exception to this rule is if that subdirectory itself is configured as an IIS application, but in many cases (including mine), this is more trouble than it's worth, limiting what user controls it can see, requiring its own bin directory... and if you're going to do all that, it's probably going to need its own web.config file anyway.

Lesson 2: You shouldn't use private member variables in an HTTP module. I've been an ASP.NET programmer long enough that I thought I could figure out whether member variables were safe or not. I could argue for it either way, and I wasn't feeling ambitious enough to wade through all that code in Reflector. I finally found official documentation for the HttpApplication class that said, "One instance of the HttpApplication class is used to process many requests in its lifetime; however, it can process only one request at a time. Thus, member variables can be used to store per-request data." So it made sense that if an HTTP module is wired to a particular HttpApplication instance, the same rule would apply, and member variables would be safe in HTTP modules as well.

Again, Scott helped me out by advising me against member variables: "If you have an async operation occur during the request, I believe ASP.NET might switch the HttpModule to another thread to execute. That is my worry with storing local variables. It might work in your dev environment, but generate different results under high-load on a server." And no one needs any more "works great in dev and QA but not production" sorts of issues. He advised storing such things in HttpContext.Items instead, but in my case, the data was so cheap to calculate and consumed rarely enough that I decided to just have a method to calculate it every time. Interestingly, by switching from global.asax (where member variables were fine) to an HTTP module, I took a s[...]
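For the record, the HttpContext.Items approach Scott suggested looks something like this. This is my own illustrative module, not the one from the application; the key name and the choice of events are arbitrary:

```csharp
using System;
using System.Web;

// Per-request state lives in HttpContext.Items (scoped to one request)
// rather than in instance fields, so no state leaks across requests or
// threads no matter how ASP.NET schedules the module.
public class RequestTimingModule : IHttpModule
{
    private const string StartTimeKey = "RequestTimingModule.StartTime";

    public void Init(HttpApplication application)
    {
        application.BeginRequest += new EventHandler(OnBeginRequest);
        application.EndRequest += new EventHandler(OnEndRequest);
    }

    private static void OnBeginRequest(object sender, EventArgs e)
    {
        HttpContext context = ((HttpApplication)sender).Context;
        // Stash the per-request value in Items, not in a field.
        context.Items[StartTimeKey] = DateTime.UtcNow;
    }

    private static void OnEndRequest(object sender, EventArgs e)
    {
        HttpContext context = ((HttpApplication)sender).Context;
        DateTime started = (DateTime)context.Items[StartTimeKey];
        // ...use 'started' for logging, diagnostics, etc.
    }

    public void Dispose() { }
}
```

Per lesson 1, the module would still be registered at the application level, in web.config's httpModules section.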