
Peter Johnson's Blog



ASP.NET and other technologies



 



Unsigned integral types - ushort, uint, and ulong

Mon, 12 Oct 2015 18:44:09 GMT

If you've dug into the .NET documentation much, you've probably seen the framework has unsigned versions of the integral types we use all the time, and C# considers them built-in types with language keywords to easily use them--ushort, uint, and ulong. If you haven't, you may be unfamiliar with them--in 14 years of C# development, I could probably count on a hand or two the number of times I've seen them in production code.

I'm adding a new property to a data object, a count which will never exceed 1000, so I figured ushort was a fitting choice, since it requires less memory & ensures a negative value will never be set. But I took the opportunity to research a bit why we don't see these more often, especially in .NET Framework types and their Count properties.

I was surprised to learn these unsigned types are not CLS-compliant; the docs even recommend not using them if you can avoid it. That, plus consistency with the .NET Framework, outweighs the benefit of having the type system keep negative values from ever being assigned.

Furthermore, even using a short might be overkill on modern processors designed for 32-bit pointers, arithmetic, etc. If shorts get promoted to ints implicitly, it makes sense to use ints explicitly for standalone properties like this; trying to save a few bytes of storage could backfire if it gets allocated later anyway.
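For what it's worth, the compiler makes the CLS problem visible as soon as you opt in to compliance checking. A toy example (type and property names made up):

using System;

[assembly: CLSCompliant(true)]

public class WidgetTally
{
    // warning CS3003: Type of 'WidgetTally.Count' is not CLS-compliant
    public ushort Count { get; set; }

    // No warning, and consistent with the framework's own Count properties.
    public int SafeCount { get; set; }
}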




Updating bound WPF control without tabbing out

Fri, 28 Aug 2015 22:21:00 GMT

I've had an opportunity to do WPF development professionally (as opposed to hobby-tier) for the first time over the last year or so, & while the improvements over WinForms are staggering (and as a lifelong web developer, also familiar), there's plenty that's not intuitive.

One thing that bears a little explanation is this scenario, which I expect would be fairly common.

Scenario

I have a data entry form following the MVP pattern (that's Model-View-Presenter, very similar to MVC in the web world), with text boxes & other controls bound to a C# object. It has a toolbar with a Save button & other buttons on it. The user enters some text, clicks Save, & it's sent to the server to be saved. That's it.

Problem

When I entered data into the text boxes, then clicked Save, it ignored my changes to the last text box I typed into. If I tabbed out of that text box, then clicked Save, it saved fine--but what user is going to think to do that? My application should work the way the user would expect; entering data into a form & saving it shouldn't require training specific to one application.

Fortunately, I'm not the only one who's had this problem. Like the asker, I knew I could solve this by adding UpdateSourceTrigger=PropertyChanged to my text box's binding, but I didn't want to do that--it didn't play well with the fairly complex validation logic in one text box's event handler, nor with the formatting for another. I didn't want to choose between text formatting & validation working correctly, and save working intuitively.

The accepted answer states this happens because the Save button doesn't take focus away from the text box--but only when the Save button is in a toolbar. When I took it out of the toolbar, it took focus & everything worked great. But the toolbar, and thus the button within it, has a different FocusManager.FocusScope, so the last text box retains logical focus even though the Save button gets focused as well. I considered changing this behavior, but again, if this is the standard WPF users are used to, I can understand the benefits & wouldn't want to mess with that streamlined experience.

Solution

The answer lists several ways to fix the behavior. The one I chose, which felt the least like a hack & worked for this user control, was to manually force the update: when the Save button is clicked, I cast Keyboard.FocusedElement as TextBox, call TextBox.GetBindingExpression(TextBox.TextProperty).UpdateSource() (with all the requisite null checks, of course), then retrieve the data from the bound object to send to the server. It worked like a charm, and was easy to centralize in my base control class, though at first glance it looks like it'd be specific to each type of control you want to have this behavior--TextBox, ComboBox, etc. But in my application, there would only be a handful anyway, so that's just a different code path to get the BindingExpression for each--three lines of code per control type.
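In code, the approach boils down to something like this sketch (assuming a Save handler in the base control class; names are illustrative):

using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Input;

private void SaveButton_Click(object sender, RoutedEventArgs e)
{
    // Push the focused TextBox's pending text into its binding source.
    var textBox = Keyboard.FocusedElement as TextBox;
    if (textBox != null)
    {
        BindingExpression binding = textBox.GetBindingExpression(TextBox.TextProperty);
        if (binding != null)
            binding.UpdateSource();
    }

    // ...then retrieve the data from the bound object and send it to the server.
}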




MVC's Html.DropDownList and "There is no ViewData item of type 'IEnumerable<SelectListItem>' that has the key '...'

Fri, 26 Oct 2012 16:30:00 GMT

ASP.NET MVC's HtmlHelper extension methods take out a lot of the HTML-by-hand drudgery to which MVC re-introduced us former WebForms programmers. Another thing to which MVC re-introduced us is poor documentation, after the excellent documentation for most of the rest of ASP.NET and the .NET Framework which I now realize I'd taken for granted.

I'd come to regard using HtmlHelper methods instead of writing HTML by hand as a best practice. When I upgraded a project from MVC 3 to MVC 4, several hidden fields with boolean values broke, because MVC 3 called ToString() on those values implicitly, and MVC 4 threw an exception until you called ToString() explicitly. Fields that used HtmlHelper weren't affected. I then went through dozens of views and manually replaced hidden inputs that had been coded by hand with Html.Hidden calls.

So for a dropdown list I was rendering on the initial page as empty, then populating via JavaScript after an AJAX call, I tried to use a HtmlHelper method:

@Html.DropDownList("myDropdown")

which threw an exception:

System.InvalidOperationException: There is no ViewData item of type 'IEnumerable<SelectListItem>' that has the key 'myDropdown'.

That's funny--I made no indication I wanted to use ViewData. Why was it looking there? Just render an empty select list for me. When I populated the list with items, it worked, but I didn't want to do that:

@Html.DropDownList("myDropdown", new List() { new SelectListItem() { Text = "", Value = "" } })

I removed this dummy item in JavaScript after the AJAX call, so this worked fine, but I shouldn't have to give it a list with a dummy item when what I really want is an empty select.

A bit of research with JetBrains dotPeek (helpfully recommended by Scott Hanselman) revealed the problem. Html.DropDownList requires some sort of data to render or it throws an error. The documentation hints at this but doesn't make it very clear. Behind the scenes, it checks if you've provided the DropDownList method any data. If you haven't, it looks in ViewData. If it's not there, you get the exception above.
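Given that lookup order, presumably you could also satisfy the helper from the controller by seeding ViewData with an empty list under the same key (a sketch; I went a different way):

// In the controller action (requires System.Linq and System.Web.Mvc):
ViewData["myDropdown"] = new SelectList(Enumerable.Empty<string>());

Then @Html.DropDownList("myDropdown") finds the empty SelectList in ViewData and renders an empty select.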

In my case, the helper wasn't doing much for me anyway, so I reverted to writing the HTML by hand (I ain't scared), and amended my best practice: When an HTML control has an associated HtmlHelper method and you're populating that control with data on the initial view, use the HtmlHelper method instead of writing by hand.




How to create a new WCF/MVC/jQuery application from scratch

Sat, 22 Sep 2012 05:18:00 GMT

As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality to existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated, and the technologies to do so have come a long way, and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights as to tie together a lot of information in one place, to make it easy to create a new site from scratch.

Architecture

One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access, for example, the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code--a bit cleaner and more flexible of an SOA implementation.

Consuming the service

Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which do much, but both are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS has also provided such a command since the .NET 1.x ASMX days (Add Web Reference, now Add Service Reference), which I've never really liked: it creates several .cs/.disco/etc. files, some of which contain hardcoded URLs, and adds duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner, as it outputs one C# file (containing several proxy classes) and a config file with settings; it's also easier to use to regenerate the proxy classes when the service changes, and to then maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like:

svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc

I took the generated MyService.cs file and dropped it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but only needed to be done once, as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way.
If you just copy & paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, plus this will get unwieldy if you add more services in the future, each with its own custom binding. Really, as the only mandatory settings are the endpoint's ABC's (address, binding, and contract) you can get away with just this:       Oracle 64-bit assembly throws BadImageFormatException when running unit tests

Fri, 21 Sep 2012 16:04:00 GMT

We recently upgraded to the 64-bit Oracle client. Since then, Visual Studio 2010 unit tests that hit the database (I know, unit tests shouldn't hit the database--they're not perfect) all fail with this error message:

Test method MyProject.Test.SomeTest threw exception:
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.BadImageFormatException: Could not load file or assembly 'Oracle.DataAccess, Version=4.112.3.0, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format.

I resolved this by changing the test settings to run tests in 64-bit. From the Test menu, go to Edit Test Settings, and pick your settings file. Go to Hosts, and change the "Run tests in 32 bit or 64 bit process" dropdown to "Run tests in 64 bit process on 64 bit machine". Now your tests should run.

This fix makes me a little nervous. Visual Studio 2010 and earlier seem to change that file for no apparent reason, add more settings files, etc. If you're not paying attention, you could have TestSettings1.testsettings through TestSettings99.testsettings sitting there and never notice the difference. So it's worth making a note of how to change it in case you have to redo it, and being vigilant about files VS tries to add.

I'm not entirely clear on why this was even a problem. Isn't that the point of an MSIL assembly, that it's not specific to the hardware it runs on? An IL disassembler can open the Oracle.DataAccess.dll in question, and in its Runtime property, I see the value "v4.0.30319 / x64". So I guess the assembly was specifically built to target 64-bit platforms only, possibly due to a 64-bit-specific difference in the external Oracle client upon which it depends. Most other assemblies, especially in the .NET Framework, list "msil", and a couple list "x86". So I guess this is another entry in the long list of ways Oracle refuses to play nice with Windows and .NET.
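If you'd rather check that from code than from a disassembler, something like this shows the same targeting information (assembly path assumed):

using System;
using System.Reflection;

class CheckBitness
{
    static void Main()
    {
        // Reads the assembly's identity from its metadata without executing it.
        AssemblyName name = AssemblyName.GetAssemblyName(@"C:\oracle\Oracle.DataAccess.dll");
        Console.WriteLine(name.ProcessorArchitecture); // Amd64 here; MSIL for AnyCPU assemblies
    }
}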

If this doesn't solve your problem, you can read others' research into this error, and where to change the same test setting in Visual Studio 2012.




MVC 4 and the App_Start folder

Fri, 07 Sep 2012 21:04:00 GMT

I've been delving into ASP.NET MVC 4 a little since its release last month. One thing I was chomping at the bit to explore was its bundling and minification functionality, for which I'd previously used Cassette, and been fairly happy with it. MVC 4's functionality seems very similar to Cassette's; the latter's CassetteConfiguration class matches the former's BundleConfig class, specified in a new directory called App_Start.

At first glance, this seems like another special ASP.NET folder, like App_Data, App_GlobalResources, App_LocalResources, and App_Browsers. But Visual Studio 2010's lack of knowledge about it (no Solution Explorer option to add the folder, nor a fancy icon for it) made me suspicious. I found the MVC 4 project template has five classes there--AuthConfig, BundleConfig, FilterConfig, RouteConfig, and WebApiConfig. Each of these is called explicitly in Global.asax's Application_Start method. Why create separate classes, each with a single static method? Maybe they anticipate a lot more code being added there for large applications, but for small ones, it seems like overkill. (And they seem hastily implemented--some declared as static and some not, in the base namespace instead of an App_Start/AppStart one.) Even for a large application I work on with a substantial amount of code in Global.asax.cs, a RouteConfig might be warranted, but the other classes would remain tiny.
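For reference, the wiring in the template's Global.asax.cs looks like this--one call per App_Start class (reproduced from the MVC 4 Internet template, give or take):

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();

    WebApiConfig.Register(GlobalConfiguration.Configuration);
    FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
    RouteConfig.RegisterRoutes(RouteTable.Routes);
    BundleConfig.RegisterBundles(BundleTable.Bundles);
    AuthConfig.RegisterAuth();
}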

More importantly, it appears App_Start has no special magic like the other folders--it's just convention. I found it first described in the MVC 3 timeframe by Microsoft architect David Ebbo, for the benefit of NuGet and WebActivator; apparently some packages will add their own classes to that directory as well. One of the first appears to be Ninject, as most mentions of that folder mention it, and there's not much information elsewhere about this new folder.




Unity throws SynchronizationLockException while debugging

Fri, 13 Apr 2012 23:43:00 GMT

I've found Unity to be a great resource for writing unit-testable code, and tests targeting it. Sadly, not all those unit tests work perfectly the first time (TDD notwithstanding), and sometimes it's not even immediately apparent why they're failing. So I use Visual Studio's debugger. I then see SynchronizationLockExceptions thrown by Unity calls, when I never did while running the code without debugging. I hit F5 to continue past these distractions, the line that had the exception appears to have completed normally, and I continue on to what I was trying to debug in the first place.

In settings where Unity isn't used extensively, this is just one amongst a handful of annoyances in a tool (Visual Studio) that overall makes my work life much, much easier and more enjoyable. But in larger projects, it can be maddening. Finally it bugged me enough that it was worth researching.

Amongst the first and most helpful Google results was, of course, one at Stack Overflow. The first couple answers were extensive but seemed a bit more involved than I could pull off at this stage in the product's lifecycle. A bit more digging showed that the Microsoft team knows about this bug but hasn't prioritized it into any released build yet. SO users jaster and alex-g proposed workarounds that relieved my pain--just go to Debug|Exceptions..., find the SynchronizationLockException, and uncheck it. As others warned, this will skip over SynchronizationLockExceptions in your code that you want to catch, but that wasn't a concern for me in this case. Thanks, guys; I've used that dialog before, but it's been so long I'd forgotten about it.

Now if I could just do the same for Microsoft.CSharp.RuntimeBinder.RuntimeBinderException... Until then, F5 it is.




MVC's IgnoreRoute syntax

Thu, 11 Nov 2010 22:15:00 GMT

I've had an excuse to mess around with custom route ignoring code in ASP.NET MVC, and am surprised how poorly documented the IgnoreRoute extension method on RouteCollection is (technically it lives on RouteCollectionExtensions--and RouteCollection.Add and the RouteCollection.Ignore method added in .NET 4 are just as poorly covered)--both in the official docs by Microsoft, and by the various bloggers and forum participants who have been using it, some for years.

We all know these work. The first is in Microsoft code; it and the second are about the only examples out there:

routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.Ignore("{*allaspx}", new {allaspx=@".*\.aspx(/.*)?"});

I understand the first ignores .axd requests regardless of what comes after the .axd part, and the second uses a regex custom constraint for more control over what's blocked. But why is pathInfo a special name that we don't have to define? What other special names could we use without defining? What does .* mean in a regex, if that really is a regex? . is exactly one instance, and * is zero or more instances, so putting them together doesn't make sense to me.

Phil Haack provides this equally syntactically confusing example to prevent favicon.ico requests from going through routing:

routes.IgnoreRoute("{*favicon}", new {favicon=@"(.*/)?favicon.ico(/.*)?"});

I also tried these in my code:

routes.IgnoreRoute("myroute");
routes.IgnoreRoute("myroute/{*pathInfo}");

The first worked with URLs that ended in /myroute and /myroute/whatever, but not /myroute/whatever?stuff=23. The second blocked all three of those, but not /somethingelse?stuff=myroute. Why does this work without putting the constant "myroute" in {} like the resource.axd example above? Is resource really a constant in that example, or just a placeholder for which we could have used any name? Do string constants need curly brace delimiters in some cases and not others? An example I found on Steve Smith's blog and on others shows the same thing:

routes.IgnoreRoute("{Content}/{*pathInfo}");

This prevents any requests to the standard Content folder from going through routing. Why the curly braces?

Just to keep things interesting, when I tried to type my IgnoreRoute call above, I accidentally left out the slash:

routes.IgnoreRoute("myroute{*pathInfo}");

which threw "System.ArgumentException: A path segment that contains more than one section, such as a literal section or a parameter, cannot contain a catch-all parameter." OK, so no slash means more than one section, and including a slash means it's only one section?

Has anyone else had more luck using this method, or found any way to go about it other than trial and error?



SQL Server 2005/2008's TRY/CATCH and constraint error handling

Mon, 04 Oct 2010 19:05:00 GMT

I was thrilled that T-SQL finally got the TRY/CATCH construct that many object-oriented languages have had for ages. I had been writing error handling code like this:


BEGIN TRANSACTION TransactionName

...

-- Core of the script - 2 lines of error handling for every line of DDL code
ALTER TABLE dbo.MyChildTable DROP CONSTRAINT FK_MyChildTable_MyParentTableID
IF (@@ERROR <> 0)
    GOTO RollbackAndQuit

...

COMMIT TRANSACTION TransactionName
GOTO EndScript

-- Centralized error handling for the whole script
RollbackAndQuit:
    ROLLBACK TRANSACTION TransactionName
    RAISERROR('Error doing stuff on table MyChildTable.', 16, 1)

EndScript:

GO


...which gets pretty ugly when you have a script that does 5-10 or more such operations and it has to check for an error after every one. With TRY/CATCH, the above becomes:


BEGIN TRANSACTION TransactionName;

BEGIN TRY

...

-- Core of the script - no additional error handling code per line of DDL code
ALTER TABLE dbo.MyChildTable DROP CONSTRAINT FK_MyChildTable_MyParentTableID;

...

COMMIT TRANSACTION TransactionName;

END TRY

-- Centralized error handling for the whole script
BEGIN CATCH
    ROLLBACK TRANSACTION TransactionName;

    DECLARE @ErrorMessage NVARCHAR(4000) = 'Error creating table dbo.MyChildTable. Original error, line [' + CONVERT(VARCHAR(5), ERROR_LINE()) + ']: ' + ERROR_MESSAGE();
    DECLARE @ErrorSeverity INT = ERROR_SEVERITY();
    DECLARE @ErrorState INT = CASE ERROR_STATE() WHEN 0 THEN 1 ELSE ERROR_STATE() END;
    RAISERROR (@ErrorMessage, @ErrorSeverity, @ErrorState)
END CATCH;

GO

Much cleaner in the body of the script; we have to do more work in the CATCH (wouldn't it be nice if T-SQL had a THROW statement like C# to rethrow the exact same error that was caught?), but the core of the script that makes DDL changes is cleaner and more readable. The only serious downside I've found so far is when dropping a constraint, and you get a message like this without TRY/CATCH:


Msg 3728, Level 16, State 1, Line 1
'FK_MyChildTable_MyParentTableID' is not a constraint.
Msg 3727, Level 16, State 0, Line 1
Could not drop constraint. See previous errors.

(In this case I'd check for the constraint before trying to drop it; this is just for illustration. One I've seen more often is trying to drop a primary key and recreate it on a parent table when there are foreign keys on child tables that reference the parent.)

TRY/CATCH shortens the above error to only "Could not drop constraint. See previous errors." with no previous errors shown. Now, some research will reveal why it couldn't drop the constraint, but TRY/CATCH is supposed to make error handling easier and more straightforward, not more obscure. The "See previous errors" line has always struck me as a lazy error message--I bet there's a story amongst seasoned SQL Server developers at Microsoft as to why this throws two error messages instead of one--so I imagine the real problem is in that dual error message more than the TRY/CATCH construct itself as it's implemented in T-SQL.

If anyone has a slick way to get that initial Msg 3728 part of the error, I'm all ears; I've seen several folks ask this question and not one answer yet.




Windows Workflow Foundation (WF) and things I wish were more intuitive

Wed, 26 May 2010 21:29:00 GMT

I've started using Windows Workflow Foundation, and so far have run into a few things that aren't incredibly obvious. Microsoft did a good job of providing a ton of samples, which is handy because you need them to get anywhere with WF. The docs are thin, so I've been bouncing between samples and downloadable labs to figure out how to implement various activities in a workflow.

Code separation or not? You can create a workflow and activity in Visual Studio with or without code separation, i.e. just a .cs "Component" style object with a Designer.cs file, or a .xoml XML markup file with code behind (beside?) it. Absent any obvious advantage to one or the other, I used code separation for workflows and any complex custom activities, and skipped it for custom activities that just inherit from the Activity class and thus don't have anything special in the designer. So far, so good.

Workflow Activity Library project type - What's the point of this separate project type? So far I don't see much advantage to keeping your custom activities in a separate project. I prefer to have as few projects as needed (and no fewer). The designer's Toolbox window seems to find your custom activities just fine no matter where they are, and the debugging experience doesn't seem to be any different.

Designer Properties - This is about the designer, and not specific to WF, but it's nevertheless hindered me a lot more in WF than in Windows Forms or elsewhere. The Properties window does a good job of showing you property values when you hover the mouse over them, but it doesn't do the same for a control's type. So maybe if I named all my activities "x1" and "x2" instead of helpful self-documenting names like "listenForStatusUpdate", I could easily see enough of the type to determine what it is, but with any names longer than those, all I get of the type is "System.Workflow.Act" or "System.Workflow.Compone". Even hitting the dropdown doesn't expand any wider, like the debugger quick watch "smart tag" popups do when you scroll through members. The only way I've found around this in VS 2008 is to widen the Properties dialog, losing precious designer real estate, then shrink it back down when you're done to see what you were doing. Really?

WF Designer - This is about the designer, and I believe is specific to WF. I should be able to edit the XML in a .xoml file, or drag and drop using the designer. With WPF (at least in VS 2010 Ultimate), these are side by side, and changes to one instantly update the other. With WF, I have to right-click on the .xoml file, choose Open With, and pick XML Editor to edit the text. It looks like this is one way WF didn't get the same attention WPF got during .NET Fx 3.0 development.

Service - In the WF world, this is simply a class that talks to the workflow about things outside the workflow, not to be confused with how the term "service" is used in every other context I've seen in the Windows and .NET world, i.e. an executable that waits for events or requests from a client and services them (Windows service, web service, WCF service, etc.).

ListenActivity - Such a great concept, yet so unintuitive. It seems you need at least two branches (EventDrivenActivity instances), one for your positive condition and one for a timeout. The positive condition has a HandleExternalEventActivity, and the timeout has a DelayActivity followed by however you want to handle the delay, e.g. a ThrowActivity.
The timeout is simple enough; wiring up the HandleExternalEventActivity is where things get fun. You need to create a service (see above), and an interface for that service (this seems more complex than should be necessary--why not have activities just wire to a service dir[...]



Migrating from VS 2005 to VS 2008

Wed, 02 Dec 2009 17:03:00 GMT

I recently helped migrate a ton of code from Visual Studio 2005 to 2008, and .NET 2.0 to 3.5. Most of it went very smoothly; the conversion touches every .sln, .csproj, and .Designer.cs file, and puts a bunch of junk in Web.configs, but it rarely produced errors.

One thing I didn't expect was that even for a project running in VS 2008 but targeting .NET Framework 2.0, it will still use the v3.5 C# compiler. As such, it does behave a bit differently than the 2.0 compiler, even when targeting the 2.0 Framework. One piece of code used an internal custom EventArgs class that was consumed via a public delegate. This code compiled fine using the 2.0 C# compiler, but the 3.5 compiler threw this error:

error CS0059: Inconsistent accessibility: parameter type 'MyApp.Namespace.MyEventArgs' is less accessible than delegate 'MyApp.Namespace.MyEventHandler'

It's a goofy situation, the error makes perfect sense, and it was easy to correct (I made both internal), but I expected VS 2008 would use the compiler matching the targeted .NET Framework version. I wouldn't have expected any compilation errors the code didn't have before conversion, not until I changed the targeted Framework version.

Another funny error happened around code analysis. Code analysis ran fine in VS 2005, but in VS 2008, it threw this error (a compilation error, not a code analysis warning):

Running Code Analysis...
C:\Program Files\Microsoft Visual Studio 9.0\Team Tools\Static Analysis Tools\FxCop\FxCopCmd.exe /outputCulture:1033 /out:"bin\Debug\MyApp.Namespace.MyProject.dll.CodeAnalysisLog.xml" /file:"bin\Debug\MyApp.Namespace.MyProject.dll" /directory:"C:\MyStuff\MyApp.Namespace.MyProject\bin\Debug" /directory:"c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727" /directory:"..\..\..\Lib" /rule:"C:\Program Files\Microsoft Visual Studio 9.0\Team Tools\Static Analysis Tools\FxCop\Rules" /ruleid:-Microsoft.Design#CA1012 /ruleid:-Microsoft.Design#CA2210 ... /searchgac /ignoreinvalidtargets /forceoutput /successfile /ignoregeneratedcode /saveMessagesToReport:Active /targetframeworkversion:v2.0 /timeout:120
MSBUILD : error : Invalid settings passed to CodeAnalysis task. See output window for details.
Code Analysis Complete -- 1 error(s), 0 warning(s)
Done building project "MyApp.Namespace.MyProject.csproj" -- FAILED.

I especially like the "See output window for details," which 1. screams of a Visual Studio hack as it is, and 2. doesn't actually give me any more details in this particular case, though Google tells me that other people do get more information in the output window.

I noticed Debug and Release modes both had code analysis enabled (I think switching Framework versions swapped them on me and I accidentally enabled it in Release mode), and Release mode wasn't erroring out but Debug was. I looked at the difference in the csproj file, and in the FxCopCmd.exe calls, and the key seemed to be the /ruleid parameters, of which there were a ton in Debug but not Release. Presumably this is because I had disabled some of the rules in the project properties, so I tried enabling them all. The number of /ruleid params went down, but it still gave the same error. The Code Analysis tab in project properties looked the same between Debug and Release. Finally I unloaded the project, edited the csproj file (I'm glad I found out how to do this within VS, instead of exiting VS and editing it in Notepad), and removed this line, which was present in the Debug PropertyGroup element but not the Release one:

<CodeAnalysisRules>-Microsoft.Design#CA1012;-Microsoft.Design#CA2210...</CodeAnalysisRules>
Code analysis then ran successfully. I imagine this solution isn't ideal for everyone, if you want to enable/disable particular rules, and it's not ideal for us long-term, bu[...]



Windows 7 rocks!

Wed, 02 Dec 2009 00:19:00 GMT

I bought my current PC almost three years ago. I've had my own PC for 15 years or so, and, aside from my first desktop and a laptop I only use when traveling, that was the only time I've bought a whole PC, rather than buying parts and assembling my own (a Frankenputer, as former coworkers affectionately referred to them). Like many of my colleagues who work in Microsoft technologies, I looked into buying a Dell, and they had a fine deal, and more importantly, they had finally started selling AMD processors, which I can proudly say without qualification are the only CPUs in any computer I've owned. I configured one with a dual core, 64-bit processor, and all sorts of new technologies I'd never heard of but were (and appear to still be) the latest and the greatest. ("What's SATA? We use PCI for video again?" I asked myself.)

Windows Vista had RTM'd and was weeks from retail availability, and my PC included a deal to upgrade once Microsoft let Dell send upgrade discs. My PC had a rather small (160 GB, I think) hard drive, which I intended to replace with a 500 GB or so once Vista came out, installing it there fresh instead of trying to upgrade Windows--Windows upgrades have never worked so well for me, whereas fresh installs are fine. Then I heard all the complaining about Vista, and decided to hold off. I ran low on space before Vista SP1 came out, so I got that second hard drive anyway and kept my photos there. From then on, Windows XP Professional worked "well enough" so I stuck with it.

Things got bad a couple months before Windows 7 came out. First, Norton AntiVirus misbehaved. To be fair, the program was about 6-7 years old; I kept it around because it seemed to work well enough, I got it free as a student, and virus definition upgrades were free. Then I noticed the dates on the definitions went from a few days ago to the middle of 1999. The date still changed every week, and it still found upgrades, so I'm guessing it was just a bug in how it displayed the definitions date, but it still made me nervous, as did the prospect of uninstalling old and installing new virus scanners.

Roxio was the next to act up. After Microsoft paved the way with Windows Update, suddenly every software manufacturer was convinced their product was just as important to check at least every week for updates, and the updates just as urgent. Eventually I got Apple to quit bugging me to install Bonjour and Safari, but I couldn't get Roxio (or, perhaps more accurately, the InstallShield Update Manager that came with Roxio) to quit prompting me to check for updates on the 0 products I had told it to check. I googled and finally found a tool on InstallShield's support site I could use to uninstall that piece of it without uninstalling Roxio. That was a mistake. It stopped prompting me, but added about 3 minutes from when Windows comes up after I start my PC until my computer was usable, and in the meantime, Norton was disabled, Windows Firewall was disabled, and programs wouldn't start.

Add to this a nagging problem where my SD/CompactFlash card reader thinks it's USB 1.x intermittently, the ugly way Windows Search was grafted onto Windows XP, and the fact that XP (and earlier versions of Windows--not sure about 7 yet) just slows down after a couple years, and I knew it was time to upgrade once Windows 7 came out. The more I learned about Windows 7 (and, to be fair, much of it was new in Vista and largely unchanged in 7, but I'd barely ever used Vista), the more I liked it.
The way search worked much faster, more efficiently, and was integrated into everything, even the Start Menu (no more reorganizing each program's 20 or so icons so I could find the ones I actually wanted! no more sorting alphabetically every time I install a new program!)... An overhauled [...]



TFS deleted files still show up in Source Control Explorer

Wed, 02 Dec 2009 00:02:00 GMT

One problem I've had in Team Foundation Server since Visual Studio 2005 and still in VS 2008 is when items are deleted by someone else, they still show up in Source Control Explorer, with a gray folder with a red X icon, even with "Show deleted items in the Source Control Explorer" unchecked in VS's Options dialog. Sometimes getting latest of the parent clears things up, but other times it doesn't, even with Get Specific Version with both Overwrite boxes checked to force a get. In this case, the only option I've found is to delete my workspace and recreate it, which means checking in everything beforehand, and getting latest of my working branches afterwards. It's a pain, but as specified here and approved by a Microsoft employee, that may be your only option until it's fixed--fingers crossed for VS 2010. (We won't get into the other things for which my fingers have been crossed since I first used TFS in 2005, things that VSS did just fine, such as rollback, check in changes and keep checked out, and search.)

Anyone have any better solutions? Deleting and recreating your workspace seems a bit drastic.




VS 2008 and .NET 3.5 Beta 2 released, with Go Live

Thu, 26 Jul 2007 21:18:00 GMT

It's official! In one of the first of a few dozen posts you'll read about it, Scott Guthrie announces Visual Studio 2008 and the .NET Framework 3.5 Beta 2 have been released.




Microsoft Sandcastle

Tue, 03 Jul 2007 16:27:00 GMT

In working with my company's offshore developers, I was tasked with providing them documentation on a set of class libraries we use in our applications. In the .NET 1.0/1.1 time frame, we used NDoc (which, sadly, passed away last year) to turn the XML comments output by the C# compiler into CHM help files. After a bit of googling and a false start, I discovered Sandcastle, which Microsoft uses to build the .NET Framework documentation itself. I also discovered from the Sandcastle blog that it takes a whole mess of manual steps to use, which appeared daunting at first glance, and, being a programmer, I was looking for an easier (lazier) way.

From the official Sandcastle download page, I found the Sandcastle Wiki, and from there, an NDoc-like, Visual Studio-like GUI for it creatively titled Sandcastle Help File Builder. Setting up a SHFB project and getting the documentation to compile, and then to look/behave almost exactly like I envisioned, was simple at this point.
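(If you haven't worked with XML comments before: the raw material for all of this is just /// comments in your C# source, which the compiler writes to an XML file via its /doc switch--the "XML documentation file" option in project properties. A made-up example:)

/// <summary>
/// Validates a customer record before saving.
/// </summary>
/// <param name="customer">The record to validate.</param>
/// <returns>True if the record passes all checks.</returns>
public bool Validate(Customer customer)
{
    // Sandcastle/SHFB turns the comments above into help topics.
    return customer != null;
}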

Overall, I'm pretty impressed how easy it was to discover this and find resources to use it--a lot easier than it used to be to fill a component/tool need that Microsoft claims to address (some of the data access pieces in the Visual InterDev 6.0 time frame come to mind). Really, the hardest part was getting the Google terms right!

Again, you can download Sandcastle here. The latest version is the June 2007 CTP (Community Technology Preview, which you probably already know means it's pre-release), released a couple weeks ago.




HTTP modules - subdirectories and private variables

Fri, 02 Mar 2007 21:29:00 GMT

I recently finished (for now--there's always more to do) one of the more complex HTTP modules I've worked on. I have an application first written in the ASP.NET 1.0 beta 2 time frame that's since been upgraded to 1.0, 1.1, and now 2.0. It had a lot of custom authentication and error handling code in global.asax, and for general architecture and server management purposes, I wanted to move this code into separate HTTP modules. I ran into a couple gotchas I wanted to document.

Lesson 1: You can't disable an HTTP module for a subdirectory. I wanted to remove the HTTP module for one subdirectory using the <remove> configuration element, and while it let me put it in my web.config fine and never threw an error a la "It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS.", when I ran it, it went into the module's code as if that config section wasn't there.

Scott Guthrie explained to me why: "HttpModules are application specific. When an application starts up, a number of HttpApplication's get configured (one for each logical thread that will execute in the application), and HttpModules are created and assigned to them. That is why you need to configure them at the application root (or higher) level in config. This enables much better pooling of resources." He pointed out that other HTTP modules in ASP.NET handle this with custom configuration sections, which is more work than I was looking at doing for this release.

An exception to this rule is if that subdirectory itself is configured as an IIS application, but in many cases (including mine), this is more trouble than it's worth, limiting what user controls it can see, requiring its own bin directory... and if you're going to do all that, it's probably going to need its own web.config file anyway.

Lesson 2: You shouldn't use private member variables in an HTTP module. I've been an ASP.NET programmer long enough that I thought I could figure out whether member variables were safe or not. I could argue for it either way, and I wasn't feeling ambitious enough to wade through all that code in Reflector. I finally found official documentation for the HttpApplication class that said, "One instance of the HttpApplication class is used to process many requests in its lifetime; however, it can process only one request at a time. Thus, member variables can be used to store per-request data." So it made sense that if an HTTP module is wired to a particular HttpApplication instance, the same rule would apply, and member variables would be safe in HTTP modules as well.

Again, Scott helped me out by advising me against member variables: "If you have an async operation occur during the request, I believe ASP.NET might switch the HttpModule to another thread to execute. That is my worry with storing local variables. It might work in your dev environment, but generate different results under high-load on a server." And no one needs any more "works great in dev and QA but not production" sorts of issues. He advised storing such things in HttpContext.Items instead, but in my case, the data was so cheap to calculate and consumed rarely enough that I decided to just have a method to calculate it every time. Interestingly, by switching from global.asax (where member variables were fine) to an HTTP module, I took a step back in this department, but overall it was the right move and the way I would have written the app from [...]
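As a footnote, here's a minimal sketch of the HttpContext.Items approach Scott suggested (module and key names made up):

using System;
using System.Web;

public class RequestInfoModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += OnBeginRequest;
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        var app = (HttpApplication)sender;
        // Per-request data belongs in Context.Items rather than in module
        // fields, which aren't guaranteed to stay with one request/thread.
        app.Context.Items["RequestStartedUtc"] = DateTime.UtcNow;
    }

    public void Dispose() { }
}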



ASPInsiders Summit 2006 - C# 3.0 and LINQ

Tue, 16 Jan 2007 21:19:00 GMT

C# 3.0 (not to be confused with the confusingly-named .NET Framework 3.0, which includes C# 2.0, not C# 3.0) was the most exciting thing discussed at the ASPInsiders Summit. When I first learned a bit about LINQ at the 2005 summit, I didn't really get what was so great about taking some mangled SQL syntax and duct-taping it onto the language. Given a few hours for Anders Hejlsberg, the lead architect of C#, to explain LINQ, how it came to be, how it works behind the scenes, and why it's a Good Thing, I changed my mind. He and Scott Guthrie sold me on LINQ (at least, as much as I could be until it's released and I can play with it first-hand).

Most of the big changes in C# 3.0 were driven by LINQ, which is why I'm talking about them in conjunction. But this doesn't mean they're LINQ-specific; I can think of a ton of cases independent of LINQ where this stuff will be useful. It reminds me of when "XML web services" was repeated in every lecture, article, blog post, knowledge base article, and whitepaper from Microsoft that talked about the .NET Framework when it first came out, as if that was the main thing anyone would use the framework for. Though I've done a ton of .NET programming, I think maybe 1% of it has had anything to do with web services. Thankfully, only a few seem to have been fooled by that, and adoption took off anyway.

But back to C# 3.0's new features. These are explained in more depth and with more examples on the LINQ Project site, so check out the overview there for more information--no need for me to duplicate all that.

Local variable type inference (or, "No, it's OK, the var keyword isn't evil anymore"). Former ASP programmers like me instinctively shudder when we see the keyword "var", and recoil a bit when we hear it's being introduced to C#. But this has nothing to do with late binding or strong typing--this is just something to allow programmers to be a little lazier. The basic idea is if you have a long type like Dictionary<int, Order> you're newing up on the right, you don't have to type all that on the left, since the compiler already knows what type it is.

int i = 5;
Dictionary<int, Order> orders = new Dictionary<int, Order>();

is 100% equivalent to:

var i = 5;
var orders = new Dictionary<int, Order>();

The variable orders still has the same Dictionary<int, Order> type it does in the first example. It'll still behave the same as it did, you'll still get the same Visual Studio IntelliSense you did, and the IL will be the same. Var is not variant here--it's "I'm lazy, so let the compiler figure it out."

Extension methods. This allows you to "add" your own methods to types you can't otherwise change yourself, e.g. in the .NET Framework or third-party libraries. So if you've always wanted a string.FooBar() method, just write it as an extension method, include its namespace, and you've got your string.FooBar() method. Everyone's got a pet peeve this can address--some method they wish the framework provided, a string or numeric operation. Now it's yours, whatever you want, however you want it to work.

Lambda expressions. This is another feature for laziness/concision, making it easier to write a function inline than you could with C# 2.0's anonymous methods.

List<Customer> customers = GetCustomerList();
List<Customer> locals = customers.FindAll(c => c.State == "KY");

reads as "Find all c such that c.State equals Kentucky". The compiler infers that the return type o[...]
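To make the extension method idea concrete, here's the string.FooBar() example from above, sketched out (the body is whatever you want it to be):

using System;

public static class StringExtensions
{
    // The "this" modifier on the first parameter is what lets FooBar be
    // called as if it were an instance method on string.
    public static string FooBar(this string s)
    {
        return s + "!";
    }
}

// Elsewhere, with the namespace included: "hello".FooBar() returns "hello!"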



ASPInsiders Summit 2006 - IIS 7

Fri, 29 Dec 2006 18:15:00 GMT

Earlier this month was the ASPInsiders Summit at Microsoft. As finishing the QA cycle for, deploying, and supporting a project at work kept me busy, followed immediately by the holidays, I haven't had much chance to blog about it until now. This is partly a good thing, since it means others more punctual than me have blogged about it first and saved me some trouble. Thanks, Steve!

As usual, they talked a bit about stuff that's recently been released, such as Visual Studio 2005 Team Edition for Database Professionals and the now-released, action-packed VS 2005 SP1, where Web Application Projects become the first-class citizen in the Visual Studio world they should have been since day one.

One of the cooler things from the first day, beginning in Scott Guthrie's opening talk and continuing into its own sessions, was IIS 7. I'm more focused on writing C# code for the web and SQL Server database maintenance than IIS administration lately, but I've been responsible for varying degrees of it since IIS 4. I remember when Gartner published their scathing recommendation that companies ditch IIS until it was rewritten, which as I understand happened soon after with version 6. Still, Scott said IIS 7 is the biggest release of IIS in years (though he may have meant the three and a half years since IIS 6 came out!).

There's a new system.webServer configuration file section for IIS settings, like the system.web settings used for ASP.NET applications today. This puts another nail in the coffin of the horrible, brittle binary metabase from IIS 5 and earlier, moving things like directory browsing and default document settings into a config file, allowing you to set them at the machine level and override them during xcopy deployment per application if you want. And you can make changes programmatically.

IIS 7 also takes another step towards a more modular design, going from about 6 components today to about 42. So you can take things like ISAPI filters, CGI, ASP, ASP.NET, and Windows authentication and enable/disable them as needed for your particular server. Making up for lost time on the security front, this allows administrators to greatly reduce IIS's attack surface. Plus, if you don't like the way IIS does something, you can write your own module in ASP.NET and override it. Scott's example showed the directory list module replaced by one he'd written that, instead of giving you a boring file listing, gives you a thumbnail-based image gallery. Cool!

(As a budding photographer, I was impressed by the pictures he used to demonstrate this--close-ups of lions he'd taken with a monster 400mm lens during a recent vacation to South Africa.)

ASP.NET gets one step closer to (if not all the way to) first-class citizenship. As Mike Volodarsky explains, whereas requests before had to go through the IIS pipeline before reaching the ASP.NET pipeline, the two duplicate layers are now integrated into one. An example of why the old way sucked is how many folks had applications set up as anonymous in IIS, so requests would pass through IIS authentication to ASP.NET, which would then authenticate them. An example of why the new way is cool: since all the authentication happens at the same level--Basic, Anonymous, Windows, and ASP.NET's Forms Authentication, to name a few--you can now have ASP and PHP apps, as well as static files like JPG and CSS, authenticating via ASP.NET Forms Authentication. Cool.
We've got some sites that still have a lot of ASP pages to migrate over, so this could help out a lot.Oh, and they're getting ri[...]



Windows service setup projects - Unable to build custom action error

Mon, 14 Aug 2006 18:22:00 GMT

I ran into another under-documented snag upgrading my solution to Visual Studio 2005. It has a Windows service used for behind-the-scenes maintenance sorts of tasks that run on a regular basis, and a corresponding setup project. Now, it's also got setup projects for command-line apps and web apps, and all of those upgraded fairly well--it got confused on the output, so I had to delete and re-add the appropriate Primary Output to each project, but after that, all the setup projects built fine, except the Windows service one.

It gave this error in the Output window when I tried to build it.


Building file 'C:\whatever\MyProjectSetup.msi'...
ERROR: Unable to build custom action named 'Primary output from MyProject (Release .NET)' because it references an object that has been removed from the project.
Build process cancelled

In the Error List, when I double-clicked on this error, it took me to the Custom Actions window for this project. As documented in this Microsoft Knowledge Base article, the project had the Primary Output listed as a custom action for all four custom action types--Install, Commit, Rollback, and Uninstall. The Uninstall one was the only one highlighted, though, with a red wavy underline. This told me nothing more about the error.

Sometimes in cases like this, when I open the project file in Notepad, it can make it clear what's wrong. For instance, when a Crystal Reports report loses its corresponding .cs file, I can reassociate them by manually editing the project file and reopening the project. But .vdproj files are different beasts--much uglier and harder to manage in Notepad.

That KB article said it was set up correctly, and yet it wasn't building. So I tried deleting all four custom actions, and re-adding them per the directions in the article, and it built, installed, started and ran, and uninstalled just fine. (For what it's worth, Microsoft did document this process in an MSDN Library article.)

One difference I see is that all my project outputs in all my setup projects, including these custom actions, now read "(Release Any CPU)" instead of "(Release .NET)". I'm not sure what difference that makes, nor why I should care (and have to correct it manually), but as far as I can see, it's working fine now. We'll see if I get any nasty surprises when I give it to QA for testing.




Ambiguous match found

Fri, 11 Aug 2006 17:31:00 GMT

Looks like I've been away a few months. Sorry 'bout that. I've had stuff to write about but not enough time to set aside to do so, which is a little ironic; one of the main reasons I created this blog was so I could take 5 minutes and jot down stuff I find out while developing, largely in case it helps others, not so I could write my usual verbose 5000-word essays (which is what my AspAlliance site is for!).

So, on that note, I wanted to share how I solved a goofy error that made no sense. I was upgrading a good-sized web application from .NET 1.1/Visual Studio 2003 to .NET 2.0/Visual Studio 2005 using Scott Guthrie's well-written instructions and the Web Application Projects add-in. Going through VS 2005's automatic conversion and building it was pretty painless when I followed Scott's instructions; one wrinkle I ran into is that it forgot about the three web projects, so I had to add those manually using "Add Existing Project" after I converted the others. But it built, and I went through the app and everything was fine.

I didn't do the "Convert to Web Application" command on the whole project, preferring to keep a closer eye on what it changed and touch it up after. So I started doing it folder by folder per Step 8 in Scott's instructions, and it seemed to work great, until I got to one particular page and got this error:

Parser Error
Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.
Parser Error Message: Ambiguous match found.
Source Error:
Line 1:  <%@ Control language="c#" Inherits="MyApp.Namespace.MyCoolControl" CodeBehind="MyCoolControl.ascx.cs" AutoEventWireup="false" %>
Line 2:  <%@ Import Namespace="MyApp.OtherNamespace" %>
Line 3:
Source File: /MyApp/Namespace/MyCoolControl.ascx    Line: 1

So it says Line 1, Line 1 is highlighted, and yet the only reference in line 1 is a fully-qualified one. To be safe, I pulled all my custom assemblies into the invaluable .NET Reflector and double-checked, and there was only one instance of that class name anywhere.

I tried what I normally try when I get weird ASP.NET errors I shouldn't be getting, a process many of you are probably too familiar with if you've worked with ASP.NET very long: I restarted IIS, deleted and recreated the application in IIS Manager, deleted the Temporary ASP.NET Files directory for MyApp (after closing Visual Studio), restarted Visual Studio and rebuilt the project, etc. None of these helped this time.

Scott's instructions mentioned this error in some cases, notably with the IE WebControls, but there the solution was to specify the full namespace, not just the class name, so that didn't help me. Other Google search results were similarly unhelpful, until I found the blog of a guy named Eran Sandler, who talked about his "Ambiguous match found" error and how he solved it--two protected fields with names that differed only in case, apparently confusing reflection. Sure enough, my code had the same problem: I had myRepeaterControl (name changed to protect the innocent, of course) and MyRepeaterControl in my code-behind. The latter wasn't used and was probably left in mistakenly; I removed it, it still built fine, and the control worked great. Thanks, Eran![...]
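For illustration, the code-behind pattern that triggers this looks like the following (control names from the post, which were already anonymized):

using System.Web.UI;
using System.Web.UI.WebControls;

public partial class MyCoolControl : UserControl
{
    // ASP.NET binds markup to these fields via a case-insensitive reflection
    // lookup; two names differing only in case make that lookup ambiguous.
    protected Repeater myRepeaterControl;
    protected Repeater MyRepeaterControl; // unused leftover--deleting it fixed the error
}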