
Alex Papadimoulis' .NET Blog

Alex's musings about .NET and other Microsoft technologies


ScriptOnly - The Opposite of a NOSCRIPT

Thu, 21 Feb 2008 18:07:00 GMT

Despite all of the advances in client-side scripting, the wonderful JavaScript libraries like Prototype and Scriptaculous, and the ease of writing AJAXy code in ASP.NET, there's still one aspect of modern web development that can be a complete pain in the butt: accessibility for users without JavaScript. If you're lucky – perhaps you're developing an Intranet application, or the like – a simple "this site requires JavaScript" notice is all it takes. But other times, you need to take that extra step and make it work for those with and without JavaScript enabled. There are a lot of ways this can be accomplished, but one of the more popular is the SCRIPT/NOSCRIPT combo. A lot of developers resort to something like this:

  <script type="text/javascript">
    document.write('Only users with JavaScript will see me.');
  </script>
  <noscript>
    Only users without JavaScript will see me.
  </noscript>

While this works fine in a lot of scenarios, it can get especially tricky when you want to put server-side controls on the SCRIPT side of things, and of course, things quickly get much uglier once you do this in the real world.

One solution that I use is a simple, custom control called ScriptOnly. It works just like this: wrap the control around whatever should appear only when JavaScript is enabled – for example, a LinkButton for JavaScript users, with a plain old submit button in a NOSCRIPT block for everyone else. What's neat about this technique is that you can put any type of content – server controls, HTML, script tags, etc. – inside it, and that content will only be displayed for JavaScript users. In essence, it works like a reverse NOSCRIPT tag.

Behind the scenes, ScriptOnly is a very simple control: it renders its children to a buffer and then re-emits them, line by line, inside a script block of document.write() calls, escaping the text so it survives as a JavaScript string literal.

  [ParseChildren(false)]
  public class ScriptOnly : Control
  {
    protected override void Render(HtmlTextWriter writer)
    {
      //Render contents to a StringWriter
      StringWriter renderedContents = new StringWriter();
      base.Render(new HtmlTextWriter(renderedContents));

      //write out the contents, line by line, via document.write()
      writer.WriteLine("<script type=\"text/javascript\">");
      foreach (string line in renderedContents.ToString().Split('\n'))
      {
        writer.WriteLine("document.writeln('{0}');", jsEscapeText(line));
      }
      writer.WriteLine("</script>");
    }

    private string jsEscapeText(string value)
    {
      if (string.IsNullOrEmpty(value)) return value;

      // This, too, could be optimized to replace character
      // by character; but this gives you an idea of
      // what to escape out
      return value
        /* \ --> \\ */         .Replace("\\", "\\\\")
        /* ' --> \' */         .Replace("'", "\\'")
        /* " --> \" */         .Replace("\"", "\\\"")
        /* (newline) --> \n */ .Replace("\n", "\\n")
        /* (creturn) --> \r */ .Replace("\r", "\\r")
        /* string */[...]

A Workaround For VirtualPath Weirdness With Custom VirtualPathProviders

Fri, 11 Jan 2008 23:16:00 GMT

[[ Meta-blogging: as you may have noticed from the name/description change (and of course, this article) I've decided to shift the focus of this blog back to the "front lines" of Microsoft/.NET development technologies. All other rants and ramblings will go to Alex's Soapbox over at The Daily WTF ]]

If you've ever come across this error...

The VirtualPathProvider returned a VirtualFile object with VirtualPath set to '/global/initrode/embeddedControl.ascx' instead of the expected '//global/initrode/embeddedControl.ascx'

... then chances are you're implementing a VirtualPathProvider in order to serve up embedded Page/Control resources or something fun like that. Let's just hope you're not serving pages from a ZIP file. And if you have no idea what a VirtualPathProvider is, then do check out that MSDN article I linked to get an idea.

The reason behind this error is identified in Microsoft Bug #307978: ASP.NET is erroneously replacing page compilation errors with the bad virtual path error. While ensuring that your virtual-pathed page will compile is a sure-fire way to fix the error, finding the compilation errors can be a bit of a pain.

Fortunately, there's a pretty easy workaround that will let you see some of the compilation errors. First, make sure that your custom VirtualPathProvider has a static method that can determine whether a given virtualPath is on disk or is virtualized (e.g. an embedded resource). Next, create an IHttpHandlerFactory that inherits PageHandlerFactory, overrides the GetHandler method, and wraps the call to base.GetHandler() in a try/catch. In the event that an exception occurs, simply determine if the request's virtual path is "virtual" (through that static method) and, if so, rethrow the exception with only the error message. In other words,

public class MyPageHandlerFactory : PageHandlerFactory
{
    public override IHttpHandler GetHandler(HttpContext context, string requestType, string virtualPath, string path)
    {
        try
        {
            return base.GetHandler(context, requestType, virtualPath, path);
        }
        catch (Exception ex)
        {
            //TODO: ASP.NET 2.0 Bug Workaround
            // There is an error generating a stack trace for VirtualPathed files, so
            // we have to give up our stack trace if it's a resource file

            if (EmbeddedResourceVirtualPathProvider.IsResourcePath(virtualPath))
                throw new Exception(ex.Message);
            else
                throw;
        }
    }
}
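
The IsResourcePath check referenced above isn't shown in the post; a minimal sketch of what such a static helper might look like – assuming the provider keeps a registry of the virtual paths it serves from embedded resources – is below. The names and details beyond IsResourcePath are illustrative, not from the original:

  using System.Collections.Generic;
  using System.Web;
  using System.Web.Hosting;

  public class EmbeddedResourceVirtualPathProvider : VirtualPathProvider
  {
      // Hypothetical registry of app-relative virtual paths that are served from
      // embedded resources rather than from files on disk.
      private static readonly List<string> resourcePaths = new List<string>();

      public static void RegisterResourcePath(string virtualPath)
      {
          resourcePaths.Add(VirtualPathUtility.ToAppRelative(virtualPath).ToLowerInvariant());
      }

      // True when the path is "virtualized" (an embedded resource), false when it
      // should be served from the file system as usual.
      public static bool IsResourcePath(string virtualPath)
      {
          return resourcePaths.Contains(VirtualPathUtility.ToAppRelative(virtualPath).ToLowerInvariant());
      }

      // The FileExists/GetFile overrides that actually serve the embedded resources
      // are omitted; this sketch only covers the static check used by the factory.
  }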

Since we're only wrapping the GetHandler method (as opposed to the IHttpHandler's ProcessRequest method), the only errors you'll see wrapped like this are pre-ProcessRequest errors (e.g. compilation errors). And while this won't give you the full stack trace, at least you'll see something like this instead:

http://server//global/initrode/embeddedControl.ascx(5): error CS1002: ; expected

Coghead: Web Applications for Dummies by Dummies

Mon, 25 Sep 2006 22:53:00 GMT

There's been a buzz going around about a new web startup called Coghead. Heralded by Business 2.0 as one of the "innovations that could reorder entire industries," Coghead is led by former Red Hat executive Paul McNamara and Extricity founder Greg Olsen. El Dorado Ventures, a Venture Capitalist firm, recently invested $2.3M in the company. According to McNamara, Coghead will "enable nonprogrammers to rapidly create their own custom business software."

In other words, Coghead is the b*stard child of 4GL and Web 2.0; it's the end result of a careful mixture of myth and buzzword; and, I'm sure, it will play an important part in several upcoming articles on The Daily WTF. Before we get into the details of Coghead, let's take a look back at the world of 4GL.

There's a bit of ambiguity surrounding what is and isn't a 4GL (Fourth Generation Language), so I'll stick with James Martin's characterization from his 1982 book, Application Development Without Programmers. The book's title should give you a good enough understanding of the goal of a 4GL: the ability to develop complex custom business software through the use of simple-to-use tools.

In the quarter-century since Application Development Without Programmers debuted, let's consider how far we've come on this goal: dBase, Clipper, FileMaker, and Access. It's a pretty far cry from what James Martin and the other 4GL dreamers had in mind. Sure, Jane in Accounting could easily use Microsoft Access to create custom software to manage her music collection, but ask her to develop the General Ledger in Access and you'll find your books in worse shape than Enron's and Tyco's combined.

There's a simple, common-sense reason why custom business software will always require programmers. It's the same reason that brickwork will always require a mason and why woodwork will always require a carpenter. No matter how complex and versatile a tool is, an experienced builder is always required to create something unique.

Like many other common-sense principles, the "software machine" is one that some programmers don't get. Be it with The Tool or The Customer Friendly System, these programmers believe they are so clever and so intelligent that they can program even themselves into obsolescence. Some businesses don't get it, either. But they will eventually pay the price: the "mission critical" software they developed for themselves in Microsoft Access will become their albatross, costing time, opportunity, and, eventually, lots of money for real programmers to fix their mess.

And this is where we return to Coghead. You see, Coghead is merely another example of this arrogant ignorance, but this time it's web-based and enterprisey. That's right; unlike its desktop counterparts, Coghead is targeted towards big businesses, not small businesses and hobbyists: "anyone who can code a simple Excel macro should have little trouble using Coghead to create even sophisticated enterprise apps like logistics trackers, CRM programs, or project management systems."

Even with a liberal application of AJAX, the fact that Coghead is web-based means that it's less functional and offers a poorer experience than its desktop equivalent. Not only that, but all data and business logic are left in the hands of a third party. That, in and of itself, is a good enough reason to avoid Coghead.

Twenty years ago, if you developed a dBase application, you "owned" it and knew that so long as you could find an MS-DOS 2.0 disk, your data and business logic were safe. If you developed a Coghead application, what would happen if Coghead went out of business? What if they upgrade their system and it breaks your application? What if you forget to pay the subscription fee and they delete your application? It just isn't worth the risk.

Some might argue that this negative analysis is a knee-jerk reaction to a threat. After all, Business 2.0 claims that Coghead will be a disruptor for "initially, custom software developers, but potentially almost all software-tool makers." But[...]

Stop Using Enterprise Manager! (Use DDL Instead)

Wed, 03 May 2006 20:43:00 GMT

Of all the tools that ship with SQL Server, Enterprise Manager is by far the most feature-packed and widely used. Nearly every SQL Server developer is familiar with Enterprise Manager. They are comfortable using the wizards and GUI to do everything from creating a new table to adding a scheduled job. But as a project grows to encompass more developers and environments, Enterprise Manager becomes a detriment to the development process.

Most applications exist in at least two different environments: a development environment and a production environment. Promoting changes to code from a lower level (development) to a higher level (production) is trivial. You just copy the executable code to the desired environment. Database changes are another story. Consider a change as simple as adding a new Shippers table and relating it to the existing Orders table. To promote that change with Enterprise Manager, you have to repeat every click in every environment:

  • Click on the desired database.
  • Click on Action, New, then Table.
  • Add a column named "Shipper_Id" with a Data Type "char", give it a length of 5, and uncheck the "Allow Nulls" box.
  • In the toolbar, click on the "Set Primary Key" icon. Then you skip 22 steps.
  • In the toolbar, click on the "Manage Relationships…" button.
  • Click on the New button, and then select "Shippers" as the Foreign key table.
  • Select "Shipper_Id" on the left column and "Shipper_Id" on the right column. Skip the remaining steps.

Not only is this process tedious, but you're prone to making errors and omissions when using it. Such errors and omissions leave the higher-level and lower-level databases out of sync.

Fortunately, you can use an easier method to maintain changes between databases: Data Definition Language (DDL). The change described in the previous example can be developed in a lower-level environment and migrated to a higher-level environment with this simple script:

  CREATE TABLE Shippers
  (
    Shipper_Id CHAR(5) NOT NULL
      CONSTRAINT PK_Shippers PRIMARY KEY,
    Shipper_Name VARCHAR(75) NOT NULL,
    Active_Indicator CHAR(1) NOT NULL
      CONSTRAINT CK_Shippers_Indicator
        CHECK (Active_Indicator IN ('Y','N'))
  )
  GO

  ALTER TABLE Orders
    ADD Shipper_Id CHAR(5) NULL,
        CONSTRAINT FK_Orders_Shippers
          FOREIGN KEY (Shipper_Id)
          REFERENCES Shippers (Shipper_Id)
  GO

You can manage all the DDL scripts with a variety of different techniques and technologies, ranging from network drives to source control. Once a system is put in place to manage DDL scripts, you can use an automated deployment process to migrate your changes. This process is as simple as clicking the "Deploy Changes" button.
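
As an illustration of how simple that automated step can be, here is a rough sketch of a deployment runner that applies every DDL script in a folder, in order. The folder layout, the connection string handling, and the one-batch-per-file assumption are mine, not the article's:

  using System;
  using System.Data.SqlClient;
  using System.IO;

  class DeployChanges
  {
      static void Main(string[] args)
      {
          // Assumes scripts are named so that alphabetical order equals deployment
          // order (e.g. 0001_create_shippers.sql, 0002_alter_orders.sql) and that
          // each file contains a single batch (no GO separators).
          string scriptFolder = args[0];
          string connectionString = args[1];

          using (SqlConnection connection = new SqlConnection(connectionString))
          {
              connection.Open();
              string[] scripts = Directory.GetFiles(scriptFolder, "*.sql");
              Array.Sort(scripts);

              foreach (string script in scripts)
              {
                  Console.WriteLine("Applying {0}...", Path.GetFileName(script));
                  using (SqlCommand command = connection.CreateCommand())
                  {
                      command.CommandText = File.ReadAllText(script);
                      command.ExecuteNonQuery();
                  }
              }
          }
      }
  }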

The perceived difficulty of switching changes from Enterprise Manager to DDL scripts is one of the biggest hurdles for developers. The Books Online don't help change this perception. A quick look at the syntax for the CREATE TABLE statement is enough to discourage most developers from using DDL.

Enterprise Manager helps you with this transition. Before making database changes, Enterprise Manager generates its own DDL script to run against the database. With the "Save Change Script" button, you can copy the generated DDL script to disk, instead of running it against the database.

But as with any code generator, your resulting T-SQL script is far from ideal. For example, having Enterprise Manager generate the DDL required for the change described in the example involves six different ill-formatted statements. What do you do now? You can add a bit of refactoring to the generated script, and the result looks almost identical to the example script I showed earlier. After a few more rounds of generating and refactoring, you'll want to transition straight to DDL, and never look back at tiresome database development within Enterprise Manager.

Holy Crap: I'm an Official MVP for MS Paint

Mon, 14 Nov 2005 03:47:00 GMT


There are few emails that one will receive in his lifetime that will render him completely speechless. This past weekend, I received one such email. Its subject read Congratulations on your MVP Award!

I struggle with the words to describe how elated I am to be chosen for this award. Sure, I’ve worked my butt off in microsoft.public.accessories.paint, helping both newbies and vets solve their problems. But I never expected this. For me, it’s always been about my love of the Paint, and sharing my knowledge and expertise of Paint with the world.

I don’t want to bore you with me patting myself on the back, so I’ll just use the rest of this space to share my top three tips and tricks. I’ve got plenty more, so if you ever need some help with Paint, don’t hesitate to ask this MVP!

Why are some of my edges jagged?
You’ve discovered one of the dark secrets of digital art: pixelation. Because everything in your Paint image is made of small square blocks, the only way to make a diagonal line or a curve is to arrange the pixels in “steps”; these very steps give the image that ugly, jagged appearance.


Fortunately, we can help smooth out the jagged edges with a technique called anti-aliasing. The trick is to make the jagged edge an in-between color of the two bodies of colors. For our red circle and white background, all we need is pink, applied with the spray paint can tool.


And like magic, the jagged edge is no more!

How do I do shadows?
Shadows in Paint are incredibly easy to do:
1) Draw the shape you want to draw, but instead use black
2) Draw the shape you want to draw, using the colors you really want to use, but draw it at an angle slightly away from the black shape


Look ma, a shadow!

How can I make realistic looking Hair?
This is one of the more difficult things to accomplish in Paint. But it’s certainly doable. First, you need to figure out what hair style you want to use. Once you figure that out, it’s just a matter of using the right tool.

Believe it or not, this is a simple matter of using the wonderfully handy spray can tool. Just pick the hair color, and go crazy!!!

This hairstyle is so ridiculously simple you’ll wonder why more cartoon characters aren’t bald. Simply apply the ellipse tool twice, above each ear, and you’ve got yourself a bald guy!

Side Part
When you want to make your character look neat and orderly, only the polygon tool will do. Here’s something funny: I like to part my own hair on the left, but draw it parted on the right. Funny, see, I told you!

Bed Head
Oh no, caught red handed without a comb! You can easily achieve this look with the use of the paint brush tool. Don’t go too crazy, it’s pretty easy to slip and go through an eye.

Be sure to congratulate Jason Mauss as well. He was awarded this year’s MSN Messenger MVP.

Express Agent for SQL Server Express: Jobs, Jobs, Jobs, and Mail

Thu, 10 Nov 2005 15:52:00 GMT

UPDATE: My apologies, but with the advent of relatively inexpensive commercial solutions available, I've decided to suspend this project indefinitely. If I do need a solution for myself, I may take it up again. But until then, I would recommend getting a commercial version ( is one source) or using the Windows Task Scheduler to run batch files.

UPDATE 2: I no longer "officially" recommend Vale's agent; though I've used the product for well over a year, they were completely non-responsive (via phone or email) to a showstopper bug in their product (it stopped working after 24 hours when a job was set to run every 5 minutes). My workaround was to have a Windows Task stop then start Vale's SQL Agent service. Also, as a commenter noted, a free version ( is out there - I have not used this, however.

I was pretty excited to learn about SQL Server Express Edition. It is a stripped-down version of SQL Server that is free to get, free to use, and free to distribute. This is great news if you're in the business of building small- and mid-sized database applications but not in a position to fork over five grand for the full edition. A free, stripped-down version of SQL Server is nothing new; after all, MSDE filled this niche for the previous version of SQL Server. One thing that sets SQL Server Express apart is its branding and accessibility. Not only does Express "feel" like SQL Server, it's easy to install, use, and administer. MSDE did not have these qualities, which kept it out of the reach of many would-be database developers.

The limitations imposed by SQL Server Express do not hinder most small- and mid-sized applications. A single processor and a gigabyte of RAM is enough to run most of these applications, and it certainly takes a *lot* of data to fill a database up to four gigabytes. One thing that makes Express a deal-killer is the lack of SQL Agent, which runs scheduled jobs and automates backups. That's important in just about all sizes of applications. I'm developing an application that will fill this functionality gap: Express Agent. I was hoping to have this complete before the launch of SQL Server Express, but other priorities prevented this from happening.

Express Agent strives to replace and improve upon the SQL Agent that was left out. Like the SQL Agent, Express Agent runs as a service. However, Express Agent can also be "plugged in" to a hosted web application as an HttpHandler. This allows Express Agent to run as a background thread, running jobs and sending email as needed. The jobs are modeled in a similar fashion to the way SQL Server handles them. A job contains a number of tasks (SQL scripts) that are run depending on whether the previous task was successful (no errors) or unsuccessful (errors). Jobs can also be scheduled on a one-time, idle, start-up, and recurring basis. The recurring schedule is handled much the same way SQL Server handles jobs as well.

Express Agent also adds database-email capability to Express Edition. Though not as complex as SQL Server's implementation, this should cover just about any emailing you'd need to do from within your stored procedures. The mail feature is also used to send success/failure notifications after jobs have been run.

It's difficult for me to show progress, since much of the work I've done is the "behind the scenes" stuff. I'm still working out the UI, HttpHandler, and some other issues, but so far it works great on its own, so long as jobs are added via the stored procedures.

Nonetheless, here are a few screen shots from the Jobs Manager UI ... If this app looks like it may be of interest to you, I'd appreciate your feedback. If you're interested in lending a hand with some of the remaining portions, I'd really appreciate that, too. I plan on offering this completed product for free, but most likely not open sour[...]
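
The "plugged in to a hosted web application" idea described above – a job runner living on a background thread inside the ASP.NET worker process – could be sketched roughly like this. The class, method, and schedule names are hypothetical; this is not Express Agent's actual code:

  using System;
  using System.Threading;
  using System.Web;

  // Registered in web.config as an HttpHandler; the first request lazily starts
  // a background timer that polls for due jobs while the application stays alive.
  public class ExpressAgentHandler : IHttpHandler
  {
      private static Timer jobTimer;
      private static readonly object syncRoot = new object();

      public bool IsReusable { get { return true; } }

      public void ProcessRequest(HttpContext context)
      {
          lock (syncRoot)
          {
              if (jobTimer == null)
              {
                  // Poll once a minute; RunDueJobs would read the job tables and
                  // execute any tasks whose schedule has come due (hypothetical).
                  jobTimer = new Timer(delegate { RunDueJobs(); }, null,
                                       TimeSpan.Zero, TimeSpan.FromMinutes(1));
              }
          }
          context.Response.Write("Express Agent is running.");
      }

      private static void RunDueJobs()
      {
          // Placeholder: load job definitions from the database and run them.
      }
  }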

Gettin' Down in Detroit: The 2005 Launch Party

Wed, 09 Nov 2005 18:46:00 GMT

I saw that Jason Mauss wrote about his experience at the San Francisco 2005 Launch Party, so I thought I'd share my experience at the Detroit venue. Because it wasn't the "real" Launch Party, we didn't have anything fancy like a speech from Steve Ballmer, songs performed by AC/DC, or appearances by the guys from Orange County Choppers. But it was still a good time. Please bear with my lack of actual photographs, as I did not have the foresight to bring a camera.

Although the "doors" to the event opened at 7:30AM, the insatiable desire for inexpensive liquor required a stop at the duty-free shop first. I loaded up on Courvoisier, Chambord Royale, and many other fine spirits, saving easily $80 - $100. The US Customs agent even waived the "required" $2.85 duty per liter. He was surprisingly much nicer than the Canadian Customs agent, who demanded a birth certificate, a certificate certifying the birth certificate, the presence of my parents to certify the certified birth certificate, and a certificate certifying my parents are really my parents. Either that or a passport.

Tax-free Booze

The event was at the Renaissance Center, located in the heart of downtown. Despite being the tallest building in 100 square miles, it was surprisingly difficult to find, especially if you're unfamiliar with Detroitonese, the language of the locals. They call it the "Ren Cen" and I'm surprised that any out-of-towner would find it.

The Ren Cen

Arriving there at 8:30, it was a bit disappointing to have to shell out $12.00 for parking. But such a high price does offer us protection from the Linux Crashers, who have a hard enough time getting a car to go downtown, let alone money to pay for parking. You know who I'm talking about, right? Those basement-dwelling fanboys who go to Microsoft conferences armed with Ubuntu discs and try to dissuade people from attending because the carpet is not open source. They actually used to protest the building being "closed source," until someone pointed them to the city planning department for architectural diagrams.

Coming in so late offered one other large disadvantage: missing out on many of the cooler freebies given out by the vendors. Here's a quick classification/rarity guide on the Detroit Launch Event vendor free stuff:

  • Laser Pen (Rare) - Offered by Berbee, this was by far the coolest give-away. Only a few lucky attendees scored this combination pen/laser pointer. Surprisingly, no one abused these devices during the sessions.
  • Blinking Yo-yo (Rare) - I somehow managed to get one of these. It was really cool until I realized it was not a "sleeper" yo-yo, so I gave it away to a colleague.
  • Blinking HP Necklace (Uncommon) - About a third of the attendees had these, leading to two simultaneous yet conflicting feelings: "those are incredibly tacky" and "I wish I had one."
  • Quest Software Weeble (Uncommon) - I don't know what these were actually called, but it was just a yellow cotton ball with paper feet and plastic eyes glued on. Despite having an uncommon rarity, no one really wanted these.
  • Intel Mints (Uncommon) - These were in a neat, small metal container. They are borderline rare, mostly because you had to actually talk to the rep to get one. They were not just lying out like everything else.
  • Pens (Common) - A handful of vendors were giving these away, making for a good variety of pens. All, however, were cheap and plastic.
  • Post-It Pads (Common) - Surprisingly, only one vendor was giving these away. Probably a good thing, just one less thing to end up in the landfill after the event.
Fortunately, there was plenty of free continental breakfast food. A good variety of bagels, danishes, and other pastries. The most notable thing from breakfast (and possibly even the day) was the itsy-bitsy jars of honey. They are about half the size of the mini-ja[...]

MySQL 5.0: Still A "Toy" RDBMS

Wed, 26 Oct 2005 14:30:00 GMT

"Ha," an email from a colleague started, "I think you can finally admit that MySQL is ready to compete with the big boys!" I rolled my eyes and let out a skeptical "uh huh." His email continued, "Check out Version 5. They now have views, stored procedures, and triggers." My colleague has been a MySQL fan since day one. He loves the fact that it's free and open source and could never quite understand why anyone would spend tens of thousands of dollars on something else. But then again, he has never really had an interest in understanding; data management just isn't his "thing." Thankfully, he readily admits this and stays far, far away from anything to do with databases, leaving all of that "stuff" to the experts. No less, he'll still cheers whenever there's a MySQL "victory." it is, after all, free and open source. Data professionals have traditionally relegated MySQL as a "toy" relational database management system (RDBMS). Don't get me wrong, it's perfectly suitable for blogs, message boards, and similar applications. But despite what its proponents claim, it has always been a non-choice for data management in an information-system. This is not a criticism of the "free open source" aspect of the product, but of its creators. The MySQL developers claim to have built a reliable RDBMS yet seem to lack a thorough understanding of RDBMS fundamentals, namely data integrity. Furthermore, they will often surrogate their ignorance with arrogance. Consider, for example, their documentation on invalid data [emphasis added]: MySQL allows you to store certain incorrect date values into DATE and DATETIME columns (such as '2000-02-31' or '2000-02-00'). The idea is that it's not the job of the SQL server [sic] to validate dates. Wait a minute. It's not the job of the RDBMS to ensure data are valid?!? One of the greatest revelations in information systems is that applications are not good at managing their data: they change too frequently are too-bug prone. It just doesn't work. That's the whole point of a DBMS; it ensures that data are typed and valid according to business rules (i.e. an employee can't have -65 dependents). But I digress. This is the 5.0 release. They've added views. They've added stored procedures. They've added triggers. Maybe things have changed. I thought I'd check out MySQL 5.0 first hand, so I visited their website and downloaded the product. I have to say, the installation process was painless. It even defaulted to and recommended "strict mode," which apparently disallows the invalid dates as seen above. This is certainly progress! After it installed, I fired up the MySQL prompt and started hackin' around. mysql> CREATE DATABASE ALEXP; Query OK, 1 row affected (0.00 sec) mysql> USE ALEXP; Database changed mysql> CREATE TABLE HELLO ( -> WORLD VARCHAR(15) NOT NULL PRIMARY KEY, -> CONSTRAINT CK_HELLO CHECK (WORLD = 'Hello World') -> ); Query OK, 0 rows affected (0.14 sec) Wow! I'm impressed! MySQL 5.0 has check constraints! Maybe I was wrong about these guys ... mysql> INSERT INTO HELLO(WORLD) VALUES('Hi World'); Query OK, 1 row affected (0.05 sec) Err … umm … wait a minute. You did just see me put that check constraint on the HELLO table, right? It's not a very complicated check, maybe, I did it wrong? 
mysql> SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS -> WHERE TABLE_NAME='HELLO'; +--------------------+-------------------+-----------------+--------------+------------+-----------------+ | CONSTRAINT_CATALOG | CONSTRAINT_SCHEMA | CONSTRAINT_NAME | TABLE_SCHEMA | TABLE_NAME | CONSTRAINT_TYPE | +--------------------+-------------------+-----------------+--------------+------------+-----------------+ | NULL | alexp | PRIMARY | alexp | hello | PRIMARY KEY | +--------------------+-------------------+-----------------+--------------+------------+----------------[...]

"When Should I Use SQL-Server CLR User Definied Types (UDT)?"

Thu, 20 Oct 2005 15:26:00 GMT

No one has asked me that question just yet, but with the release of SQL Server 2005 just around the corner, I'm sure a handful of people will. Unlike regular User Defined Types, CLR UDTs are a new feature of SQL Server 2005 that allows one to create a .NET class and use it as a column datatype. As long as a few requirements are followed, one can create any class with any number of properties and methods and use that class as a CLR UDT.

Generally, when a new feature is introduced with a product, it can be a bit of a challenge to know when and how to use that feature. Fortunately, with SQL Server's CLR UDTs, knowing when to use them is pretty clear: Never. Let me repeat that. Never. You should never use SQL Server CLR User Defined Types. I'm pretty sure that this answer will just lead to more questions, so allow me to answer a few follow-up questions I'd anticipate.

Why Not?
CLR UDTs violate a fundamental principle of relational databases: a relation's underlying domains must contain only atomic values. In other words, the columns on a table can contain only scalar values. No arrays. No sub-tables. And, most certainly, no classes or structures. Remember all the different levels of normalization? This is the first normal form, you know, the "duh" one.

This is a big thing. One can't just go and fudge a tried-and-true, mathematically-validated, theoretically-sound concept and "add and change stuff to it 'cause it'll be cool." Think of how much your car would love driving on a road made of stained glass blocks three years after it was built by an engineer who thought it'd look better. Deviating so grossly from the relational model will bring as much joy as a dilapidated glass road. Take Oracle's foray into relational abuse: nested tables. I don't believe that there has ever been a single, successful implementation of that abomination. Sure, it may work out of the box, but after a year or two of use and maintenance, it decays into a tangled mess of redundancy and "synch" procedures -- both completely unnecessary with a normalized relational model.

And if that doesn't convince you, just think of having to change that CLR UDT. How easy do you think it would be to add a property to the class representing a few million rows of binary-serialized objects? And, trust me, it won't be nearly as easy as you think.

But wouldn't I want to share my .NET code so I don't have to duplicate logic?
This is always a noble goal, but an impossible one. A good system (remember, good means maintainable by other people) has no choice but to duplicate, triplicate, or even-more-licate business logic. Validation is the best example of this. If "Account Number" is a seven-digit required field, it should be declared as CHAR(7) NOT NULL in the database and have some client-side code to validate it was entered as seven digits. If the system allows data entry in other places, by other means, that means more duplication of the "Account Number" logic. By trying to share business logic between all of the tiers of the application, you end up with a tangled mess of a system. I have illustrated this in the diagram below. As you can see, the diagram on the left is a nicely structured three-tier architecture. The system on the right is the result of someone trying to share business logic between tiers, making a horribly tangled mess. One can expect to end up with the latter system by using CLR UDTs.

Never?!? How can there never, ever be an application of CLR UDTs?
Though I may not live by the cliché "never say never," I do follow the "never say 'never, ever'" rule. The only possible time where one might possibly want to use this feature is for developing non-data applications. But therein lies the crux: why would one develop a non-data application using SQL Server? There are certainly better tools out there for what the non-data application ne[...]
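
To make the duplication argument concrete, here is a small sketch – mine, not the article's – of the same seven-digit "Account Number" rule restated in client-side code, mirroring the CHAR(7) NOT NULL declaration in the database:

  public static class AccountNumberValidator
  {
      // Duplicates the database rule: Account_Number CHAR(7) NOT NULL.
      // Every tier that accepts input ends up restating this same check.
      public static bool IsValid(string accountNumber)
      {
          if (string.IsNullOrEmpty(accountNumber)) return false;
          if (accountNumber.Length != 7) return false;

          foreach (char c in accountNumber)
              if (!char.IsDigit(c)) return false;

          return true;
      }
  }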

"What's the Point of [SQL Server] User-Defined Types?"

Fri, 07 Oct 2005 19:57:00 GMT

I'm asked that question every now and then from other developers who've played around in SQL Server Enterprise Manager and noticed the "User Defined Data Types" tab under their database. UDTs seem a bit strange and pointless because they do not allow one to define (as one might expect) a data structure with more than one related data element. A UDT consists simply of a name and a base type (INT, VARCHAR(6), etc). So why then would one use a UDT?

It all has to do with a fundamental concept of data known as "domains." I'm not referring to a dot-com type domain, but a domain in the mathematical sense: the set of values that a particular variable may take. For example, the domain of x for "f(x) = 1/x" is "x != 0". We don't get domains in C++ / C# / VB / etc; all we have are types (integer, date, string, etc). But we're used to not having this; everyone knows you need to check if "x != 0" before trying to divide by x. Imagine how much less coding (and related bugs) we'd have if trying to assign "0" to "x" threw an exception from the start, instead of in the middle. That's exactly what you can (and should) be doing with your databases.

When I start in on this same explanation with others, it turns out a lot of them don't quite understand what check constraints are. Basically, check constraints are used to define the domain of a column to ensure that a row can only contain valid data according to the business rules. For example, your Products table should have a check constraint on the Price column, requiring it to be greater than zero (this would cause an exception to be raised if you tried to update the price to zero). Here's another example of some code:

  CREATE TABLE [Transactions] (
    [Transaction_Id] INT IDENTITY(1,1) PRIMARY KEY NOT NULL,
    [Transaction_Type] VARCHAR(6) NOT NULL
      CHECK ([Transaction_Type] IN ('Debit','Credit','Escrow')),
    [Transaction_Amount] DECIMAL(4,2) NOT NULL
      CHECK ([Transaction_Amount] <> 0),
    [Reference_Code] CHAR(5)
      CHECK ([Reference_Code] LIKE '[A-Z][A-Z][A-Z][A-Z][A-Z]')
  )

Get the idea? Each column has a constraint to ensure only valid data is allowed in the table. This way, there is no way that [Reference_Code] could contain anything but a five character string of upper case letters. No need to write code to test it, no need to ever validate it (except maybe on the data entry form so that the user doesn't see an ugly exception message), and no need to assume that it will be anything but that.

Now, imagine that you wanted to have the same [Reference_Code] attribute throughout your database. You'd have to define that check constraint time and time again. If the rules ever changed, you'd need to change it in every place. That's where UDTs come into play. UDTs are the SQL Server implementation of domains. If you have a common data element that will be used throughout the system, then it should be a UDT. Account number, Username, Order Number, etc; all should be UDTs. When you define these types, you can easily apply rules (which are essentially just check constraints that apply whenever the type is used) to the type, and have them automatically enforced throughout the system.

It's really easy to do. I'll use the SQL 2005 syntax, but you can do the same things in 2000 using sp_addtype and sp_addrule:

  CREATE TYPE USERNAME FROM VARCHAR(20)
  GO

  CREATE RULE USERNAME_Domain AS
        @Username = LTRIM(RTRIM(@Username))
    AND LOWER(@Username) NOT IN ('admin','administrator','guest')
  GO

  EXEC sp_bindrule 'USERNAME_Domain', 'USERNAME'
  GO

And that's it. Now you can use the type throughout the database just as you normally would, and you'll never need to check or verify to make sure that someone slipped in an invalid value ...

  CREATE TABLE [User_Logons] (
    [Username] USERNAME NO[...]

Pounding A Nail: Old Shoe or Glass Bottle?

Wed, 25 May 2005 21:03:00 GMT

"A client has asked me to build and install a custom shelving system. I'm at the point where I need to nail it, but I'm not sure what to use to pound the nails in. Should I use an old shoe or a glass bottle? How would you answer the question? a) It depends. If you are looking to pound a small (20lb) nail in something like drywall, you'll find it much easier to use the bottle, especially if the shoe is dirty. However, if you are trying to drive a heavy nail into some wood, go with the shoe: the bottle with shatter in your hand. b) There is something fundamentally wrong with the way you are building; you need to use real tools. Yes, it may involve a trip to the toolbox (or even to the hardware store), but doing it the right way is going to save a lot of time, money, and aggravation through the lifecycle of your product. You need to stop building things for money until you understand the basics of construction. I would hope that just about any sane person would choose something close to (b). Sure, it may seem a bit harsh, but think about it from the customer prospective: how would you feel if your carpenter asked such a question? I find it a bit disturbing, however, that this attitude is not prevalent in software development. In fact, from what I can tell, it seems to be discouraged.  I've been participating in Usenet/forums/lists for a decade now, asking programming questions and helping out others who have questions of their own. If some one asks a question that demonstrates the complete absurdity of their design, I'll generally reply with my (quite candid) opinion on their design. To give you an idea what I'm talking about, here's something I remember seeing a while back (from memory). Subject: Aggregates HelpI have a table that stores test results for milling machines. Each test consists of N-runs conducted by a measurer going M-trials each. I have this information represented in a varchar column in the following format: "44:1,5,4;23:2,4,9;14:1,4,3".  When the column is read into the class, it is converted into a jagged array: ( (44,(1,5,4)), (23,(2,4,9)), (14,(1,4,3)) ) of runs (3 of them) and measurers(Ids of 44,23,14) and trials (1,5,4 and 2,4,9 and 1,4,3). One of the reports I have is a deficiency report, which predicts which machines may fail. To run this, I have a report class that loads up the appropriate tests and processes the information. However, this is taking longer and longer to run. I'm thinking that running it in a stored procedure will be quicker. I can figure out how to get an "array" in SQL with a table variable, but how do I make a jagged array? Any ideas? Some of the folks on in the list took it as a fun challenge, going back and forth with how deficiencies are calculated, and providing some incredibly brilliant ways of solving this problem. Not I, though. My response was something to the effect of … This is quite possibly the worst way you could store this data in a database. No, seriously. They had better ways of doing this over thirty years ago. You have created a horrible problem that is only starting to rear its ugly head. It will only continue to get horribly worse, costing your company more and more money to maintain. You need to drop everything your doing right now and take a trip to your bookstore to get a database book. I recommend INTRODUCTION TO DATABASE SYSTEMS by DATE, but at this point anything should do. For the sake of everyone who will maintain your future code, don't touch a database until you understand how big of a mess you've created. 
How do you think you would have responded to that post? Would you have taken the challenge to think about how to solve the problem or just take the opportunity to school the poster? If you say the former, then you probably think I'm a grumpy curmudgeon (more [...]

DNA, XP, SOA, ESB, ETC Are Dead; FAD is the Future

Thu, 05 May 2005 14:35:00 GMT

I've come across a truly revolutionary software development methodology called Front Ahead Design (FAD). Essentially, it's a fundamental paradigm shift over "traditional" and "neo" ways of building software. Not only does it surpass every software development methodology out there, it solves every problem there is to building software (and then some). But don't take my word for it, here are the Top Five fundamentals ...

I. Front Ahead Design
The essence of FAD is conveyed directly in its name: design your front-end/user-interface first, ahead of everything else. The customer could care less what's behind the scenes, so long as it looks good and does what it's supposed to. Deliver a working front-end first and then Do What It Takes to fill in the functionality gaps.

II. Do What It Takes
Other methodologies are great at delivering excuses. How many times have you heard (or have been told) "we can't do that here because it could throw off the whole design?" In FAD, you just do it (that would have been the bullet point, but Nike has it trademarked). To get it done, you Do What It Takes. Your customer will love you.

III. Code Light, Not "Right"
A traditional methodology calls for a complex framework with layer after layer of objects. In those methodologies, adding a simple value to a form can be a monumental task, requiring it to be added to every single layer. Does that sound right? Proponents of the other methodologies will tell you it is, but what about your customer? With FAD, you just Do What It Takes to add the functionality to your interface. No more.

IV. "Throw Away" Diagrams
Think of all the Visio diagrams you've drawn over the years. Sequence diagrams, context diagrams, flow charts, and so on. Was that really productive? Did your customer ever see any of those? Were those diagrams even relevant after the system was finally developed? Didn't think so. In FAD, all diagrams are made on a disposable medium. Whiteboards, napkins, even your forearms work. And there is no formal modeling language to battle with: just Do What It Takes to draw and explain your design to other developers.

V. Life Is Short (a.k.a. Patchwork)
The average software system has a life expectancy of seven years. No matter how "properly" the system is designed from the start, within the first year of its life, maintenance programmers unfamiliar with the complex architecture (and having no help from out-of-date documentation) will turn the system into a complete mess with bug fixes and change requests. In FAD, this isn't even a concern. We know the short life span of a system and develop every feature (from the interface) as a patch. Maintenance programmers can come in and Do What It Takes to add their patches. In FAD, we don't even try to stop the aging process. We encourage it.

There are quite a few more fundamentals, but that's all I've got time for today. I'm incredibly busy trying to finish a book on the topic (halfway through it!!). Hopefully, it'll make it as the first FAD book. I hear some other big names are in the first FAD book race, too.

I also came across some community sites:
  • a .TEXT blogging site, open for anyone to blog about FAD
  • a general informational site with articles, help, discussion, and tools
  • a gallery of successful projects (from small business to enterprise) that have successfully used FAD

They're all under construction, but I'm helping a lot with the blogging site, so let me know if you'd like to be one of the FAD bloggers. Next article: a comparison of the FAD design tools, including H/YPe and nFAD.[...]

Comicality Inflation

Tue, 03 May 2005 23:27:00 GMT

My post the other day (Computer Programmer Inflation) got me thinking of another type of inflation that I've observed over the years: Comicality Inflation. Like other types of inflation, it's not as if things have gotten funnier; it's just the terms we use to describe them.

Back in the day (and I'm talking the day; as in, before electricity and all that), if a friend sent us a letter that we found entertaining, we would simply compliment it when we penned our reply: Archibald, I must concede your quip about poor Rutherford's embarrassing gaffe was quite witty and remarkably entertaining.

Thankfully, those days are long gone, replaced by the ever-so-simple (though dreadfully less elegant) onomatopoeic interjection "haha" (and its cousins, "hehe", "hoho", etc). One of the great features of "haha" is that it is an expandojective: the intensity of its meaning is proportional to its length.

  • ha - mildly amusing, possibly causing a very soft chuckle
  • haha - funny, causing at a minimum a chuckle, but most likely a snort
  • ...
  • a-hahahahahaha - busting out in full-blown laughter requiring a pre-laughter breath (hence, the "a-")

By the way, if anyone knows the proper term for such a word, please advise. It's not as if "haha" is the only expandojective. There's "aaagh", "aarrg", "reeeaaaaly", "zzz", and so on. These magical fantastic words truly need their own category.

Anyway, it would seem that "haha" would be the perfect way of indicating the comicality of something you read. After all, it expands as things get funnier and it even has lots of room for personalization ("ho-ha-ha", "teehehe", etc). Alas, it is not; someone had to come along and create the acronym "LOL".

On its own, I don't think that LOL (Laugh Out Loud) is a terrible thing. After all, there's no official "ha" scale and it's quite hard to tell if "hahaha" means "I laughed out loud" or "I had a series of snickers." It's the abuse and extension of LOL that really offends me.

First, let's consider the abuse. How many times have you seen people reply with "LOL" and you know, for a fact, that what you wrote wasn't possibly that funny? And if you think exaggerative flattery is excusable, consider the typical teenage instant message conversation:

  princessGurl1924: heya becca, hows it goin, lol
  angelKitty77: omg, lol it's goin pretty good lol. u??
  princessGurl1924: lol good good!~!!

If I knew anyone who laughed that much in real life, I would suggest that they have some serious mental disorder. Or that they are high on some cocktail of illicit narcotics.

I know that I'm not alone in this observation. I've noticed that quite a few people have started to differentiate between LOL-funny and Laugh-Out-Loud-funny by replacing the latter with the absurdly redundant "LLOL" (Literally Laugh Out Loud). I swear, if we ever begin to figuratively LLOL, I will have no choice but to become a Luddite.

I'd have to say that the extensions of LOL offend me the most. Let's consider the most prevalent:

ROFL (Rolling On the Floor Laughing) - I've seen a lot of funny stuff in my day, and, let me tell you, I'm a laugher. But I have never seen anything that was so funny that I dropped to the floor in a fit of uncontrollable laughter. And if you ever have, it certainly wasn't while reading something on the internet.

LMAO (Laughing My Ass Off) - Some might say that this is not fair game because the colloquialism "laughing my ass off" had existed prior to the Internet. No less, I still consider the acronymization to be a direct result of LOL.

ROFLMAO (Rolling On the Floor Laughing My Ass Off) - A totally unnecessary expansion of an expansion. Since when was an uncontroll[...]

Computer Programmer Inflation

Fri, 29 Apr 2005 18:17:00 GMT

I was thinking the other day about the changes over the years in what we call people who write computer programs. Back in the day, we called these folks computer programmers. A rather fitting title, one would suppose, for a person who programs computers. But one would suppose wrong.

Shortly after computer programmer became the "official" title, someone, somewhere, somehow decided that it wasn't enough. After all, computer programmers do more than program: they analyze a problem and then write a program. They should, therefore, be titled programmer/analysts. One would suppose that such analysis is an implicit part of the job, much like how writers need to think about something before they actually write it. But one would suppose wrong.

Unlike the computer programmer title, programmer/analyst seemed to stick around for quite a while. In fact, it was only fairly recently that we all became software developers (or, developers for short). The change this time around was all about image; you gotta admit how much sexier developer sounds than programmer. Certainly one would suppose it's pretty hard to "sex up" an industry whose Steves outnumber its women (see the Steve Rule). But one would suppose wrong.

Believe it or not, developer is on its way out and we're in the middle of yet another title change. If you think about it, the problem with developer is that, if anyone asked what a "developer" is, you'd have to expand it to software developer. Software == Computers == Programming == Nerdy. We can't have that!

This is where the title solution developer comes in. We're the guys who you call when you have a problem. Doesn't matter what the problem is, we will develop a solution. Heck, we can even develop solutions (by programming a computer) for problems that don't exist. We're that good.

But where do we go from here? First, we need to reach the maximum level of ambiguity possible. I'm not an expert at coming up with job titles, but I suspect solution specialist is a step in the right direction. Of course, once we've gone all the way to one side, the only place we can really go is to the other extreme: a way more overblown/descriptive/nerdy-sounding name than needed. When solution specialist (or whatever) expires, I really hope the replacement will end with -ologist. I would really like to be an -ologist of some sort. You know you'd like it, too.

The "Steve Rule" Proved True Again

Fri, 22 Apr 2005 18:06:00 GMT

Yesterday on The Daily WTF, I mentioned something I called the Steve Rule: in a random sample of programmers, there will be more named Steve than there will be females.

What's ironic is that I didn't mean that as a joke. In my personal experience, every group of programmers with fifteen or more always had more guys with the same name than women. No, it's not always Steve, but that was the name when I first made the observation (three Steves, one woman).

I thought I'd test the "Steve Rule" once again using the Weblogs.Asp.Net bloggers. To do this, I looked at the OPML [XML], and used the titles of 258 blogs. There were 139 I skipped because either I couldn't tell the gender (no offense, Suresh Behera, et al.) or the title didn't have a name (e.g. Models and Hacks).
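
A rough sketch of that counting exercise – load the OPML, take the first word of each blog title as a first name, and tally – might look like the following; the file name and the title format are assumptions, and this isn't the script actually used:

  using System;
  using System.Collections.Generic;
  using System.Xml;

  class SteveRule
  {
      static void Main()
      {
          // OPML is just XML: each blog is an <outline> element with a title attribute.
          XmlDocument opml = new XmlDocument();
          opml.Load("weblogs-aspnet.opml.xml");

          Dictionary<string, int> nameCounts =
              new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);

          foreach (XmlNode outline in opml.SelectNodes("//outline[@title]"))
          {
              // Assume titles look like "Scott Guthrie's Blog"; take the first word
              // as the first name and skip titles that don't start with one.
              string title = outline.Attributes["title"].Value;
              string firstName = title.Split(' ')[0].Trim('\'');
              if (firstName.Length == 0) continue;

              int count;
              nameCounts.TryGetValue(firstName, out count);
              nameCounts[firstName] = count + 1;
          }

          foreach (KeyValuePair<string, int> pair in nameCounts)
          {
              if (pair.Value >= 4)
                  Console.WriteLine("{0}: {1}", pair.Key, pair.Value);
          }
      }
  }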

Of the blogs, I found six female bloggers:

For the guys, I was only able to find four different Steves. But there were seven different names (with slight variations) that outnumbered the girls:

  • Andrew (7)
  • Chris (7)
  • Dave (7)
  • Jason (7)
  • John (7)
  • Robert (7)
  • Scott (9).

So, for us Weblogs.Asp.Net bloggers, it looks like we are governed by the "Scott Rule." How about that?


Visual Studio Hacks

Mon, 04 Apr 2005 13:29:00 GMT

Thought I'd share my review of James Avery's latest book ...

If you spend a fair amount of time writing code in Visual Studio.NET, then Visual Studio Hacks will definitely improve your productivity in writing, debugging, and maintaining code. You'll find everything from using (and the practical uses of) built-in features (such as the Clipboard ring) to in-depth explanations of downloadable add-ins.

One thing I love about the book is how easy the hacks are to implement. For example, I've always been annoyed by the output from the build results window but never took the time to even think about changing it. It seemed to be just easier to deal with it than fix it. After reading Hack #35 (Modify the Build Output and Navigate the Results), I'm confident I can take a few minutes to write a macro and customize it as needed.

Worried about difficulty in writing a macro? Hack #51 explains it step-by-step.

Although I don't consider myself to be an Über-user of VS.NET, I do use the development environment quite a bit and know my way around fairly well. That said, Visual Studio Hacks has definitely augmented my knowledge of not only what VS.NET does, but what *good* 3rd party add-ins are available. I've had too many bad experiences with *bad* add-ins killing half-a-day of work after requiring a reinstall of VS.NET. The ones listed here work well and are documented well.

By the way, I have to say that I liked Hack #7 (Make Pasting into Visual Studio Easier) the most. But then again, it's pretty hard *not* to like a hack that you contributed to the book ;-).

CommunityServer: Custom Homepage With Thread Listing

Sun, 03 Apr 2005 03:36:00 GMT

In this Community Server tip, I'll describe how to make a quick & easy home page that displays posts from one or more forums on one page. If you're concerned that new visitors to your community will be turned off by the default listing of forums when they first arrive, then this just may be for you. I think you'll find that most of the time you spend on this tip will be in the actual design of the home page. This is what I do for my own site.

Step One - Layout
The standard default.aspx that comes with Community Server contains the title banner, site navigation tabs, and an area with some explanatory copy ("Community Server is a rich knowledge management and collaboration platform designed ..."). Keep as much or as little as you want; I opted to replace the main area with a two-column view: a left "side-bar" and a center posts column. Before you start coding, you may find it easier to put in dummy placeholder post text where your normal posts would appear.

Step Two - Default.aspx Header
At the top of the page, you'll want to make sure that you have the appropriate references to the Community Server components. Here is what I have at the top of my file. Some of these may or may not be already in the existing default.aspx:

  <%@ Page SmartNavigation="False" Language="VB" %>
  <%@ Register TagPrefix="CS" Namespace="CommunityServer.Controls" Assembly="CommunityServer.Controls" %>
  <%@ Import Namespace="CommunityServer.Galleries.Components" %>
  <%@ Import Namespace="CommunityServer.Blogs.Components" %>
  <%@ Import Namespace="CommunityServer.Components" %>
  <%@ Import Namespace="CommunityServer" %>
  <%@ Import Namespace="CommunityServer.Discussions.Components" %>

Note that I have Language="VB" at the top. Although Community Server is entirely C#, I prefer to code in VB and will whenever I get a chance. Ahh, the power of .NET.

Step Three - Add The Code
Just place this block of code under your page header Import statements. The code is very basic. It declares two ThreadSets (a group of threads within a forum), fills them when the page loads, and then binds the thread sets to the repeater (which we will build in Step Four). Since the GetThreads() arguments are a bit intimidating, I'll explain them one-by-one, in order:

  • forumID - The ID of the forum to get the threads for. For my site, 12 is the ID of the main forum and 18 is of the sidebar. You will definitely need to change this to the forum you'd like to display.
  • pageIndex - Because a forum can contain more than one "page" of threads, you need to specify which page to retrieve. We're getting the threads from the first page, which is 0.
  • pageSize - The number of threads to display per page. Both of my thread sets show five per page.
  • user - The user who is requesting the threads. If you wanted, you could use the CurrentUser. I chose to display the threads as an anonymous user would have seen them. I honestly think it makes little d[...]

What Exactly Is An Exceptional Circumstance, Anyway?

Fri, 01 Apr 2005 21:33:00 GMT

I think that there's a general consensus out there that Exceptions should be limited to exceptional circumstances. But being that "exceptional" is a rather subjective adjective, there's a bit of a gray area as to what is and isn't the appropriate use of Exceptions. Let's start with an inappropriate use that we can all agree to, and I can think of no better place to find such an example than The Daily WTF. Although that particular block of code doesn't exactly deal with throwing exceptions, it is a very bad way of handling exceptions. To the other extreme, exceptions are very appropriate for handling environment failure. For example, if your database throws "TABLE NOT FOUND," that would be the time to catch, repackage, and throw an exception.

But it's in the middle where there's a bit of disagreement. One area in particular I'd like to address in this post is exceptions to business rules. I mentioned this as an appropriate use before, but noticed there was quite a bit of disagreement with that. But the fact of the matter is, exceptions really are the best way to deal with business rule exceptions. Here's why.

Let's consider a very simple business rule: an auction bid may be placed if and only if (a) the bid amount is higher than the current bid, (b) the auction has started, and (c) the auction has not ended. Because these rules (especially b and c) are domain constraints (i.e. they restrict the range of acceptable values), they are best handled by the Preserver of Data Integrity (some call this the "database"). To accomplish this validation, we'll use a stored procedure with an interface like this:

  procedure Place_Bid
  (
    @Auction_Num char(12),
    @Bidder_Id int,
    @Bid_Amt money
  )

Now let's consider the layers of abstraction it actually takes to go from the user clicking the "Place Bid" button to the stored procedure being called:

  • PlaceBidButton_Click()
  • AuctionAgent.PlaceBid()
  • IAuction.PlaceBid()
  • --- physical tier boundary ---
  • IAuction.PlaceBid()
  • Auction.PlaceBid()
  • AuctionDataAgent.PlaceBid()
  • SqlHelper.ExecuteCommand()
  • --- physical tier boundary ---
  • procedure Place_Bid

Without using exceptions, it gets pretty ugly passing the message "Bid cannot be placed after auction has closed" from the stored procedure all the way back to the web page. Here are a few popular ways of doing this:

  • Return Codes - Have every method that could potentially fail return some sort of value. True/False is the most common but rarely provides enough information about the failure. Our PlaceBid function would need four different return codes: Success, Fail-LowBid, Fail-EarlyBid, Fail-LateBid. Of course, this technique fails when your method may need to actually return something other than the return code.
  • Class-Level Checking - For each of your classes, add a property called "LastError." This will contain an Error object that contains information about the last error (if one occurred). Simply check it after each operation.
  • Output Params - Add an out parameter to every method to pass back an ErrorObject. This is similar to the aforementioned technique except it is at the method level.

In all three cases, you need to manually "bubble" up the message from method to method. As you can imagine, this adds lots and lots of needless "plumbing" code intertwined with your business logic. And since it's handled at the method level, all it takes is one developer forgetting to return the right code from one method to break the chain. The proper way of handling the Bid exception is, naturally, with Exceptions.
When you raise the error in the stored procedure code, indicate that the message is a business rule exception intended to be displayed ba[...]
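On the .NET side of the tier boundary, the catch-and-rethrow could look something like the sketch below. The "~" message prefix, the BusinessRuleException class, and the data-access plumbing are all assumptions made for illustration, not anything prescribed by the post; only the AuctionDataAgent.PlaceBid name and the Place_Bid parameters come from the example above.

Imports System.Data
Imports System.Data.SqlClient

' Hypothetical exception type for business-rule violations, so upper tiers
' can tell "bid too low" apart from a genuine system failure.
Public Class BusinessRuleException
    Inherits ApplicationException
    Public Sub New(ByVal message As String)
        MyBase.New(message)
    End Sub
End Class

Public Class AuctionDataAgent

    Private ReadOnly connectionString As String

    Public Sub New(ByVal connectionString As String)
        Me.connectionString = connectionString
    End Sub

    Public Sub PlaceBid(ByVal auctionNum As String, ByVal bidderId As Integer, ByVal bidAmt As Decimal)
        Dim conn As New SqlConnection(connectionString)
        Dim cmd As New SqlCommand("Place_Bid", conn)
        cmd.CommandType = CommandType.StoredProcedure
        cmd.Parameters.AddWithValue("@Auction_Num", auctionNum)
        cmd.Parameters.AddWithValue("@Bidder_Id", bidderId)
        cmd.Parameters.AddWithValue("@Bid_Amt", bidAmt)
        Try
            conn.Open()
            cmd.ExecuteNonQuery()
        Catch ex As SqlException When ex.Message.StartsWith("~")
            ' The stored procedure raised a message flagged (by the assumed
            ' "~" prefix) as a business-rule violation; strip the marker and
            ' let it bubble all the way up to the page, which displays it.
            Throw New BusinessRuleException(ex.Message.Substring(1))
        Finally
            conn.Close()
        End Try
    End Sub

End Class

At the page level, the Click handler would then catch only BusinessRuleException and write its Message to a label, while letting any other exception propagate as a true failure.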

Hi, My Name is Alex, and I am a Resource Waster

Wed, 30 Mar 2005 18:10:00 GMT

Oleg "XML MVP" Tkachenko nailed it. I'm a "resource waster" and proud of it. I think the time has come where we can actually afford to trade off performance for maintainability. Although many Über-experts (Yourdon, Jackson, etc.) have been saying this for thirty years, I wasn't a true believer myself until a few years ago, when I realized exactly how fast hundreds of millions of clock cycles per second is. It's absurdly fast.

But nowadays we don't deal in hundreds of millions of clock cycles. Heck, you can't even give a few-hundred-megahertz computer to charity anymore. No, we're in the age of billions of cycles per second, twenty-four-stage pipelines (actually, I think that was old news as of the Pentium III), multi/dual-core processors, and so on. A new consumer desktop computer from Dell can perform a good five, maybe six billion floating point operations per second.

Let's put that in perspective. If you were to print out 5,000,000,000 floating point problems (482.224 x 127.8038) at fifty per page, the stack of paper would be 6.2 miles (10 km) tall*. That's the equivalent of twenty-six Empire State Buildings, floor to tip. All computed in a single second. Is that fast enough for ya?

Yet some folks, like Mr. Tkachenko, still worry about speed. In response to my post about how exceptions really aren't bad to use, Oleg countered, explaining exactly what happens when you throw an exception:

• Grab a stack trace by interpreting metadata emitted by the compiler to guide our stack unwind.
• Run through a chain of handlers up the stack, calling each handler twice.
• Compensate for mismatches between SEH, C++ and managed exceptions.
• Allocate a managed Exception instance and run its constructor. Most likely, this involves looking up resources for the various error messages.
• Probably take a trip through the OS kernel. Often take a hardware exception.
• Notify any attached debuggers, profilers, vectored exception handlers and other interested parties.

I have to admit, that does sound rather intimidating. I honestly had no idea that it did all that for just a lousy exception. But let's put it into perspective: billions of operations per second. Yes, this sounds like a lot of stuff for an exception. Yes, it is a lot. And yes, your computer can handle all that ten times over so freakin' fast you wouldn't know it happened.

Exceptions are the ideal way of dealing with exceptional circumstances. If your database stored procedure yells "Document status may not be altered while an approval is pending," exceptions are the simplest way to wrap that up, jump across your tiers, and display the message directly to the client. Any other way of doing it would be needlessly more complicated. So next time someone tells you "it's faster this way," put it in the "Gigahertz Perspective" and counter with "but, is it better?"

(*) 5,000,000,000 problems
  / 50 problems per page
  * 0.1 mm per page (standard for 20lb paper)
  / 1,000,000 mm per km
  = 10 km

Empire State Building = 1250 feet[...]

A Word from the "Wise": Don't Use Exceptions

Tue, 29 Mar 2005 15:12:00 GMT

A while back, I went to a local .NET event which had a number of presentations given on a variety of topics. I attended an intermediate-level talk presented by an out-of-town MVP that was entitled "Advanced .NET Programming," or something like that. One of the sub-topics discussed was error handling, to which our MVP had some rather simple advice: don't throw exceptions.

This seemed to be some rather peculiar advice, especially considering how exception handling is such an integral part of the .NET Framework, so I interrupted the speaker to ask for some clarification. He explained a bit further, saying essentially that exceptions kill application performance and that you should use return codes because they are faster.

For most attendees, such reasoning would suffice. But not this one. No, it still didn't make sense. After all, I've used exceptions quite extensively to pass messages from the database all the way to the client, and I've never noticed a performance problem. Could it be that I perceive things in fast-motion? Maybe I'm oblivious to the fact that all of my applications are sluggish because an hour to me is what a minute is to you? Something's not right. Either our MVP cares a bit too much about speed or my perception of time is completely out of whack. What better than a few objective tests to figure this out? So I created a console app and started coding ...

First, I needed some code that did nothing. Well, not nothing, but nothing important.

Sub DoNothingImportant()
    Dim x, y, z As Integer
    y = 8432
    x = 17751
    z = (y + 112) * (x - 2040) / (Math.PI)

    Dim s As String
    s = "blah blah blooh"
    s = s & "beep"
End Sub

This shouldn't take too long to run, right? Let's find out:

Console.WriteLine(Now().ToString("hh:mm:ss.fffffff"))
DoNothingImportant()
Console.WriteLine(Now().ToString("hh:mm:ss.fffffff"))

Output:
10:35:33.5930062
10:35:33.5930062

Sheesh, it goes so fast I can't even measure it. For fun, I thought I'd see how fast I could do the simple arithmetic in DoNothingImportant() myself. Unfortunately, manual decimal long division is not like riding a bike. As it turns out, I have no idea how to go about dividing 3.1428 into 134,234,784 by hand. I'm ashamed and I'm embarrassed. For this very reason, I have decided not to share with you how long the multiplication portion took me. Did I mention I have a minor in mathematics?

To make myself feel a little better, let's watch the computer choke on doing this arithmetic 100 times in a row!

Console.WriteLine(Now().ToString("hh:mm:ss.fffffff"))
For i As Integer = 1 To 100
    DoNothingImportant()
Next
Console.WriteLine(Now().ToString("hh:mm:ss.fffffff"))

Output:
10:37:05.5096764
10:37:05.5096764

Damn you, gigahertz! Back to the topic at hand, though. Let's go kill our performance with exceptions:

Sub ThrowException()
    Try
        Throw New Exception
    Catch ex As Exception
    Finally
    End Try
End Sub

Console.WriteLine(Now().ToString("hh:mm:ss.fffffff"))
ThrowException()
DoNothingImportant()
Console.WriteLine(Now().ToString("hh:mm:ss.fffffff"))

Output:
10:40:18.5456990
10:40:18.5456990

Hmm. Weird. It looked instant to me and the computer. OK, how about if we throw 10 exceptions?

Console.WriteLine(Now().ToString("hh:mm:ss.fffffff"))
For i As Integer = 1 To 10
    ThrowException()
    DoNothingImportant()
Next
Console.WriteLine(Now().ToString("hh:mm:ss.fffffff"))

Output:
10:46:11.7123974
10:46:11.7123974

Sheesh. This computer is frickin' amazing. OK, maybe it's not exceptions that kill performance, but nested exceptions. Let's find out: [...]
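A nested-throw test in the same spirit might look something like the sketch below. This is a reconstruction for illustration only, not the original code, and the depth of 50 is an arbitrary choice.

' Throw from the bottom of a chain of nested calls, re-throwing at each
' level so the exception has to unwind through every Try/Catch handler.
Sub ThrowNestedException(ByVal depth As Integer)
    If depth = 0 Then
        Throw New Exception
    End If
    Try
        ThrowNestedException(depth - 1)
    Catch ex As Exception
        Throw   ' re-throw so the next level up has to catch it too
    End Try
End Sub

Console.WriteLine(Now().ToString("hh:mm:ss.fffffff"))
For i As Integer = 1 To 10
    Try
        ThrowNestedException(50)   ' 50 nested frames per throw
    Catch ex As Exception
        ' swallow at the top, just like the earlier tests
    End Try
    DoNothingImportant()
Next
Console.WriteLine(Now().ToString("hh:mm:ss.fffffff"))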