
Mathew Nolton Blog

Software dementia unleashed...


sp_lock2 and sp_lockcount

Fri, 15 May 2009 17:32:00 GMT

Here are two helpful procedures for SQL Server database work. The code for both is based on the system stored procedure sys.sp_lock that comes built into the master database. Build both of these in your master database if you want to use them in all databases across your database instance.

sp_lock2 -- provides the object_name for the object_id normally presented by sp_lock.
sp_lockcount -- provides a count of the locks. Useful when doing ETL processes and you are monitoring locks but do not care to receive the tens of thousands of lock rows coming back...when all you want to know is the count.

sp_lock2 code:

USE [master]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- ==================================================================
-- Author:      Mathew Nolton (based on sp_lock in sqlserver)
-- Create date: 05/15/2009
-- Description: Returns lock information similar to sp_lock but
--              with additional object name information.
-- ==================================================================
create procedure [dbo].[sp_lock2]
    @spid1 int = NULL,  /* server process id to check for locks */
    @spid2 int = NULL   /* other process id to check for locks */
as
    -- set options
    set nocount on
    set transaction isolation level read committed

    -- do the work.
    select
        convert (smallint, [req_spid]) as [spid],
        [rsc_dbid] as [dbid],
        db_name([rsc_dbid]) as [dbname],
        [rsc_objid] as [ObjId],
        object_name([rsc_objid]) as [ObjName],
        [rsc_indid] as [IndId],
        object_name([rsc_indid]) as [IndName],
        substring ([v].[name], 1, 4) as [Type],
        substring ([rsc_text], 1, 32) as [Resource],
        substring ([u].[name], 1, 8) as [Mode],
        substring ([x].[name], 1, 5) as [Status]
    from [master].[dbo].[syslockinfo] [s]
    inner join [master].[dbo].[spt_values] [v] on [s].[rsc_type] = [v].[number]
    inner join [master].[dbo].[spt_values] [x] on [s].[req_status] = [x].[number]
    inner join [master].[dbo].[spt_values] [u] on [s].[req_mode] + 1 = [u].[number]
    where [v].[type] = 'LR'
      and [x].[type] = 'LS'
      and [u].[type] = 'L'
      and ( @spid1 is null or [req_spid] in (@spid1, @spid2) )
    order by [spid]

sp_lockcount code:

-- ==================================================================
-- Author:      Mathew Nolton (based on sp_lock in sqlserver)
-- Create date: 05/15/2009
-- Description: Returns count of locks.
-- ==================================================================
create procedure [dbo].[sp_lockcount]
    @spid1 int = NULL,  /* server process id to check for locks */
    @spid2 int = NULL   /* other process id to check for locks */
as
    -- set options
    set nocount on
    set transaction isolation level read committed

    -- do the work.
    select count(*) from [master].[d[...]

SqlScript for dropping a set of procedures, user-defined functions or views

Thu, 25 May 2006 18:37:00 GMT

I have been wanting to write some posts lately about productivity tools and tricks. My last post was a VS add-in for updating the reference paths for all "selected" projects in your Visual Studio solution. This post is a SQL script that enables you to drop a series of stored procedures, UDFs or views. I thought about adding tables, but then I would have to handle keys/constraints as well....maybe next time.

PRINT N'Dropping procedures/functions...'
GO
PRINT N''
GO
declare
    @name varchar(256),
    @type varchar(2),
    @sql nvarchar(1000),
    @printText varchar(1000)

-- declare cursor
declare localCursor cursor local fast_forward read_only for
    select [name], [type]
    from [sysobjects]
    where [type] in ('P','TF','FN','V')
      and [name] in
      (
          '[Put procedure/view or udf name here]',
          '[Put another procedure/view or udf name here]'
      )
    order by [type],[name]

-- now open the cursor
open localCursor
fetch next from localCursor into @name, @type

-- now loop through, building and executing the drop statements.
while @@fetch_status=0
begin
    if( @type = 'P' )
    begin
        set @printText = 'Dropping procedure ' + @name + ' ...'
        set @sql = 'drop procedure ' + @name
    end
    else if ( @type in ('TF','FN') )
    begin
        set @printText = 'Dropping function ' + @name + ' ...'
        set @sql = 'drop function ' + @name
    end
    else if ( @type in ('V') )
    begin
        set @printText = 'Dropping view ' + @name + ' ...'
        set @sql = 'drop view ' + @name
    end

    print @printText
    exec sp_executesql @sql
    fetch next from localCursor into @name, @type
end
close localCursor
deallocate localCursor

HTH,
-Mathew Nolton[...]

Add-In for updating all reference paths in a solution

Thu, 25 May 2006 12:35:00 GMT

Update Project References Add-In 

Visual Studio doesn't natively support the ability to update all selected projects within a solution with the same reference paths. This is important (at least to me) if you have to download a different version of your source code from your source control system. Since you typically do not check your project preferences into source control, you are stuck with the mundane task of selecting a project, setting the reference path, selecting the next project, and so on.

This add-in enables you to select a given set of projects within your solution, set your reference paths and make updates across all selected projects. For example:

You have a couple of options:

  1. Completely Replace Existing References...enough said
  2. Do nothing if a reference exists (and append to end of list if it doesn't)
  3. Prepend to List...(note: if the reference already exists, it will be moved).
  4. Append to List...(note: if the reference already exists, it will be moved).

To use the add-in, copy the files UpdateReferencePaths.AddIn and UpdateReferencePaths.dll to your add-in directory. Typically, this is located at: C:\Documents and Settings\[Your User Name Here]\My Documents\Visual Studio 2005\Addins

Get the Add-In and the source code here:

-Mathew Nolton

Using skmRss.RssFeed

Tue, 27 Dec 2005 14:15:00 GMT

I have been evaluating Scott Mitchell's RssFeed control for a new version of my website (it's not done yet) for my consulting company, which focuses on the Cable Sector of the Telecommunications Industry, and I wanted to display relevant information for both Web Services and the Cable Sector. The control is very nice (and free). During my investigation I wanted to perform a couple of actions that were not quickly apparent but still quite doable once I dug a bit deeper. Specifically, I wanted to be able to:

  1. Keep all items to a single row in the control (without wordwrap).
  2. Display tooltips.

To put all items in a single row, I noticed that it has a boolean called 'Wrap' for an ItemStyle; however, I could not get this to work, so I took another approach: I modified the RssDocument to shorten the title so it displays correctly. Here is how I did it.

First, the code to wire it up:

cableNewsFeed.DataSource = getList( [DataSource], __maxTitleLength, cableNewsFeed.MaxItems );
cableNewsFeed.DataBind();

Next, the code to create the list:

protected RssDocument getList( string dataSource, int maxTitleLength, int maxFeedLength )
{
    // create the engine and get the xml document
    RssEngine engine = new RssEngine();
    RssDocument sourceDocument = engine.GetDataSource( dataSource );

    // all of the properties on the document are 'Get' only, so I created a copy.
    RssDocument results = new RssDocument( sourceDocument.Title, sourceDocument.Link,
        sourceDocument.Description, sourceDocument.Version, sourceDocument.FeedType, sourceDocument.PubDate );

    // get the max offset (I pass this in to the method based on the max items of the control).
    int countOffset = sourceDocument.Items.Count - 1;
    for( int i = 0; i < maxFeedLength && countOffset >= i; i++ )
    {
        // get the item to clone
        RssItem item = sourceDocument.Items[i];

        // shorten the title.
        string title = getTitle( item.Title, maxTitleLength );

        // add the 'new' item.
        results.Items.Add( new RssItem( title, item.Link, item.Description,
            item.Author, item.Category, item.Guid, item.PubDate, item.RssEnclosure ) );
    }

    // return the results.
    return results;
}

The method to create the title (the ellipsis is appended only when the title is actually truncated):

protected string getTitle( string title, int maxLength )
{
    if( title.Length > maxLength )
        title = title.Substring( 0, maxLength ) + "...";
    return title;
}

If I were really industrious I would have based the method on the GDI+ MeasureString method. However, I am not guaranteed that everyone will have the font on their machine. I still think it is worth doing, but maybe later.

Next, I wanted the tooltip to display the description of the RSS feed. To do this, I wired up the ItemDataBound event:

cableNewsFeed.ItemDataBound += new RssFeedItemEventHandler( cableNewsFeed_ItemDataBound );

Now the method:

private void cableNewsFeed_ItemDataBound( object sender, RssFeedItemEventArgs e )
{
    ( ( System.Web.UI.WebControls.WebControl )e.Item ).ToolTip = e[...]

Creating your own XmlSerializer

Tue, 20 Dec 2005 23:25:00 GMT

Very recently I came across an issue that required the creation of a new class derived from XmlSerializer. For reasons I don't want to get into here, we serialize an object instance into XML and store it in a database column so that we can reconstitute it later. This is a great approach except for the issue of changing class definitions. If you are just changing the definition of a top-level class, then I suggest taking a look at XmlAttributeOverrides on MSDN; however, if you are changing the definition of a class that aggregates other classes, and one of your contained classes has a different class definition, you need to look at using XmlAttributeEventHandler, XmlElementEventHandler or XmlNodeEventHandler. These events allow you to control the creation of these internal aggregated classes. For example, if you have the following class definition for ItemOption, which contains a collection of objects of type ProductAttribute, and your ProductAttribute definition has changed, then this is a good candidate for creating your own XmlSerializer-derived class.

(Simplified for brevity)

[Serializable]
public class ItemOption
{
    protected string _name=null;
    protected string _description=null;
    protected ProductAttributeCollection _generalAttributes=new ProductAttributeCollection();

    public ItemOption(){}

    #region properties
    [XmlElement ("Name")]
    public string Name
    {
        get{return _name;}
        set{_name=value;}
    }
    [XmlElement ("Description")]
    public string Description
    {
        get{return _description;}
        set{_description=value;}
    }
    [XmlArray("GeneralAttributes")]
    [XmlArrayItem("Attribute")]
    public ProductAttributeCollection GeneralAttributes
    {
        get{return _generalAttributes;}
        set{_generalAttributes=value;}
    }
    #endregion
}

For the purposes of brevity, let's say that all you did was change your ProductAttribute definition from using XmlAttribute to XmlElement. For example:

[Serializable]
public abstract class ProductAttribute [...]
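The C# example is cut off above, but the kind of tolerance those event handlers buy you can be sketched in a few lines outside .NET. This Python sketch (not the .NET mechanism itself; the element and attribute names are illustrative, modeled on the ProductAttribute example) reads a value whether it was serialized as an XML attribute (the old shape) or as a child element (the new shape):

```python
import xml.etree.ElementTree as ET

# Old serialized shape:  <Attribute Name="Color" />
# New serialized shape:  <Attribute><Name>Color</Name></Attribute>
OLD = '<Attribute Name="Color" />'
NEW = '<Attribute><Name>Color</Name></Attribute>'

def read_name(xml_text):
    """Return the attribute's Name, whichever shape the document uses."""
    node = ET.fromstring(xml_text)
    if "Name" in node.attrib:          # old shape: XML attribute
        return node.attrib["Name"]
    child = node.find("Name")          # new shape: child element
    return child.text if child is not None else None

# both calls return "Color"
read_name(OLD)
read_name(NEW)
```

The XmlSerializer events serve the same purpose: they give you a hook to recover data that no longer matches the current class definition instead of silently dropping it.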

New Version of XmlPreCompiler

Thu, 28 Jul 2005 18:20:00 GMT

Well, I finally got around to making the XmlPreCompiler even easier to use. As some of you may or may not know, the XmlPreCompiler is a tool based on one Chris Sells originally created to help developers handle the xml serialization error:

      "File or assembly name ctewkx4b.dll, or one of its dependencies, was not found"

The new version of the tool has the following improvements:

  1. AssemblyResolve. Typically, if you use the tool to check whether a class can be serialized, that class is outside of the XmlPreCompiler's current directory structure, and therefore the tool cannot resolve dependencies. To get around this, some of you would copy the assemblies into the XmlPreCompiler's bin directory. Now I have wired up the AssemblyResolve event and I make my best guess to find the dependencies for you.
  2. Added the ability to test all types in your assembly with a single button click. I forgot who requested this, but it was a good idea.
  3. Provided a TypeDetail window to show you different attributes of the class....e.g. IsAbstract, IsAnsi, IsClass, IsInterface.....
  4. Provided a Referenced Assembly window to show you all of the assemblies that the selected assembly references.

I am also in the process of making it an add-in for the new Visual Studio 2005 (1/2 way done). I am also evaluating doing some disassembly of the code in a manner similar to Reflector....mostly for my own understanding, but also for some things that I want to be able to do with classes in the future.

Anyway, download it and enjoy,

-Mathew Nolton

Part 2: Consulting in the Cable and Wireless Sectors....The Power of the Bundle....

Wed, 11 May 2005 12:50:00 GMT

The Cable sector as well as the entire Telecommunications industry has been expending a large amount of resources to sell bundled products and services. This means that instead of just offering Analog and Digital Cable Services, Cable and Telecommunication Companies are offering Cable, High Speed Internet, Digital Telephone, Digital Video Recorders, High Definition TV, etc. in a "Bundled" product.

The sale of bundled products and services is easier for the customer, requiring fewer mouse clicks, a faster checkout and a single monthly bill. The benefit to the Cable and Telecommunications companies is that it greatly enhances customer retention, revenue and market penetration, as can be evidenced by this article.

 Cox Communications Announces First Quarter Financial Results for 2005

-Mathew Nolton

Previous Posts on Consulting in the Cable and Wireless Sectors:
Part 1: Consulting in the Cable and Wireless Sectors

Part 1: Consulting in the Cable and Wireless Sectors

Wed, 04 May 2005 13:34:00 GMT

Part 1...Commerce Solutions: The Cable and Wireless sectors are unique. Over the last several years I have done considerable consulting in these sectors. Most recently I have led the design and architecture of a Channel Sales System that now handles 65% of the Web presence for one of the larger companies in the cable sector. Much of the remaining 35% will convert and/or continue to leverage web services of this same system. We are also using this system for Interactive TV and Telephony. It is a blast.

Prior to building this system, and during the course of building it, we evaluated a number of packaged commerce solutions. Most of this packaged "commerce" software rarely meets the needs of a cable company. That is why "Build" instead of "Buy" became such a strong consideration. Vendors often state that a Cable Company is not in the business of building software. True, but cable (and wireless) companies are in very competitive sectors and it is absolutely crucial that they control their own destiny. So, although a cable company is not in the business of writing software, it is in charge of remaining competitive. Some cable companies have gone down the vendor path only to do an about-face. The issue is that being beholden to a vendor can really cost the company time to market....and Time-To-Market is crucial...and a driving factor for many businesses...not cost. For example, let's say that a vendor is 6-12 months behind the curve, or the vendor panders to many industries instead of just Cable. This means that it is very difficult for a company to react quickly to changing market conditions. Nothing kills a marketing campaign like a commerce solution that does not meet the needs of the business, or that meets them only with considerable customization.

Furthermore, if you customize vendor software, you further exacerbate the problem...because you will still have to upgrade, and customizations typically have to be reapplied to each new release. It is a vicious cycle that some vendors enjoy, because it leads to continued consulting service sales. Typical packaged commerce solutions have some combination of these problems:

a) They handle the marketing/sales well but do not handle fulfillment/billing elegantly (e.g. bundling is big in these sectors but not all vendors really handle this well).
b) They handle the fulfillment/billing fairly well but rarely handle the marketing/sales.
c) Deployment. Vendors rarely take into account what it takes to get software deployed across a large organization. For example, Microsoft and PUP files....argggg...what a mess. This is exacerbated by the fact that many large companies maintain many environments.....Development, QA, User Acceptance Testing, Preview, Staging, to name a few. Any pain points with deployment are therefore magnified.
d) Cable and wireless companies are really service companies with some product. They are shipping bits/bytes along with some equipment and professional services (installation and maintenance). Furthermore, products and services are intertwined. This means that the typical "Amazon" sales metaphor really does not hold.
e) Cable companies are made up of numerous business units, often called franchises...they can almost be thought of as separate companies. They are typically selling the same basic products and services but with a twist (e.g. different movie channels...different bundles of service). They also must be able to react quickly to local competitive market conditions. The key here is that they are selling the same basic products and services....but they are marketed differently. So many vendors say....Hey, just create another catalog item! Huhhh.

This means that if you are going to have a common commerce solution, you are going to have many dimensions to your data and/or multiple implementations....possibly down to the household. This also means that[...]

Fail Fast

Tue, 26 Apr 2005 13:34:00 GMT

I found this paper about "Failing Fast" while reading Paul Lockwood's blog. It refers to Martin Fowler's wiki about writing code to Fail Fast. This technique has been around for a while; however, I honestly have always been more of a proponent of writing defensive code. After writing defensive code for years and reading this paper, though, I am starting to rethink my approach. I am not saying that I am doing away with defensive coding. I still believe that writing defensive code makes very robust code; however, there are times when the Fail Fast technique makes sense.

For example, I wrote a piece of code that stores some common lookup information in a cache on the server. The base code looks like this:

try
{
    _leaseDuration = Convert.ToInt32( ConfigurationSettings.AppSettings[ _leaseDurationLookupKey ] );
    if( _leaseDuration == 0 ) _leaseDuration = __defaultLeaseDuration;  // <-- (Defensive technique: add this line of code)
}
catch
{
    // an error occurred. this means that the lease property is not
    // configured properly in the configuration file. so set it to
    // its default value.
    _leaseDuration = __defaultLeaseDuration;  // <-- This line of code would rarely get hit.
}

The issue is that Convert.ToInt32() will convert a null value to 0....and it is one of the reasons I typically advocate using the Convert.ToXXX() methods over a simple cast. It is much more forgiving; however, it has a side-effect of masking potential issues. Following a Fail Fast technique, I should opt for a straight cast, e.g.

_leaseDuration = (int)ConfigurationSettings.AppSettings[ _leaseDurationLookupKey ];

and then in my catch block I should throw an exception instead of setting the default value. In other words:

try
{
    _leaseDuration = Convert.ToInt32( ConfigurationSettings.AppSettings[ _leaseDurationLookupKey ] );  // <-- (Fail Fast technique)
}
catch
{
    throw new ConfigurationException( "Could not lookup Lease Duration!!!" );  // <-- (Fail Fast technique)
}

The question then arises: when do you use which technique? Some people advocate always using one or the other. However, I believe you must always be pragmatic; being a purist is a form of dogma and doesn't always lead to the best overall solution. If you think about it, defensive coding and Fail Fast are not mutually exclusive. Defensive coding means that you are evaluating different execution paths and handling them accordingly. A pure defensive technique might force the code down a different execution path if certain situations occur; a Fail Fast technique would throw its hands up and state "fix the problem". So when do you fail fast?

  1. Configuration data. If you don't get it right, Fail Fast and let everyone know.
  2. Retrieving data, or hitting a condition where you really have no idea what to do. In other words, when writing defensive code and you come across a situation that could occur but would leave you at a complete loss for what to do...Fail Fast.
  3. Others, still considering....

Just a few thoughts,
-Mathew Nolton[...]
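The same contrast can be shown as a minimal, language-neutral sketch (here in Python; the setting name and default value are illustrative, not from the post):

```python
DEFAULT_LEASE_DURATION = 30  # illustrative default

def lease_duration_defensive(settings):
    # Defensive: swallow the problem and fall back to a default.
    try:
        value = int(settings.get("leaseDuration"))
    except (TypeError, ValueError):
        return DEFAULT_LEASE_DURATION
    return value if value > 0 else DEFAULT_LEASE_DURATION

def lease_duration_fail_fast(settings):
    # Fail Fast: a missing or malformed setting is a deployment bug;
    # raise immediately so it gets fixed instead of being silently masked.
    raw = settings.get("leaseDuration")
    if raw is None:
        raise KeyError("leaseDuration is not configured")
    return int(raw)  # raises ValueError on malformed input

lease_duration_defensive({})   # quietly falls back to the default
```

The defensive version always returns something plausible; the fail-fast version turns a bad deployment into a loud, immediate error, which is exactly the trade-off discussed above.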

Too Funny. One of the best pranks ever.

Wed, 30 Mar 2005 19:50:00 GMT

A friend of mine who always seems to find pranks and funny minutia on the web came across this and sent it to me. It takes a few minutes to read it but well worth the time.

-Mathew Nolton

Be Careful Generating XML Documentation with .Net

Mon, 17 Jan 2005 20:50:00 GMT

If you are like me, you like to use the XML commenting feature that comes with .Net. Personally, I only turn this feature on when I am creating a release build. However, if you do this, make sure that you turn off the read-only attribute on the output xml file. Failure to do so will cause your build to fail.

This happened to me recently with a rather large solution set, and I could not for the life of me figure out why, all of a sudden, my solution set failed to build properly when I switched to a release build. If it were just a single-project solution or a small solution set, it might have been obvious. It was only after I did a 'clean' of my obj and dll files that the error, that it was unable to write the output xml file, became more pronounced in my output. I quickly turned off the read-only attribute and it finally built. I hit this problem a while ago...but my memory must be fleeting.

Note to self: when doing a release build, make sure to turn off the read-only attribute on the generated xml file (I keep this file in source safe with the rest of my project files).
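The note-to-self step can also be scripted as part of a build. A small sketch (the file name is illustrative; this simulates a read-only checkout and then clears the flag so the compiler can overwrite the file):

```python
import os
import stat
import tempfile

# Illustrative stand-in for the generated documentation file that
# source control checked out with the read-only attribute set.
path = os.path.join(tempfile.mkdtemp(), "MyProject.xml")
open(path, "w").close()
os.chmod(path, stat.S_IREAD)  # simulate the read-only checkout

# Before the release build, make the file writable again so the
# compiler can regenerate the XML documentation without failing.
os.chmod(path, os.stat(path).st_mode | stat.S_IWRITE)
print(os.access(path, os.W_OK))
```

Running a step like this ahead of the build avoids rediscovering the problem the hard way.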

-Mathew C. Nolton

Too Funny. George Foreman Grill meets WebServer Technology

Tue, 11 Jan 2005 20:48:00 GMT

A friend of mine sent this to me. It is too funny. The guy turned his George Foreman grill into a web server. It takes a while to bring up the page. Apparently the George Foreman Grill/WebServer is actually serving up the web pages.


-Mathew Nolton


Using the <remove> tag with web.config can be helpful.

Mon, 10 Jan 2005 16:35:00 GMT

Most people use the web.config file to define features or pieces of functionality to be used by their application. You may or may not know it, but there is a feature of the web.config file that enables you to remove definitions defined by a parent web.config file. This is especially important when an application (either rightly or wrongly) places a web.config file in the root web directory or your parent directory that adds functionality, or puts a constraint on all child directories, that you do not want or that will break your application.

The web.config file has a hierarchy of sorts. All applications pick up their defaults from the machine.config file. Your application then uses any parent directory's web.config file before using its own web.config file. This means that if you are a couple of levels deep and your parent directories have web.config files defined, you will pick up defaults from them before your own is processed. This hierarchy is helpful when carefully planned; however, if a parent directory makes assumptions that are not valid, or the person deploying the application does not consider all avenues, it presents a problem.

I ran across this situation when a Commerce Server implementation added some .net functionality and placed a web.config file in the root web directory. When I deployed my .net application, I inherited all of its defaults....which would have broken my application. To get around this, I used the <remove> tag of the web.config. The Commerce Server implementation defined a number of commerce server pieces that I did not want...nor could my application handle. (The original configuration example was lost in conversion; the pattern, with an illustrative module name, looks like the following. Most sections enable you to <remove> inherited entries.)

<httpModules>
    <remove name="SomeCommerceModule" />
</httpModules>

How good are you at spotting a fake smile?

Wed, 29 Dec 2004 14:28:00 GMT

I found this on the Internet. It tests whether or not you can spot a fake smile. I got 12 out of 20 correct.

-Mathew Nolton

WebServices are just an implementation of SOA

Wed, 15 Dec 2004 15:53:00 GMT

I am guilty. Too often I equate a WebService and its implementation with the overall definition of SOA. (For example here). In reality it is nothing more than an implementation. A good implementation, but just an implementation. Clemens Vasters gave a nice definition of the tenets of SOA called PEACE.

  • Policy-Based Behavior Negotiation
  • Explicitness of Boundaries
  • Autonomy
  • Contract Exchange

It is a nice acronym that attempts to summarize exactly what a service is, much like ACID does for transactions (Atomic, Consistent, Isolated and Durable). There is some debate about this acronym, but it is fair to say that most if not all agree with the 4 basic tenets.

However, nothing in this definition speaks to platform independence. Does this mean that distributed system designs and some of their implementations (e.g. think DCOM/CORBA/RMI/Remoting), with their platform dependence, are or are not SOA? They are. All these implementations do follow these tenets. I am not an RMI/CORBA expert, but I did work extensively with DCOM in a prior life. It did allow you to work in a more OO manner than a WebServices implementation. There were rules, especially around transactions and how the state of your object could be affected, but you could do it. However, its contract followed a binary format that was not readily understood by other platforms. RMI had a similar flaw, working only with the Java language.

A WebService, on the other hand, is platform and language independent. Its use of Xml enables different platforms to communicate with one another. Additionally, a WebService is very procedural in nature. Some have argued that they are object based, or that because methods are verbs they act as objects. Personally, I don't buy it...however, in the grand scheme of things this is more an argument of semantics.... Given that, there are also a number of rules about how client proxies are generated, and this does affect how you construct your tiers. I discuss this here (same blog entry reference as above).

In the end, I am guilty of confusing terms. A WebService is just an implementation of SOA. It is not the only implementation.

-Mathew C. Nolton

New Version of XmlPreCompiler Available for Download

Tue, 14 Dec 2004 00:58:00 GMT

I updated the XmlPreCompiler tool on my website. It is an enhancement to the version I posted in my blog a couple of weeks ago.

It is available for free download. For those who haven't used it before, I put a user interface on top of a tool Chris Sells wrote called XmlSerializerPreCompiler.exe, and you can download it here:

The tool solves the xml serialization problem when you get those nasty serialization messages that really don't tell you anything, like "File or assembly name ctewkx4b.dll, or one of its dependencies, was not found". This tool checks to see if a type can be serialized by the XmlSerializer class and, if it can't, it shows the compiler errors happening behind the scenes.

The updates I made to the tool are as follows:

  1. If the application gets a ReflectionTypeLoadException, the application will spin through the inner LoaderExceptions in order to give a more detailed reason as to why a problem occurred.
  2. Added the ability to view assemblies referenced by the selected assembly.
  3. Sorting. Ward Bekker suggested sorting and it was a good idea. I also now display all types regardless; I used to only show classes. I also added a column that displays what each type is (e.g. class, enum, interface, etc.) and I let you sort on this as well.

Happy Programming,

-Mathew Nolton

    Is Less Code Better?

    Thu, 09 Dec 2004 13:26:00 GMT

    The answer is, it depends. I have spent a large portion of my career developing products for a software vendor(s) in the ERP, B2B and B2C spaces. More recently I have been working as a consultant at large companies and I often here different managers suggest that less code is better and is a packaged solution better then a custom solution? Earlier in my career and while working at these software vendor(s) I had the pleasure of going on a number of sales calls and watched sales people work their magic. I've heard the statement "with a Customization of our product, you can do what you want". The questions every company should ask is or things they should consider are: Time to market is typically key to success for many companies. This means you must know your business and know exactly how the 3rd party software will grow with you. It does little for a company to quible over $X dollars in software costs if you can't get to market in a timely fashion. Nor make the changes that the business requests or that you foresee the business requesting in the near future. The question to ask, "Does the vendor truly understand my business?" If the answer is yes, then look deeper, it may be the right choice. If the answer is No or you are not sure, dig even deeper and come to a conclusion. It does little for you or your company if your time to market will be adversely affected. If the answer is No they do not understand my business, then writing less code means you cannot adapt to a changing competitive landscape. So either look for another vendor or consider doing it yourself. Not all vendor code is created equally. There is a lot of crap out there. Enough said. Less code is not always better code. I have seen people write less code that is terribly inefficient and error prone. Or write code that takes advantage of a piece of software that is error prone. I have also seen some developers write very efficient and performant code that has more lines of code but with less bugs. 
This means: know your architects and developers. A good one makes all the difference in the success of any project, whether customized packaged software or a custom solution. When buying a solution, determine exactly what customizations are required, and whether the consultants you hire who are familiar with the product can do the work in a way that minimizes customizations and maximizes value. Keep in mind that many companies that customize a packaged solution spend at least $2-3 for every dollar spent on the software itself. If the customizations are large, that number goes up considerably from there, and the cost of upgrading to a new release also climbs with every customization made. So, are you still writing less code? Remember that follow-up service contracts are where vendors make large profit margins. That isn't a bad thing for you or the vendor, but keep it in mind when determining how much customization is required. Sometimes the customizations are so large that you spend so much time figuring out how to make the tool/product do what you want that you are truly better off writing it from scratch, or leveraging a software factory or a smaller piece of third-party code that provides just the functionality you are looking for. I have seen a number of software package implementations with so much customization that they might as well have been written from scratch.[...]

    Another List on SOA.....Ohh and SOA != OO

    Wed, 08 Dec 2004 23:45:00 GMT

    The term Service Oriented Architecture (SOA) is thrown around quite a bit these days, and it always amazes me how quickly people jump into it and the assumptions they make. There is no doubt that SOA adds a positive dimension to any application architecture. I have been building and deploying web services for close to 2 years now, and before that I worked on a number of other service-based architectures, so there are things I would consider before jumping in. SOA is just one dimension of a good architectural scheme. It is not THE architecture.  In some respects (but not all) SOA and OO are orthogonal. OO is about keeping data and the methods and properties that act on that data in the same class. With an SOA architecture, client proxies are generated from the WSDL. Only public properties with getters and setters are created in the proxy; the methods and any business logic are NOT generated on the client nor included in the client proxy. This leads some to put the class being serialized onto the client as well (yes, I have seen people do this, or try to, on a number of occasions), which can cause name collisions and really negates the positive aspects of SOA. If you want to put the class being serialized on the client, you should take a serious look at Remoting. Because the client proxy contains only the class's getters and setters, many people developing SOA architectures separate business logic classes from their business entity classes. The benefit of this separation is a positive one, but it clashes with some OO principles because you are separating data from the methods and business logic (BL) that act on it. The BL that acts on the data lives on the web service machine, while the class that gets serialized from the web service is a separate business entity object and the basis for the client proxy (via WSDL).
This is not to say that you cannot follow a stricter OO model by keeping the two together, but it often confuses users of your libraries who are unfamiliar with the way client proxies are generated and who expect the methods and business logic to show up in the proxy on the client, which does NOT occur. Personally, I recommend the separation: keep business entity classes and business logic classes separate, knowing full well that this breaks the OO model to some degree; the benefit (IMHO) outweighs the cost. Remoting and SOA are not the same. Remoting follows the OO paradigm much more closely by providing a Marshal-By-Reference (or Marshal-By-Value, if you know what you are doing) proxy to the actual object and its methods and properties. This means you can call the methods on the object itself via the client proxy, and the business logic remains intact. It follows the OO paradigm much more closely than a WebMethod, because the client proxy is aware of the business methods and the business logic encapsulated in the object itself, which isn't available with a WebMethod implementation. Furthermore, Remoting assumes a .Net implementation, which enables you (and often requires you) to keep the class being proxied on the client. Exception handling with an SOA architecture is also different. You can still use the try/catch metaphor, but all exceptions are of type SoapException (there is also SoapHeaderException, but that means no detail block). This means there is no differentiation in error typing. In other words, all exceptions emanating from a web service are of type SoapExcept[...]
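
    The separation described above can be sketched in a few lines of C#. This is a minimal illustration, not code from the post; the `Order`, `OrderLogic`, and `proxy.SubmitOrder` names are hypothetical stand-ins:

```csharp
using System;
using System.Web.Services.Protocols; // SoapException lives here in .NET 1.x/2.0

// Business entity: data only. This shape is what the WSDL-generated
// client proxy reproduces -- public getters/setters, no behavior.
public class Order
{
    private int orderId;
    private decimal total;

    public int OrderId
    {
        get { return orderId; }
        set { orderId = value; }
    }

    public decimal Total
    {
        get { return total; }
        set { total = value; }
    }
}

// Business logic in a separate class on the service side.
// None of this is generated into (or available from) the client proxy.
public class OrderLogic
{
    public static decimal ApplyDiscount(Order order, decimal rate)
    {
        if (rate < 0m || rate > 1m)
            throw new ArgumentOutOfRangeException("rate");
        return order.Total * (1m - rate);
    }
}

// On the client, every fault from the web service surfaces as a
// SoapException, regardless of what was thrown on the server:
//
//   try { proxy.SubmitOrder(order); }
//   catch (SoapException ex)
//   {
//       // inspect ex.Detail to differentiate error conditions
//   }
```

    The point of the sketch: `Order` round-trips through serialization and the proxy, while `OrderLogic` stays on the service machine, which is exactly the data/behavior split that clashes with the strict OO model.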

    CodeMax: a component worth looking at.

    Mon, 06 Dec 2004 15:47:00 GMT

    About 3-4 years ago I wrote a tool called TemplateX that enabled building an application, or a piece of one, from a template using an ASP syntax. I did much of this prior to .Net, and it had much of the functionality of Eric Smith's CodeSmith, plus some (though Eric did a much better job of marketing his tool and taking it to the next level by actually compiling .Net code. He also realized that you must charge little, as well as provide a large library of templates, to get people to use it; kudos, Eric). At the time I needed an editor, and I selected CodeMax. It is an awesome component, worthy of consideration for anyone doing syntax highlighting. It now costs $199 (it used to be free), but if you have ever considered writing the code for an editor that does syntax highlighting on a large document, you understand the complexity and the performance considerations you must take into account. It can be found here.

    -Mathew C. Nolton

    Top 10 Things to do on a software project (err 9 things...I got tired).

    Thu, 02 Dec 2004 13:53:00 GMT

    I have been a software professional for about 15 years, and I have led and/or architected a number of projects. Here is a list of the top things to do and/or avoid; please add to the list if you are so inclined. Define requirements in a declarative fashion. Never give a dissertation about why; you can do that in a supporting document if required. The requirements themselves should concisely dictate what is "Required" (regardless of the methodology you follow). I recently had the pleasure of looking at a 77-page requirements document. The intentions were good, but it was almost impossible to design a system from it. Requirements are not about implementation: always keep implementation details out of your requirements document. Enough said. Understand the business. Oftentimes developers/programmers just want a spec to code against. Although the spec is important, go to meetings with the business sponsors/customers and understand their needs. Doing so gives you insight into what they are trying to accomplish, better enables you to satisfy their needs, and helps you grow as a professional. Never underestimate what it takes to roll out/implement a system. If you have a complex data-driven system, gathering data can take a large amount of time; it must be accounted for in your project plan and conveyed to management. Underestimating this can kill a project. Always pick a small, highly competent team over a large army. Larger teams require more communication, and the cost of managing that communication grows as you add people. A friend of mine made the analogy of using the Navy Seals versus the Army (also consider Brooks's The Mythical Man-Month: "Adding manpower to a late software project makes it later."). Sell your system. Whether you work in the IT department of a large corporation or at a software vendor, selling your system is key to your success.
Good software houses spend a large amount of time and money selling their software and keeping their customers happy. IT departments sometimes forget that they too must sell and evangelize their products (competing budgets, teams, et al.). Provide vision. Selling your system requires a vision: a problem statement, the solution, and the costs and benefits of the system. Most complex systems also have a life cycle that defines what they will look like at each stage of development, from infancy to maturation. If you follow best practices and define a tiered architecture, you can also provide a vision of how a collection of libraries and services can grow to become the basis for many projects. Documentation is about communication. This shouldn't be a problem for most developers :). Many processes prescribe a set of documentation to be completed, but there is often a lot of latitude in how deep you go, and the problem is knowing when enough is enough. Years ago, while working on my first UML project, I was trying to determine where this balance lies, and my VP told me that documentation is about communication: you must convey what your team requires to get the job done and then support the system. A smaller, highly competent team means many or all of its members can make the leap of logic to fill in any gaps. Don't bother documenting your entire API by hand; use documentation generators. C# provides the /// construct that enables auto-generation of your API documentation, and there are tools such as NDoc and DocumentX that further refine documentation generation (including your database). This isn't a substitut[...]
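
    For anyone who hasn't used the /// construct, here is a short sketch of what it looks like. The `Transfer` method and its parameters are hypothetical, invented purely to show the standard tags; the mechanism itself (/// comments compiled into an XML file that NDoc and similar tools consume) is the real feature:

```csharp
/// <summary>
/// Transfers funds between two accounts. (Hypothetical example method.)
/// </summary>
/// <param name="fromAccount">Account to debit.</param>
/// <param name="toAccount">Account to credit.</param>
/// <param name="amount">Amount to transfer; must be positive.</param>
/// <returns>The resulting balance of the source account.</returns>
/// <exception cref="System.ArgumentOutOfRangeException">
/// Thrown when <paramref name="amount"/> is not positive.
/// </exception>
public decimal Transfer(Account fromAccount, Account toAccount, decimal amount)
{
    // ...
}
```

    Compile with the /doc switch (e.g. `csc /doc:MyLib.xml ...`) and the compiler emits the XML documentation file that the generator tools turn into browsable API docs.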