
Lance's Whiteboard



Random scribbling about C#, Javascript, Web Development, Architecture, and anything else that pops into my mind.



 



My past year as an HTML5/JavaScript developer… Part 1

Mon, 26 Nov 2012 05:06:00 GMT

I’ve been fairly mum on work for the past year, but I figured I should do a brain-dump of some of the things I have learned and experienced as an HTML5/JavaScript developer.

Before we begin…

(feel free to skip this section if you don’t care about my disclaimer or full disclosure)

I feel like I should start by explaining a bit of my perspective.  I’m not a traditional front-end web developer.  I come from a long history of server-side rendered web development primarily using Microsoft technologies such as ASP classic, ASP.NET, et al.

Having said that, I have spent over 15 years jockeying around HTML, images, CSS, and yes, lots of JavaScript.   Back in the days when IE 4/5 won the browser wars, I was fairly successful building JavaScript frameworks to power enterprise apps and extranet sites of varying sorts.   Not that JavaScript/HTML development was great back then (it was quite bad, actually), but everything got much worse once Microsoft lost their way and the browser wars erupted anew.  During those bleak years I dabbled here and there with client-side dev, but overall I was jaded, treating JavaScript as nothing more than glue and spackle to smooth the cracks in my server-side rendering.  Frankly, before this past year I hadn't even really thought of JavaScript as a first-class language.

So, now having built a fairly complex real-time stock trading app targeting only modern browsers using the latest HTML5 & Node features (and no Microsoft technology in sight), I want to break it down a bit and think about what I have seen & done.

State of HTML5

It amazes me every day how far browsers and client-side webdev have come since the late 90’s, when I was involved more with startups.  The problem is that with new features and capabilities come new compatibility problems.

The good:

  • Enhanced Selectors
  • session/localStorage
  • Canvas
  • SVG
  • structural elements
  • CSS transformations
  • History navigation
  • WebSockets
  • WebWorkers
  • video element (codec support notwithstanding)
  • audio element (codec support notwithstanding)
  • Html Editing
  • MathML (a cautionary ‘good’ at this point)
  • WebFonts

The bad:

  • new form fields and validation
  • File API
  • Drag and Drop (bad mainly due to its mobile challenges)
  • Full Screen (mainly IE’s fault, but tablets also fail)
  • Server-sent events
  • Security sandboxing sub-apps
  • WebRTC
  • WebCam (very experimental in most browsers)

Sure, there are a handful of really great HTML5 technologies that are functionally equivalent across Mozilla, WebKit, and IE10, but the devil is in the details.  If you are building fairly simplistic or single-purpose apps, you are fine, but as soon as you start trying to build something ‘interesting’ for a large audience, you will quickly find that important performance issues still arise. Add the shift to mobile and the plethora of new platforms such as televisions and gaming devices, and the list of ‘good’ starts to get much shorter and more muddled.

The holy grail of build-once-use-everywhere is still elusive, given the need to adjust form factors for mobile devices and pare down payloads to accommodate meager bandwidth and processors.  Sure, the very latest tablets and phones are now quad-core or better with even more RAM, but their performance is still network- and IO-bound for the most part.   This, plus the varying CPU performance, leads to major issues with race conditions and other fun device-profile issues.




The pitfalls of GDD.

Tue, 10 Aug 2010 23:18:00 GMT

Over the years, I have sampled approaches to software development ranging from RAD, XP, Waterfall, Agile, Scrum, SOA, TDD, and have recently started looking more seriously at the BDD/DDD(D) camps.   However, throughout my forays into this potpourri of acronyms and metaphors for programming, I continue to find myself falling back on the crutch of GDD – the least Agile and productive approach of all. Yes, I’m referring to none other than the ubiquitous Google-Driven Development (GDD).  It’s like when I first realized how dependent upon IntelliSense I had become, except now I find that GDD is far worse since it is simultaneously more subtle, insidious, and disruptive.    At least IntelliSense tries (yet arguably fails) to help you get things done faster, but GDD, despite its popularity and benign appearance, is truly the greatest time-sucking vortex in the universe.  GDD dulls the developer’s mind and lulls us into complacency about trying to solve problems ourselves – since by simply Googling, you can let others provide an answer for you.  GDD is like dark matter obscuring developers from grokking quality software engineering.  It is the elusive Higgs boson particle of development that is driving us toward anti-productivity and mediocrity.  It is the reason why aspiring developers’ growth often stalls mid-career, becomes stunted, and eventually causes them to revert to (MS-style) demoware-quality development rather than maturing into true software engineers, craftsmen, and thought-leaders of the industry.  GDD is more subtle than the common cold and a greater pandemic than H1N1, and it must be eradicated. [1]

Inevitably, if we continue to abuse GDD, we may one day be faced with a future similar to that depicted in the fictional (yet highly plausible) movie Idiocracy, where our development communities are filled with below-average developers and hacks who are ruled by a few barely-average people and their well-SEO'd code repositories.

Diagnosis

I implore you to perform this self-diagnostic test today to see if you too have acquired the GDD addiction: ban yourself from Google (and/or Bing) for one day… If you find that you struggle to produce code without searching for 3rd-party libraries & open source, notice a sense of anxiety at being unable to find blog code samples, feel concerned that you cannot validate your ideas against posts in forums and sample apps, or cannot make coding progress without seeking out online APIs and reference sheets, then you too may suffer from Google-Driven Development. [2]

Remedy

GDD is difficult to completely eradicate from our lives, however here is a proven 7-step approach that helps to reduce its harmful effects:

1. Blank Browser - Change your browser start page to about:blank (or equivalent) rather than a search page.
2. Cleansing Period - Perform a 1-week cleansing period of total search abstinence.
3. Moderation - Afterward, slowly reintroduce Google and other search tools with extreme moderation.
4. Reward Abstinence - Reward yourself each time you successfully complete a task without search that normally you would have. (Note: don't use GDD as a reward for GDD abstinence.)
5. Cheat Hour - Schedule one timed GDD Cheat Hour each week where you allow yourself to indulge in unadulterated hard-core GDD.  (Note: make sure the 1 hour isn't exceeded.)
6. GDD Diary - Keep a log of how much time you use search tools for development. (Tools like RescueTime.com may help.)
7. GDD Monitoring - After 21 days of intense anti-GDD focus & moderation, open up your calendar and schedule 1-2 days of GDD abstinence each month to measure your progress.   If abstinence from GDD still causes excessive anxiety, repeat these 7 steps.

Note: beware that it's common to see GDD sufferers seek out alternatives, or shift habits towards Twitter, StackOverflow, and other social networking sites.  These are just variants on GDD, each with its own set of problems, thus should be [...]



T4 Template error - Assembly Directive cannot locate referenced assembly in Visual Studio 2010 project.

Wed, 05 May 2010 04:26:50 GMT

I ran into the following error recently in Visual Studio 2010 while trying to port Phil Haack’s excellent T4CSS template, which was originally built for Visual Studio 2008.

The Problem

Error Compiling transformation: Metadata file 'dotless.Core' could not be found

In “T4 speak”, this simply means that you have an Assembly directive in your T4 template but the T4 engine was not able to locate or load the referenced assembly. In the case of the T4CSS template, this was a showstopper for making it work in Visual Studio 2010.

On a side note: the T4CSS template is a sweet little wrapper that lets you use DotLessCss to generate static .css files from .less files rather than using their default HttpHandler or command-line tool.    If you haven't tried DotLessCss yet, go check it out now!  In short, it is a tool that allows you to templatize and program your CSS files so that you can use variables, expressions, and mixins within your CSS, which enables rapid changes and a lot of developer flexibility as you evolve your CSS and UI.

Back to our regularly scheduled program…

Anyhow, this post isn't about DotLessCss; it's about the T4 templates and the errors I ran into when converting them from Visual Studio 2008 to Visual Studio 2010. In VS2010 there were quite a few changes to the T4 template engine; most were excellent changes, but this one bit me with T4CSS: “Project assemblies are no longer used to resolve template assembly directives.”

In VS2008, if you wanted to reference a custom assembly in your T4 template (.tt file), you would simply right-click on your project, choose Add Reference, and select that assembly.  Afterwards you were allowed to use the following syntax in your T4 template to tell it to look at the local references:

    <#@ assembly name="dotless.Core.dll" #>

This told the engine to look in the “usual place” for the assembly, which is your project references. However, this is exactly what they changed in VS2010.  They now basically sandbox the T4 engine to keep your T4 assemblies separate from your project assemblies.  This can come in handy if you want to support different versions of an assembly referenced both by your T4 templates and your project.

Who broke the build?  Oh, Microsoft did!

In our case, this change causes a problem since the templates are no longer compatible when upgrading to VS 2010 – thus it's a breaking change.  So, how do we make this work in VS 2010?  Luckily, Microsoft now offers several options for referencing assemblies from T4 templates:

1. GAC your assemblies and use a Namespace Reference or Fully Qualified Type Name.
2. Use a hard-coded Fully Qualified UNC path.
3. Copy the assembly to the Visual Studio "Public Assemblies Folder" and use a Namespace Reference or Fully Qualified Type Name.
4. Use or define a Windows environment variable to build a Fully Qualified UNC path.
5. Use a Visual Studio macro to build a Fully Qualified UNC path.

Options #1 and #2 were already supported in Visual Studio 2008, so if you want to keep your templates compatible with both Visual Studio versions, then you would have to adopt one of those approaches.

Yakkety Yak, use the GAC!

Option #1 requires an additional pre-build step to GAC the referenced assembly, which could be a pain.  But if you go that route, then after you GAC, all you need is a simple type name or namespace reference such as:

    <#@ assembly name="dotless.Core" #>

Hard coding ain't that hard!

The other option of using hard-coded paths in Option #2 is pretty impractical in most situations since each developer would have to use the same local project folder paths, or modify this setting each time for their local machines as well as for production deployment.  However, if you want to go that route, simply use the following assembly directive style:

    <#@ assembly name="C:\Code\Lib\dotless.Core.dll" #> Le[...]
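
If you only need to support Visual Studio 2010, the macro-based option (#5 above) is handy: as best I recall, the VS2010 T4 engine expands Visual Studio macros such as $(SolutionDir) and $(ProjectDir) inside the assembly directive, so a solution-relative reference (the lib folder here is just an example path) would look something like:

    <#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #>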



Seadragon & Deep Zoom

Fri, 21 Nov 2008 23:31:00 GMT

I stumbled upon this today and definitely want to play with this further when I have time....

 


SeaDragon Ajax

http://livelabs.com/seadragon/ 

"The aim of Seadragon is nothing less than to change the way we use screens, from wall-sized displays all the way down to cell phones,

so that graphics and photos are smoothly browsed,  regardless of the amount of data or the bandwidth of the network."

 

Deep Zoom Composer

http://www.microsoft.com/downloads/details.aspx?familyid=457b17b7-52bf-4bda-87a3-fa8a4673f8bf&displaylang=en

"...a tool to allow the preparation of images for use with the Deep Zoom feature currently being previewed in Silverlight 2. The new Deep Zoom technology in Silverlight allows users to see images on the Web like they never have before. The smooth in-place zooming and panning that Deep Zoom allows is a true advancement and raises the bar on what image viewing should be. High resolution images need to be prepared for use with Deep Zoom and this tool allows the user to create Deep Zoom composition files that control the zooming experience and then export all the necessary files for deployment with Silverlight 2."




FW: Batch Updates and Deletes with LINQ to SQL

Mon, 23 Jun 2008 23:07:00 GMT

I'm currently on a project creating a proprietary data-migration tool using C# & Linq.  I'm still new to Linq, but quickly discovered the challenges of doing mass-updates and deletes with Linq.

Specifically, by default LINQ to SQL generates a separate SQL statement for each row you are updating.  There is no built-in way to do large batch updates or deletes without dropping down to custom SQL.  After a quick search, I found this great article and sample code by Terry Aney on Batch Updates and Deletes with LINQ to SQL.
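
For context, the default row-by-row pattern that causes the problem looks something like this (a minimal sketch assuming a hypothetical generated DataContext with a be_Posts table; requires System.Linq):

    // Default LINQ to SQL behavior: one UPDATE statement per modified entity.
    using (var db = new BlogDataContext())          // hypothetical DataContext name
    {
        var first10 = db.be_Posts.Take(10).ToList();

        foreach (var post in first10)
        {
            post.Author = "Chris Cavanagh";         // each change is tracked per entity
        }

        db.SubmitChanges();                         // issues 10 separate UPDATE round-trips
    }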

Aney's article offers solutions to many of the basic problems with some elegant extension methods, so you can do things like:

    be_Posts.UpdateBatch( first10, new { Author = "Chris Cavanagh" } );

and

    var posts = from p in be_Posts select p;

    be_Posts.UpdateBatch( posts, p => new be_Posts { DateModified = p.DateCreated,
                                                     Author = "Chris Cavanagh" } );

Cool stuff!




Minimum & Maximum Dates in code

Thu, 19 Jun 2008 16:35:56 GMT

When updating Sql columns that need a minimum or maximum date, consider using the defaults from the System.Data.SqlTypes namespace:

    DateTime minDate = SqlDateTime.MinValue.Value;

    // and

    DateTime maxDate = SqlDateTime.MaxValue.Value;

This can be a lot safer than putting hard-coded "magic date" constants in your code.
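
If you do this in several places, a tiny helper can keep the clamping logic in one spot. This is a hypothetical method of my own, not something from the framework:

    using System;
    using System.Data.SqlTypes;

    static class SqlDateHelper
    {
        // Hypothetical helper: clamps a DateTime into the range that SQL Server's
        // datetime type can actually store (SqlDateTime.MinValue .. SqlDateTime.MaxValue).
        public static DateTime ClampToSqlRange(DateTime value)
        {
            DateTime min = SqlDateTime.MinValue.Value;   // 1753-01-01
            DateTime max = SqlDateTime.MaxValue.Value;   // 9999-12-31

            if (value < min) return min;
            if (value > max) return max;
            return value;
        }
    }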




What we don't know "will" hurt us...

Thu, 05 Jun 2008 17:17:53 GMT

I like this article by Nathan Henkel; it's essentially about assessing the risk and scope of projects, and it strikes me as a simple truth about the uncertainties you encounter in every project:

Information about any project can be divided into four categories:

1. Things we know (and know we know)
2. Things we know we don't know
3. Things we think we know, but don't (i.e. things we're wrong about)
4. Things we don't know we don't know

Obviously, if you were to try to actually figure out where everything falls, you would put everything into 1 or 2. Everything that should be in 3, you would put in 1 (you're not going to have known mistakes in your information), and everything that should be in 4 would simply be missing.


However, without dealing with specific items, I do think that it's possible to guess at how much "stuff" goes in each category. You can take into account your history ("I tend to often be mistaken about X"), or a general feeling of ignorance ("I've never used framework Y before") to guess how much goes in each category.

http://simplyagile.blogspot.com/2007/10/classifying-information-or-what-we-know.html

Sometimes, I think we get so wrapped up with what we “know” about a project that we fail to quantify what we don’t know, or the degree of certainty to which we actually know what we think we know.  As with solving any problem, the first step is to find a way to quantify and measure uncertainty and risk in order to minimize it. 

If you track this measurement over time, it should also help your estimation and planning on future projects.

Good stuff!




Argotic Syndication Framework 2008 released

Thu, 27 Mar 2008 16:59:12 GMT

I got an email yesterday that a major update to the Argotic Syndication Framework was released.   I have used the older versions of this framework several times for projects that need basic RSS & Atom parsing/generating, so I'm looking forward to digging in to the new release.

If you are not familiar with it, here is a quick blurb:

The Argotic Syndication Framework is a Microsoft .NET class library framework that enables developers to easily consume and/or generate syndicated content from within their own applications. The framework makes the reading and writing syndicated content in common formats such as RSS, Atom, OPML, APML, BlogML, and RSD very easy while still remaining extensible enough to support common/custom extensions to the syndication publishing formats. The framework includes out-of-the-box implementations of 19 of the most commonly used syndication extensions, network clients for sending and receiving peer-to-peer notification protocol messages; as well as HTTP handlers and controls that provide rich syndication functionality to ASP.NET developers.

To learn more about the capabilities of this powerful and extensible .NET web content syndication framework and download the latest release, visit the project web site at http://www.codeplex.com/argotic.

Also, here are some of the new features in this release:

a) Targeting of both the .NET 2.0 and .NET 3.5 platforms

b) Implementation of the APML 0.6 specification

c) Implementation of the BlogML 2.0 specification

d) Native support of the Microsoft FeedSync 1.0 syndication extension

e) Simplified programming API and better online/offline examples

Brian has done an amazing job on this project from the start.  I had intended (and still hope) to jump in and contribute some of my own work, so it's great to see how far it has evolved from its first releases.

If you work with RSS, ATOM, or any other syndication format/protocol, you should definitely take a look at this framework for your next project.
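
To give a flavor of the API, consuming a feed takes only a few lines. This sketch is from memory of the earlier releases, so treat the exact type and member names as assumptions rather than gospel:

    using System;
    using Argotic.Syndication;

    class FeedDemo
    {
        static void Main()
        {
            // Load and parse a remote RSS feed (the URL is just an example).
            RssFeed feed = RssFeed.Create(new Uri("http://weblogs.asp.net/lhunt/rss.aspx"));

            Console.WriteLine(feed.Channel.Title);
            foreach (RssItem item in feed.Channel.Items)
            {
                Console.WriteLine(item.Title);
            }
        }
    }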




I love ClearContext!!

Tue, 18 Dec 2007 19:40:04 GMT

After several months of using the free version of the ClearContext addon for Microsoft Outlook, I just can't imagine what I would do without it.  It has reduced my email time, kept me more organized, and uncluttered my Inbox better & faster than any ad-hoc system I have devised in the past.

As a developer, I hate it when I have to "code in Outlook".  If it were up to me, I would ban all email during a project and deal with all communication via instant messaging, Scrum meetings, and whiteboards, but the truth is that email is a necessary evil, especially as a Tech Lead who needs to interface with the Project Manager, Customer, and IT personnel.

Enter ClearContext Information Management System...

First, I set it up to flag emails from my bosses in red, so I don't miss them.  Plus, for good measure, I have an Outlook rule that sets a FollowUp flag to make sure I don't overlook them.  Also, ClearContext automagically ranks emails based upon my prior history with the sender, so I know what to do when I get some nice blue- and green-colored mail too.

If I receive an email relating to my current project, I simply hit ALT-P to pop up the CC dialog and flag it with the topic "projects/MyProject", then either leave it in the inbox for further review, or hit ALT-M to file the message for future reference.    Accordingly, if I receive some corporate- or administration-related email, then I assign its topic appropriately and file the message to send it to its respective holding area.

The act of assigning a Topic (ALT-P) automatically creates subfolders within my Inbox (e.g. inbox/projects/MyProject) matching the topic name (note the trick of adding a "/" to the topic name to create a nested subfolder at the same time).  The act of filing a message (ALT-M) moves it to the subfolder identified by the topic name.  This is great because the messages are no longer visible in the Inbox listing, but are still within the Inbox via the subfolder.

At that point, my AutoArchive settings will take care of moving it off on a monthly basis in case I need it later.

At some point, I want to look at the full product, which has features for deferring emails, converting them to tasks & appointments, assigning them to other people, etc.   See their section for more on these areas.

If these features are nearly as useful as the ones I use now, then I could (gasp) become even more productive!  Woot!




Manual CRUD operations with the Telerik RadGrid control

Wed, 17 Oct 2007 15:33:00 GMT

I have been working on a project lately that was already using the Telerik ASP.NET RadControls suite.  One of the new features was a fully editable web grid, so I chose to use the existing ajax-enabled RadGrid control to speed my development.  I went with a 3rd-party control mostly due to time constraints, since the project required a grid with inline editing, full CRUD operations, plus custom column templates, all with heavy Ajax support to avoid postbacks and excessive page size.

I soon discovered that the Telerik controls are a nice tool for simple uses where you can rely on ASP.NET DataSource controls and automatic databinding, but not so much if you need to get "fancy" with your implementation.  In my case I needed to do two things that fall outside the sweet spot where these controls excel.

First, I'm using an early 2.0 version of NetTiers for the DAL (with Service Layer implementation) with custom mods to the entities as the datasource,  and second, I'm doing some aggregate custom ItemTemplates that require custom data-binding.

This led to extreme complexity in the implementation because, A) this version of NetTiers had problems with properly generating CRUD operations for its EntityDataSource controls (NetTiers entities mapped onto a custom ObjectDataSource-style control), which prevented me from using the declarative model, and B) the RadGrid control simply sucks if you cannot use automatic databinding and you require custom databinding logic.

It would be great if I could upgrade NetTiers and/or the Telerik RadControls to the latest versions, but it wasn't possible in this situation, nor is it likely that this would have solved my problems.

Anyhow, all this discussion is basically just to share with you this one link to a user-contributed example I found incredibly useful after 3 days of searching their forums, demos, and 3rd-party blogs.   This example shows how to manually implement Insert/Update/Delete functionality within the RadGrid control by handling the events OnNeedDataSource, OnItemCommand, OnInsertCommand, OnUpdateCommand, and OnDeleteCommand:

http://www.telerik.com/community/code-library/subm...

The reason this link is important is because the Telerik website, with all of its dozens of examples, consistently shows very basic scenarios, even in samples labeled "advanced".  Also, not all of the API features are fully or well documented to help you figure this out on your own.

Hopefully this simple link (which should be promoted to Telerik's demos/samples page) will help someone else as much as it did me.
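
For reference, the general shape of that manual-CRUD pattern looks something like the sketch below. The grid and event wiring comes from the example above, but the service-layer calls, entity names, and data-key name are placeholders of my own, and the Telerik argument types and helpers are as I remember them from that era of the suite, so double-check them against your version:

    // Code-behind sketch (assumes the Telerik.Web.UI and System.Collections namespaces,
    // and DataKeyNames="WidgetId" on the MasterTableView).
    protected void RadGrid1_NeedDataSource(object sender, GridNeedDataSourceEventArgs e)
    {
        // Manual binding instead of a declarative DataSource control.
        RadGrid1.DataSource = DataRepository.WidgetProvider.GetAll();   // placeholder service-layer call
    }

    protected void RadGrid1_UpdateCommand(object sender, GridCommandEventArgs e)
    {
        GridEditableItem editedItem = (GridEditableItem)e.Item;
        int id = (int)editedItem.OwnerTableView.DataKeyValues[editedItem.ItemIndex]["WidgetId"];

        Hashtable newValues = new Hashtable();
        editedItem.ExtractValues(newValues);        // for custom ItemTemplates you may need FindControl instead

        Widget widget = DataRepository.WidgetProvider.GetById(id);      // placeholder entity and call
        widget.Name = (string)newValues["Name"];
        DataRepository.WidgetProvider.Save(widget);

        RadGrid1.Rebind();                          // refresh the grid after the manual update
    }

    protected void RadGrid1_DeleteCommand(object sender, GridCommandEventArgs e)
    {
        GridDataItem item = (GridDataItem)e.Item;
        int id = (int)item.OwnerTableView.DataKeyValues[item.ItemIndex]["WidgetId"];

        DataRepository.WidgetProvider.Delete(id);
        RadGrid1.Rebind();
    }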




VPC 2007 Dual Monitor support

Thu, 11 Oct 2007 17:11:00 GMT

I have been trying to find a way to run Virtual PC 2007 with multiple monitors.  Natively, VPC 2007 doesn't support more than one monitor, however you can "trick" it by using various techniques that expand the desktop area into a larger virtual desktop.

I tried using the awesome MaxiVista tool, which can extend your screen across separate PCs (think "push" remote desktop), but the new multi-monitor compatibility feature of VPC 2007 (which inexplicably does not add multi-monitor support) made this difficult since it ensures that your desktop recaptures your mouse when you move it outside of the VPC window, thus preventing the extended screen from being accessible.

So, instead I tried the Remote Desktop approach mentioned in Steven Harman's blog post.   Here is a quick rundown on how it works:

Connect 2 monitors to your PC (more than 2 typically don't work with this approach).   Make sure to extend your desktop onto the 2nd screen via Display Properties -> Settings.  Then launch Remote Desktop (mstsc.exe) with the "/span" flag:

    mstsc /span

Then just use Remote Desktop as usual by specifying your VPC's computer name in the connection dialog.

When I first tried this, it still didn't work exactly right.  It kept giving me annoying scrollbars instead of going full screen, so I added this extra flag to force it into fullscreen:

    mstsc /span /f

Also, since I didn't want VPC to have the extra overhead of maintaining 2 sessions (the console and my new RDP session), I threw in one more flag to make it simply take over the initial console window:

    mstsc /span /f /console

NOTE: The /span flag is only present on the very latest version of Remote Desktop Connection.  Therefore, you must either be running Vista on both PCs, or install the update specified here: http://support.microsoft.com/kb/925876. There are limitations on how your monitors must be configured in order for this flag to work.

Also, keep in mind that this technique only enlarges your desktop area enough to span both monitors; it DOES NOT behave exactly like the native dual-monitor support you may be accustomed to.  For example, when you maximize a window, it maximizes across BOTH monitors instead of maximizing within the confines of a single monitor.   For now, I'm just dealing with that by avoiding maximizing and manually resizing windows to fit one screen.

Advanced Users: One way to avoid having to arrange windows each time is to use a cryptic, yet incredible tool called Hawkeye ShellInit.  ShellInit is a small application that helps you manipulate your desktop & application windows via script.   Here is a small script that will move Visual Studio over to the right-hand screen (assuming 1280x1024 resolution) and enlarge it to the correct size:

    Position Window, *Microsoft Visual Studio, wndclass_desked_gsk, 1280, 0, 1288, 1002

If you decide to use this tool, make sure to read the readme.txt file for some good sample scripts and ideas. [...]



Link Love: 09/21/2007

Fri, 21 Sep 2007 20:47:00 GMT

I haven't been blogging much over the past several months.  The main reason is time, or the lack thereof.  Since I don't have time to write a "proper" blog post, I'm just going to start sharing some link love...

 

Here are a few interesting links I have spent time perusing today:




Note to self: Blog about using Service Broker

Thu, 14 Jun 2007 18:31:00 GMT

Just a note to myself to do a braindump on all this Service Broker shiznit I have been playing with lately.

Potential discussion topics:

  • MessageTypes, Contracts, Queues, and Services.
  • Internal Activation, Routing, & External Activation
  • Using the Sql Server ServiceBroker sample library.
  • Implementation using SqlClr vs. TSQL
  • Developing via messages instead of procedures...
  • Compare & contrast Service Broker vs. Workflow Foundation vs. BizTalk
  • The nifty Sql Service Broker Admin tool (3rd-party)
  • Practical examples:
    • Async "fire-and-forget" stored procedure invocation
    • Query Notification for cache invalidation (see the sketch after this list)
    • PubSub
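
On the Query Notification bullet, the managed side is mostly just SqlDependency (which rides on Service Broker under the covers). A minimal sketch, with a made-up connection string and table, and assuming the database has ENABLE_BROKER turned on:

    using System;
    using System.Data.SqlClient;

    class CacheInvalidationDemo
    {
        // Placeholder connection string; query notifications require Service Broker
        // to be enabled on the database (ALTER DATABASE ... SET ENABLE_BROKER).
        const string ConnStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=true";

        static void Main()
        {
            SqlDependency.Start(ConnStr);    // start the notification listener for this AppDomain

            using (var conn = new SqlConnection(ConnStr))
            // Notifiable queries must use two-part table names and explicit columns (no SELECT *).
            using (var cmd = new SqlCommand("SELECT ProductId, Name FROM dbo.Products", conn))
            {
                var dependency = new SqlDependency(cmd);
                dependency.OnChange += (s, e) =>
                {
                    // Fires once when the underlying data changes; re-query and re-subscribe here.
                    Console.WriteLine("Cache invalidated: {0} / {1}", e.Type, e.Info);
                };

                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read()) { /* populate the cache */ }
                }
            }

            Console.ReadLine();              // keep the process alive long enough to see a notification
            SqlDependency.Stop(ConnStr);
        }
    }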




The influence of style upon methodology...

Thu, 24 May 2007 00:53:00 GMT

No matter how faithfully you try to follow your chosen project methodology (Scrum, Extreme Programming, Waterfall, CMMI, etc.), ultimately the strengths, weaknesses, successes, and failures you experience are determined by the habits, attitude, and style of the project manager and team members on the project.

How is communication conducted? Meetings, hallway, bullpen, email, IM?

How do you react to change?  How well do you manage scope?

How much trust/distrust is there amongst team members?

How rigorously or adaptively do you apply your process to each project?

Do you micromanage or do you empower?

This is the crux of why so many people disagree with most definitions of exactly what Agile is.  This is why some people fail with one methodology while others succeed, and some unique individuals actually find great success with seemingly outdated methodologies like Waterfall.   Much like with any good pasta, it's not the ingredients in the sauce, it's the sauce-maker(s).

I guarantee you that even the most rigorous Agile shop will see great variance (good and bad) between projects merely due to the different personalities of the project managers who manage each project.  This is the human factor of software development that can never completely be erased.  Your best hope is to try to control, monitor, and compensate.

There are plenty of excellent books to help you define your rules-of-engagement, develop good habits, and provide checks-and-balances throughout your project.  However, ultimately experience is our best teacher for what habits, styles, and attitudes result in the most successful projects.  Of course, to complicate things further, these same success factors may change from project to project.

In my opinion, good project management is like any other art form - everyone has different tastes, everyone does it at least a little differently, and deep down you just hope that you have the right combination of experience, talent, and style to be able to execute on your vision.  I'm just curious about how many other styles of project management are out there, and which ones people find the most successful.

What daily personal work habits do you find the most crucial for project team members?

What project management styles have you seen that were the most successful?  Least successful?

Does your organization try to limit the impact of personal style, or embrace it?




My fair and biased opinion on the recent upgrade...

Fri, 18 May 2007 19:07:00 GMT

DISCLAIMER:  The following post represents my personal opinions and thoughts, not those of my employer.

As a Telligent employee and weblogs.asp.net blogger, I hate the fact that the recent upgrade caused problems for the weblogs.asp.net community and that it has affected the perception of Telligent as its steward.   I wasn't one of the developers who worked on the project, but I couldn't help but feel a twinge of pain each time I read a post (often rightfully) slamming the team and company who delivered this upgrade.

Long before I joined Telligent, I was one of the more vocal bloggers about the mishandling of the weblogs.asp.net site.   The great irony is that most of my arguments at that time were about how rarely the site was updated to newer versions of .TEXT and CS.   Since that time, it's hard to argue that it hasn't gotten much better.  Eventually I became satisfied and frequently impressed with the website changes as we finally were able to take advantage of Community Server's outstanding features.  This latest release really excited me because it was very timely (coming on the heels of CS 2007's release) and had the potential to allow me to finally have the level of control I wanted over my blog's presentation.

Even with the recent problems, especially the fact that I need to resubmit my javascript, I'm still excited about this release.   In fact, I truly was never all that disturbed by the problems as they were occurring. Was I surprised?  Yes.  Was it annoying? Sure.  Would I have preferred an email beforehand?  Absolutely.

I thought about my lack of concern, and had to ask myself why I wasn't more put off by the problems of this update.  Was it just because I am now an employee?  Was my blog no longer as important to me as before?

The answer is quite simple.  After coming to work for Telligent, I have met (most of) the people who work on Community Server and the weblogs.asp.net websites.  Every one of them is a top developer who is professional and thoughtful about every change they make and how it will affect the community.   They have the best intentions of delivering new innovations to the community as cleanly and effectively as possible. In short, I trust that the people in this company will always try their best to do the right thing for our customers, users, and community.   If you knew the people here, and saw their passion, attention to detail, and work ethic, I have no doubt you would feel the same.

Do we always succeed 100%?  No. Do we WANT to succeed 100%?  Absolutely. Will we continue to try and improve our processes to avoid such problems in the future?  Undoubtedly. As the philosophy goes: "if you aren't making any mistakes, then you aren't trying hard enough."   I don't know how or why things went awry this time, nor am I in a position to investigate it since I'm just a developer here.  I just trust the weblogs.asp.net team and my company to do the right thing.  They always have in the past, so I see no reason to doubt it in this situation.

I just wanted to share this view with my fellow bloggers.  You can obviously choose to say "Lance has sold out and is trying to back up his employer," but the truth is that I really believe what I said above.  Nobody asked me to write this. In fact, I'm sure there might be some here who would prefer that I didn't even post this...

In the long term, I believe this will be a better site and a better community after this upgrade is stabilized and people realize how major this upgrade was, and how [...]



A web site is not an RSS feed...nor the reverse.

Mon, 14 May 2007 19:22:00 GMT

There was a time not so long ago when we built "home pages": gloriously extravagant, naively simple web sites that said who we were, and what we were about.   On those home pages, we put news & announcements, and often, links to static pages of content.   If we wanted to interact with visitors, we included guest books, maybe a simple message board, or just displayed our email address prominently so others could drop us a note.    All of this was created by hand with the expectation that changes would be few and far between.

Eventually this became such a common approach for building a web site, that we tried to standardize these things.  At the same time, we discovered that a frequently updated web site received more visitors than one that was static or rarely updated.  As a result, content-management features were added to speed updates, forums were improved to include user avatars, threading, and email subscriptions.  Finally, the news & announcements became data-driven, annotated with metadata, and archived for historical review.

The current incarnation of this evolution is what we call the weblog.   A weblog is still nothing more than an "about" page, news, articles, and forums; it has just evolved a few new facets and appendages to empower users to interact in new (and hopefully better) ways.   Today, we can hardly imagine a web site without at least one RSS feed.  In fact, most web sites today (nearly) completely revolve around their news & announcements and the related RSS feed.  Yet, we must remind ourselves that the RSS feed is useless by itself.  It is an evolution FROM a web site, not an evolution OF a web site.  It is nothing more than an alternative delivery vehicle for information, not necessarily a replacement for the weblog (read: web site).

However, too many blogs today look like nothing more than an RSS Feed transformed via XSLT.  In fact, I bet quite a few are exactly that.   As a result, they look less like a home page, and more like a laundry-list.  Yeah, most sites have a banner and some rudimentary navigation, but I'm talking about what happens below the first 6 inches of the site.

Which leads me back to the title of this article...

A web site is supposed to be the virtual equivalent of a store-front window or your home's front lawn.  A weblog is supposed to be like the entryway, and an RSS feed is supposed to add value to an already valuable web site, much like putting a sign outside the store to advertise your current specials.  An RSS feed is not supposed to be your entire web site, nor is your web site supposed to be turned directly into an RSS feed.   Sure, there are some nifty Web 2.0 uses for RSS, but don't get so busy focusing on your RSS feed that you forget about why it's there in the first place - to drive visitors toward your web site.

RSS is just a tool, and like any other tool, it can be abused and overused.




Reporting Services administration changes in Katmai (v.Next)

Tue, 24 Apr 2007 17:02:23 GMT

Here is some information on changes they are considering for how you will administer Sql Server Reporting Services in the next version, codenamed Katmai.

Right now, administering Report Models exposed to Report Builder requires you to launch the Sql Server Management Studio tool, while other features require you to launch the Report Manager website.   Also, there are some features that you rarely use, yet are exposed from the Report Manager portal, such as Job Management and system-wide Role & Security configuration.

It appears that the end result of the proposed tool changes will be to correct these inconsistencies by consolidating server and system-wide configuration and administration tasks into Sql Server Management Studio, and moving some of the more user-facing admin features to the Report Manager.

Not a bad idea overall, now I just hope they fix support for FormsAuth throughout the entire solution (ReportBuilder, nudge nudge).




Reserve judgement lest thou be judged too...

Tue, 24 Apr 2007 00:34:00 GMT

After submitting my last post (rant), I re-read it and a thought occurred to me.  Maybe I too should show humility and take the optimistic view that the developers of these projects really did have good reasons for their reinventions and innovations.

Perhaps somewhere in this world is a developer looking at old code that I wrote, saying "WTF!?!?".   I'm sure I too had a reason...




Embrace the framework!

Tue, 24 Apr 2007 00:14:00 GMT

House of Babel

Sometimes I get involved with a .NET project where I find code that is just so "different" from the norm that I wonder how it got to such an extreme state: code that reinvents Configuration, Threading, File IO, DataAccess, or sometimes even primitive data types.   It strikes me as a form of extreme arrogance that a developer could find even the most fundamental building blocks incapable of being used.  In the end, I find myself babbling to myself incoherently as I try to trace, diagram, and divine the secrets of these "innovative" implementations.

Necessary Abstraction?

Take the Mono project, for example.   Even though its intent is to create a parallel framework capable of hosting .NET applications, the underlying API implementation often deviates to a very large degree from the Microsoft implementation.    This isn't surprising, since most of this work is to support abstracting away all Windows platform details (GDI to GTK+, et al) so that multiple platforms may be supported.  This type of abstraction is inevitable and required; no arrogance, it's simply abstraction justified by necessity (for the most part).

However, the projects I refer to above are typically not targeting multiple platforms, nor do they appear to need such extreme abstraction.  In many cases, it's just abstraction for the sake of abstraction.   In others, it's an overzealous attempt to solve a problem that could have been overcome simply by overriding existing behavior or plugging in your own custom modules.

Why do people insist upon doing this?

I'm at a total loss for words. 

Maybe I'm just missing out on the subtle joy of NIH syndrome...

A Pragmatic Philosophy

My philosophy towards development is simple: embrace the framework, question everything, adopt the good, extend any limits, reject the evil, and finally consider reinventing or reimplementing a new framework only when all other methods have failed and it's not possible to do anything else.

Even in the end, when all else fails, you shouldn't necessarily reinvent the framework yourself, since there are a plethora of solutions (often well tested and open source) to nigh every problem if you go visit Google for a few minutes.   Throughout your analysis you should take on an attitude of humility, assuming first and foremost that it is your own lack of experience and understanding that prevents the framework from functioning, not an inherent design flaw.     Only once you can prove a limitation or failure via code should you consider rejecting the framework.




Find your Sql Server Version and Edition

Tue, 17 Apr 2007 01:49:00 GMT

When working with Sql Reporting Services or other features that are version-specific, I often need to know what version of Sql Server is running on the target server.

For Sql 2000 and newer, you can simply query:

SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY ('productlevel'), SERVERPROPERTY ('edition')

For Sql 7.0 and earlier, you can query:

SELECT @@VERSION

If the results aren't obvious, you can see this KB article for details on what each result means:
http://support.microsoft.com/kb/321185
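
If you need the same check from application code, here is a minimal sketch (the connection string is just a placeholder):

    using System;
    using System.Data.SqlClient;

    class SqlVersionCheck
    {
        static void Main()
        {
            // Placeholder connection string; point it at the server you want to inspect.
            const string connStr = "Data Source=.;Integrated Security=true";

            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(
                "SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY('productlevel'), SERVERPROPERTY('edition')",
                conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    if (reader.Read())
                    {
                        Console.WriteLine("Version: {0}, Level: {1}, Edition: {2}",
                            reader[0], reader[1], reader[2]);
                    }
                }
            }
        }
    }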