Subscribe: Jeff Perrin - Sexier Than You Are
http://devlicio.us/blogs/jeff_perrin/rss.aspx

Jeff Perrin - Sexier Than You Are
Development Methodology "Renovation"

Sun, 01 Nov 2009 03:18:00 GMT

I was watching Holmes on Homes today and got to thinking about parallels (or lack of them) between renovating and building a home vs. developing a software application. Some of what I was thinking ties in with the "software development as craftsmanship" discussion that pops up every now and then, but mainly this is just a brain dump.

First off, let's set this up. What I'm thinking is that a "software application" maps directly to a "home renovation project," or just actually building a new home. Here are some of the obvious parallels:

  • In a home building project you have a general contractor who oversees the entire operation, hiring other contractors or bringing in employees as needed. Software projects will generally have some sort of equivalent "lead/architect" role.
  • Both projects have clients that most likely have no idea what's going on. They see the end result and have no way of knowing whether that result will last 2 months or 20 years. Generally, as long as the finished product looks pretty and seems to do what's required, the developers/renovators will get paid.
  • Size matters in both professions. Anybody can hack their way through installing new countertops or building a simple CRUD web app, but the bigger and more complex the problem gets, the more professionalism and care is required. I'll take a calculated leap and say that many of the problem projects in both professions happen because somebody who can "install a new faucet" gets it in their heads that they can also "renovate the entire basement" using the same amount of knowledge and preparation.

I'm sure there are a few more parallels, but you get the idea. Now let's focus on some differences...

A general contractor doing a home renovation will bring in "specialists" to work on individual pieces of the project, whereas many software projects are made up of "generalists". As programmers, we are often tasked with developing business logic, writing HTML, security, usability, system administration, deployment, testing, etc. Just look at the range of what's required in the average job posting for a software developer. This is obviously not true in all cases, but in general this is what I think is happening out there.

The home building industry has these crazy things called "inspections" that are supposed to ensure the builders don't pull some crazy shit that could endanger the client. Imagine code inspections (by an honest inspector, we hope) and how they could protect a client... An inspector could obviously not be expected to know if the application is running a calculation correctly, but that is the one thing that a client can confirm. An inspector could, however, check for signs of incompetence such as SQL injection vulnerabilities, obvious spaghetti code, code duplication, lack of tests, poorly designed databases, mangled HTML, obvious usability disasters and so on. Once notified of the issues, the client would be able to take action as they see fit.

My thinking is that, at least in my own experience, the best project I've worked on (in terms of happiness of the client and its likelihood of being maintainable for years to come) was the project that was most similar to a successful home reno project. We had a UI specialist, a bunch of developers focused mainly on the business logic, and testers. Other "specialists" were brought in as needed for database tuning and system admin type work. An "inspector" even came to the project at one stage. This inspector was Eric Evans, who showed up and observed the project for a week and delivered a report on what was good and what could be improved. This visit seemed to be largely positive (I showed up a few months after he came, so I can't know for sure) in the sense that it gave the development team some reassurance and some focus on what could be improved.
I'm not sure what the clients got out of it. I'm wondering if working on a new project with t[...]



Just Be Honest and Tell the Truth

Thu, 25 Jun 2009 03:27:00 GMT

I just read Ron Jeffries' latest post, entitled My Named Cloud is Better Than Your Named Cloud, and it got me riled up enough to post something I've been meaning to write about for at least a couple of years. His post touches on the point I'd wanted to make, but doesn't quite say it as simply as I think it can be said. Here's what I'm thinking: If we could just always be honest with ourselves, as people, team members and organizations, software development wouldn't be that hard.

There. Simple. Think about it for a second. All this stuff that we label and group together under methodologies and processes is really there so that we can do One Thing. This one thing, I'm assuming, is usually to create software that fulfills a need.

Let's pretend that a software team has been assembled to create an application that fulfills such a need. They've chosen to use the XP methodology while writing this application. I'll go through some of the tenets of XP and run them through my honesty filter:

  • Pair programming: The team realizes that people will come and go on a project for good (quit, fired, etc.), or for short periods of time (maternity leave, vacation, illness, etc.). They want to minimize the risk, so they make sure everyone is familiar with every piece of the code base. They also realize that sometimes even the best developers do stupid shit for no good reason, and believe that having two sets of eyes on the screen at all times will lessen the likelihood of this happening.
  • TDD: The team realizes that writing software is hard, and that stuff that worked last week will have to be changed this week. Since they want to ensure that stuff they worked on earlier still works when they make these changes, they write tests, and ensure that no code gets checked in unless all the tests are passing. They also hope that these tests provide some form of documentation to any developers (and perhaps users/clients) who may come later or who never worked on the feature originally.
  • Incremental design: The team realizes that there are unforeseen forces that may threaten a project at any given time. A big project entails a large amount of risk for both clients and developers. If features can be developed in incremental fashion, hopefully in an order representing the importance of each feature, then risks can be mitigated. The team is always working with a usable application, so that if something comes up that stops the project, at least the client will have something to work with.

Now obviously there is more to XP than what I've listed. The point is that those three things exist to handle a need that most teams, if they are really honest with themselves, have:

  • Knowledge transfer. Nobody wants to have to rely on one person to get something done. Sooner or later, this always bites us in the ass. Pairing is one way to handle this.
  • Developed features continue working. Nothing is more frustrating (to users and developers) than having something that used to work perfectly stop working. Testing is one way to handle this.
  • The need to guarantee that something will come out of all the money we're spending. What happens if the development shop you've contracted goes bankrupt before they've finished your application? Incremental design is one way to handle this.

Listen. We have methodologies and processes for a reason (I hope). Some of these processes may work for you. However, maybe only some parts of a process work for you. The point is, it's not about the process. If you can understand why you're using a particular piece of a process, you can assess whether it's useful for your team. Who cares whether you're doing XP, Scrum, Lean, Kanban, Waterfall, or Hack 'n' Slash. Those are just names. Identify your pain points, problems, worries, etc., and try to fix them. Honestly.[...]



Self Conscious Development

Sat, 05 Apr 2008 05:23:00 GMT

Write code as if you care what others think about what you’ve written.



What I Learned From X That Makes Me a Better Programmer in Y

Thu, 04 Oct 2007 06:30:37 GMT

Reginald Braithwaite says he'd love to hear stories about how programmers learned concepts from one language that made them better in another. This pretty neatly coincides with a post I've been meaning to make for months, so I might as well just get on with it and write something (because as CHart reminded me, I haven't even posted for months).

Sometime around late 2004 - early 2005 I heard about Ruby on Rails for the first time. I'd never really programmed in any languages but Java/C#/PHP before, but I'd read posts by guys like Sam Ruby and Martin Fowler about how Ruby the language was really expressive and compact. However, it wasn't until Rails started getting some buzz that I really looked at any Ruby code and tried to decipher what it was doing. Rails put Ruby within a frame of reference that I was very familiar with (web development), allowing me to easily contrast the "Ruby Way" with the .NET/Java way I was familiar with.

The first thing that really caught my eye was the extensive usage of blocks, or anonymous methods. Coming from Java/C#, I had a hard time deciphering what was really going on when I saw something like this in Ruby code:

    list.find_all{ |item| item.is_interesting }

It was pretty easy to see what the end result should be, but how does it actually work? All I knew was that a simple one-liner in Ruby seemed to balloon into this in Java:

    List<Item> interesting = new ArrayList<Item>();
    for(Item item : items){
        if(item.isInteresting()){
            interesting.add(item);
        }
    }

Sometime later, a pattern was introduced into the Java project I'm currently working on by another developer. This pattern seemed to accomplish roughly the same thing as the Ruby example (conceptually; there was still a lot of code in the Java version):

    new Finder(list, new InterestingItemSpecification()).find();

Astute readers might recognize this as a variation on the Specification Pattern I'd written about almost a year ago. The point of this pattern is to allow the developer to specify how to filter a list of items, rather than manually iterating over the list by themselves. Never mind the fact that doing this in Java requires as many lines as the standard for-loop example... It's the concept of telling the list what you want, rather than looping through manually to take what you want, that's interesting here.

I eventually created a sub-class of Java's ArrayList that allowed it to be filtered directly, just like Ruby arrays and C#'s generic list class. Now the code ended up looking like this:

    list.where(new InterestingItemSpecification());

Once I got this far, things really started to fall into place. I started to see duplication everywhere. Hundreds of methods (it's a pretty large project) that selected slightly different things from the same lists, the only difference lying in a little if clause. I started deleting entire methods and replacing them with Specifications. Booyah.

Then I started seeing other patterns.

Accumulation/Reduction/Folding:

    public BigDecimal getTotal(){
        BigDecimal total = BigDecimal.ZERO;
        for(Item item : getItems()){
            total = total.add(item.getSubTotal());
        }
        return total;
    }

Mapping/Conversion:

    public List getConvertedList(){
        List converted = new ArrayList();
        for(Item item : items){
            converted.add(item.getAnotherObject());
        }
        return converted;
    }

Applying actions/commands to each item:

    public void calculate(){
        for(Item item : items){
            item.calculate();
        }
    }

For each of these common informal patterns I was able to create a formal method for accomplishing the same thing. The goal became to distill each method down to just the part that made it different from another method. The act of iterating a list is boring, boilerplate noise that just doesn't have to be there. Here's the end result:

Accumulate:

    list.reduce(new SumItemCommand());

Map:

    list.convert(new ItemToThingConverter());

Actions:

    list.forEach(new DoSomethingToIt[...]
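Those formal methods can be sketched in pre-lambda Java along these lines. The names here (FunctionalList, Specification, Converter) are my own stand-ins for illustration, not the actual code from the project:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical filter/map interfaces in the pre-Java-8 style the post describes.
interface Specification<T> { boolean isSatisfiedBy(T item); }
interface Converter<T, R> { R convert(T item); }

// A list subclass that can be told what you want, instead of being looped over.
class FunctionalList<T> extends ArrayList<T> {
    // Filtering: keep only the items the specification accepts.
    public FunctionalList<T> where(Specification<T> spec) {
        FunctionalList<T> result = new FunctionalList<T>();
        for (T item : this) {
            if (spec.isSatisfiedBy(item)) result.add(item);
        }
        return result;
    }

    // Mapping: convert each item to another object.
    public <R> FunctionalList<R> convert(Converter<T, R> converter) {
        FunctionalList<R> result = new FunctionalList<R>();
        for (T item : this) result.add(converter.convert(item));
        return result;
    }
}

public class Demo {
    public static void main(String[] args) {
        FunctionalList<Integer> numbers = new FunctionalList<Integer>();
        numbers.add(1); numbers.add(2); numbers.add(3); numbers.add(4);
        List<Integer> evens = numbers.where(new Specification<Integer>() {
            public boolean isSatisfiedBy(Integer n) { return n % 2 == 0; }
        });
        System.out.println(evens); // [2, 4]
    }
}
```

The anonymous-class ceremony around each Specification is exactly the noise that C# delegates (and, much later, Java lambdas) remove.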



Arguments You’ll Almost Never Hear

Thu, 07 Jun 2007 06:35:45 GMT

Here's a bit of a cheeky question...

You know all those "conversations" us nerds have about scalability and performance where we endlessly debate about where to put business logic and whether scaling the database is easier than scaling the application servers? Well, how come we never end up talking about how to make arguably the most costly (in terms of both time and $$$) operation of our applications perform better?

The costly operation I'm talking about is the journey our markup makes from the web server to the browser. It's funny, because we'll architect fantastic applications, and then shove absolutely bloated junk markup across the vast, unreliable Internet without a second thought. That *** costs money too... (I'm talking about bandwidth). And it's code that's visible to the world.




Obscuring HTTP

Tue, 05 Jun 2007 07:09:00 GMT

Ayende has tried to explain why he doesn't like ASP.NET Webforms many times, but based on the comments that pop up on his posts I'm not sure if he's successfully getting his point across. I'll try to help him out in this instance, as I think the same way about not just Webforms, but most other view technologies. This will take more than one post, however, so hopefully I can convince myself to increase my stunning post frequency of the past year in order to properly delve into this issue.

First off, let's take a paragraph or two for a brief refresher on HTTP, the protocol that drives the Web as we know it. This will be quick, and I guarantee it will be dirty...

The HTTP protocol

HTTP is based on a request/response model between a client and a server. The client is assumed to be a web browser in this instance (but can be anything, really), and the server is a central location (IP address, domain, URI, etc.) on the Internet that responds to requests made by the client(s). Responses are generally sent back as HTML documents, but can also be XML, JSON or anything else, really. Each response tells the client what format it is sending via the Content-Type response header. There are many other response headers that provide clues to each client as to what it should do with the body of the response.

When a client makes a request to an endpoint, it specifies a verb that provides a clue to the server as to what the client wants it to do. These verbs are as follows:

  • GET - Means the client is simply requesting to retrieve whatever information resides at the endpoint URI.
  • POST - Used to send information to the server, which the server can then use to do things like update a domain object. When a POST request is completed by the server, it can send a response back in the form of 200 (OK) or 204 (No Content), or more likely a redirect to another URI.
  • PUT - Rarely used and not very well supported, this verb is similar to POST in that the client sends information in the request that it expects the server to act on.
  • DELETE - Also rarely used, this one is pretty self-explanatory. The expectation of the client is that the requested resource will be deleted.

The modern web generally just uses the first two verbs (GET and POST) to get things done, although the latest version of Rails fakes out the PUT and DELETE verbs to more closely match the intended spirit of HTTP. One thing you may notice is that GET, POST, PUT and DELETE look an awful lot like CREATE, READ, UPDATE and DELETE, but that's a "coincidence" for another post.

The way this stuff all gets mashed together to create a usable application on the web is only slightly complicated at the lowest level. In a common use case, a user makes a GET request (through a browser) to a URI that returns an HTML response. The browser then displays the HTML to the user. If the HTML response contains a FORM element, well, that's an invitation to the user to change the state of data on the server in some way (maybe by adding a new post to a blog via a few text boxes). When the user clicks the submit button, a POST request is sent to the server that contains all the text the user entered in the HTML textboxes. Once the server receives the request, it's up to the application that drives it to figure out what to do with the data sent by the client. I hope I haven't lost everyone yet, because I swear there's some sort of profound punchline to be found here.

The Quest For a Simpler Rube Goldberg Machine

Now, I'm sure we can all agree at this point that HTTP is pretty simple. Clients make requests using a verb that may or may not contain data, and the server responds back to the client in whatever way it deems appropriate. The issue that Ayende and I have with Webforms (and Struts, and other view frameworks) is that they take something simple and try to make it different.
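Just how simple the surface of the protocol is can be shown in a few lines. This toy parser (my own illustration, not a real server) splits an HTTP request line into the verb, the endpoint URI, and the protocol version:

```java
// Toy illustration of the verb + URI structure of an HTTP request line.
// Not a real HTTP implementation -- just the shape of the first line of a request.
public class RequestLine {
    final String verb, uri, version;

    RequestLine(String raw) {
        String[] parts = raw.trim().split(" ");
        this.verb = parts[0];     // e.g. GET, POST, PUT, DELETE
        this.uri = parts[1];      // the endpoint being addressed
        this.version = parts[2];  // e.g. HTTP/1.1
    }

    public static void main(String[] args) {
        RequestLine r = new RequestLine("GET /posts/42 HTTP/1.1");
        System.out.println(r.verb + " " + r.uri); // GET /posts/42
    }
}
```

Everything else (headers, body, response codes) hangs off that one verb-plus-URI pair, which is the simplicity the view frameworks end up obscuring.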
In the case of Webforms, Microsoft has tried to [...]



Persistence Ignorance

Thu, 15 Mar 2007 04:53:00 GMT

People are talking about Microsoft's Entity framework and how it does not currently allow persistence ignorant domain objects.

I've been torn about this issue for a while now. On the one hand, having an O/R mapper that is persistence ignorant essentially means that it has to support XML mapping files. The downside to this approach is duplication of each entity's properties (which leads to managing them in multiple places), having to edit and maintain these files, and not being able to see mapping information all in one place. This price is often worth it, though.

On the other hand, using attributes to specify mapping information leads to less "code" to manage, and the advantage of having your domain class and mapping information all in one location. The price is that your domain objects have to know about the persistence framework.

The one thing I've observed recently is that most of the Java developers I've talked to who've used Hibernate in the past are excited and relieved that the latest versions support annotations (attributes in .NET) for specifying mapping information. Most of them seem to dislike mapping via XML files, and feel that the price of using annotations is worth it.
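To see why annotations feel lighter, here's a minimal sketch of the mechanism. The @Column annotation below is a hypothetical stand-in for Hibernate/JPA's mapping annotations, just to show the point: the mapping metadata lives on the class itself, and the framework can read it back via reflection instead of consulting a separate XML file:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Hypothetical mapping annotation standing in for Hibernate/JPA's @Column.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Column { String name(); }

// The entity and its mapping live in one place -- no separate XML file to maintain.
class Product {
    @Column(name = "product_name") private String name;
    @Column(name = "unit_price")   private double price;
}

public class MappingDump {
    public static void main(String[] args) {
        // What an O/R mapper effectively does at startup: read the mapping off the class.
        for (Field f : Product.class.getDeclaredFields()) {
            Column c = f.getAnnotation(Column.class);
            if (c != null) {
                System.out.println(f.getName() + " -> " + c.name());
            }
        }
    }
}
```

The trade-off is visible right in the sketch: the Product class now depends on the mapping annotation, which is exactly the persistence-ignorance objection.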

It's too bad for Microsoft that NHibernate already supports both methods, so they'll have to as well if they want to keep up.




Jeff in Java Land

Sun, 31 Dec 2006 03:18:00 GMT

Jeremy over at CodeBetter.com just wrote a post about his experiences in Java Land, and since I've been working on a Java project for over a year and a half now (with at least another year to go), I thought I'd share my thoughts as well.

Before I get into this, I'll divulge some info about myself. I learned Java in school, but had never worked on a real Java project before this one. The majority of my previous projects had been in C#. The Java project I'm currently working on is large, with over 35 developers. We are practicing Domain-Driven Design and TDD, with Eclipse as our IDE.

  • First off, as Jeremy states, the language difference is negligible. It took me a day or two to get back up to speed.
  • Eclipse absolutely destroys Visual Studio as a pure code editor. Java as a language is quite verbose, but Eclipse makes this moot by literally typing 90% of my code for me. It creates classes, interfaces, methods and everything else I'd want, in addition to providing complex refactoring tools out of the box. Its CVS integration is seamless, out of the box. Its TDD integration is seamless (again, out of the box). I've patched together a similar feature set in VS with Resharper and AnkhSVN, but the fact is, the total feature set is still not up to par, and Eclipse is free.
  • My personal opinion is that C# is a superior language. I posted why I think this on my personal blog. The biggest thing I miss when coding in Java is C#'s delegates; I feel like there are a bunch of icky parts in our Java architecture that would look so much sexier if I could just pass around a function.
  • Java's class library is a bit of a mess. As Jeremy states, the date/time handling parts are especially ugly. This is probably because Java's been around longer and has to keep old APIs around for backwards compatibility, while C# has had the benefit of learning from Java's mistakes.
  • Checked exceptions are messy. They may be good when you're working with another developer's API, but when you're working within your own domain they're just noise. We have a massive domain in our project, but the points in the code where we actually care about checked exceptions are... I'll say non-existent. There are points where we look for a specific exception but they are few and far between, and could probably have been designed around in the first place.
  • Java's Generics implementation is pure compiler fluff. There's no support at the runtime level like in C#, so even if you create a generic list of Products, you can still add an object that is not a product to that list. The only real reason to use them for lists is so you can get the Java equivalent of C#'s foreach construct to work.
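A quick sketch of what erasure means in practice (my own example): this compiles and runs without complaint, which is exactly what C#'s reified generics would reject.

```java
import java.util.ArrayList;
import java.util.List;

// Java generics are erased at runtime: a raw reference to a List<String>
// happily accepts any object, because no type check exists at the VM level.
public class ErasureDemo {
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        List<String> strings = new ArrayList<String>();
        List raw = strings;           // raw view of the very same list
        raw.add(Integer.valueOf(42)); // compiles and runs: no runtime type check
        System.out.println(strings.size()); // 1 -- the Integer is in there
        // Reading strings.get(0) as a String would now throw ClassCastException.
    }
}
```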

All in all, it should be apparent that I really like where C# is going, especially in 3.0. I get frustrated with the verbosity of Java at times, but for the most part, the utter superiority of Eclipse more than makes up for it. Eclipse plus Java makes me more productive than VS plus Resharper plus AnkhSVN.




The Specification Pattern

Thu, 14 Dec 2006 05:34:00 GMT

Hot on the heels of my devastatingly fantastic post on an implementation of the Snapshot Pattern, I give you my next pièce de résistance. In this little post, I'd like to delve into the Specification Pattern. So what the heck is it? Matt Berther provided a pretty good introduction, where he states: "Its primary use is to select a subset of objects based on some criteria..."

That pretty much sums it up. What we want to do is extract out a specification for a subset of objects we might be interested in. We do this by creating specification objects. You'll see something like this in a lot of applications:

    public List<Product> GetHighPricedSaleProducts(){
        List<Product> list = new List<Product>();
        foreach(Product p in Products){
            if(p.IsOnSale && p.Price > 100.0){
                list.Add(p);
            }
        }
        return list;
    }

This is fine in small doses, but your definition of a highly priced sale product might change over time, and we want to avoid having our logic for what IsOnSale and what a highly priced object actually is sprinkled throughout our code. One way to avoid this is to extract our logic into a Specification object like so:

    public class ProductOnSaleSpecification : Specification {
        public override bool IsSatisfiedBy(Product product) {
            return product.IsOnSale;
        }
    }

Now the first loop I wrote can be written like so:

    public List<Product> GetHighPricedSaleProducts(){
        List<Product> list = new List<Product>();
        foreach(Product p in Products){
            if(new ProductOnSaleSpecification().IsSatisfiedBy(p) && p.Price > 100.0){
                list.Add(p);
            }
        }
        return list;
    }

This is a slight improvement... There's actually more code to write, but now we can separately unit test each specification we create, without worrying about the loop:

    [Test]
    public void TestIsSatisfiedBy_ProductOnSaleSpecification() {
        bool isOnSale = true;
        Product saleProduct = new Product(isOnSale);
        Product notOnSaleProduct = new Product(!isOnSale);
        ProductOnSaleSpecification spec = new ProductOnSaleSpecification();
        Assert.IsTrue(spec.IsSatisfiedBy(saleProduct));
        Assert.IsFalse(spec.IsSatisfiedBy(notOnSaleProduct));
    }

Ok, so that's interesting, but we haven't even gone halfway here. Why don't we refine that loop I wrote to use the new Generic collections in .NET 2.0:

    public List<Product> GetSaleProducts(){
        return Products.FindAll(new ProductOnSaleSpecification().IsSatisfiedBy);
    }

Wow, now there's some serious savings on lines of code. "But you're missing the bit about the high priced products from the first example!?!" I hear you saying. Fear not, let's extract that into another specification like so:

    public class ProductPriceGreaterThanSpecification : Specification {
        private readonly double _price;

        public ProductPriceGreaterThanSpecification(double price) {
            _price = price;
        }

        public override bool IsSatisfiedBy(Product product) {
            return product.Price > _price;
        }
    }

We're still left with one problem, though. How do we tell the generic list of all products that we want the products that are both on sale and over a certain price? Let's try extracting our functionality into a Specification superclass first. This is what our ProductOnSaleSpecification and ProductPriceGreaterThanSpecification will inherit from. Once that's over with, we can create a CompositeSpecification, which is abstract, and allows us to pass in the left and right sides of a specification "equation." We can then implement yet another subclass (this time of CompositeSpecification) that we'll call AndSpecification. Here it is:

    public class AndSpecification : CompositeSpecification {
        public AndSpecification(Specification leftSide, Specification rightSide)
            : base(leftSide, rightSide) {}

        public override bool[...]
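For comparison, here is a compact, self-contained Java rendition of the same composite-specification idea. Class and method names here are my own assumptions, not taken from the original post's C# code:

```java
import java.util.ArrayList;
import java.util.List;

// Base specification: one yes/no question about a candidate object.
abstract class Specification<T> {
    public abstract boolean isSatisfiedBy(T candidate);

    // Composition: build "this AND other" without writing a new loop.
    public Specification<T> and(Specification<T> other) {
        return new AndSpecification<T>(this, other);
    }
}

// The composite: satisfied only when both sides are.
class AndSpecification<T> extends Specification<T> {
    private final Specification<T> left, right;
    AndSpecification(Specification<T> left, Specification<T> right) {
        this.left = left; this.right = right;
    }
    public boolean isSatisfiedBy(T candidate) {
        return left.isSatisfiedBy(candidate) && right.isSatisfiedBy(candidate);
    }
}

class Product {
    final boolean onSale; final double price;
    Product(boolean onSale, double price) { this.onSale = onSale; this.price = price; }
}

class OnSale extends Specification<Product> {
    public boolean isSatisfiedBy(Product p) { return p.onSale; }
}

class PriceGreaterThan extends Specification<Product> {
    private final double threshold;
    PriceGreaterThan(double threshold) { this.threshold = threshold; }
    public boolean isSatisfiedBy(Product p) { return p.price > threshold; }
}

public class SpecDemo {
    // The one loop in the system: every query is just a different Specification.
    static List<Product> where(List<Product> all, Specification<Product> spec) {
        List<Product> out = new ArrayList<Product>();
        for (Product p : all) if (spec.isSatisfiedBy(p)) out.add(p);
        return out;
    }

    public static void main(String[] args) {
        List<Product> products = new ArrayList<Product>();
        products.add(new Product(true, 150.0));
        products.add(new Product(true, 50.0));
        products.add(new Product(false, 200.0));
        Specification<Product> highPricedSale = new OnSale().and(new PriceGreaterThan(100.0));
        System.out.println(where(products, highPricedSale).size()); // 1
    }
}
```

The payoff is the same as in the C# version: each specification is testable in isolation, and the "high priced sale product" rule lives in exactly one place.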



A Modified Snapshot Pattern

Fri, 17 Nov 2006 04:33:00 GMT

I thought I'd share a solution and ideas around a problem that seems to come up from time to time in past systems I've built. I'll walk you through the implementation of a modified Snapshot Pattern in C#.

First off, what is the Snapshot pattern, and why would we use it? Put simply, a Snapshot is the concept of giving date/time context to an object, so that a current time period is factored in to any queries or commands we send to it. Here's an example of a simple problem that it can solve: Say we have a Product object, which generically represents something that a person can purchase from an online store. A Product has both Name and Price properties. The problem is that the Price of a Product can change over time. We want to be able to ask the Product its Price for any possible date, and have it return the Price as it stood at that time. We do not want to have to pass around a DateTime parameter all over the place, however. The following diagram shows us how we want the code to end up looking:

There are three main objects that form the backbone of our modified Snapshot pattern:

  • Master: This object is persistent, and holds any data that does not change over time. In our little example with Products, the Name property would hang off a Master class. The Master holds a list of many Details, which I explain next...
  • Detail: This object is also persistent; however, it holds any data that may change over time. This is where our Price property lives. You'll notice that the Detail contains StartDate and ExpiryDate properties, which represent the time period during which it is valid.
  • Snapshot: This is a non-persistent object that holds exactly one Master and one Detail. The Snapshot object delegates its calls to persistent data to both the Master and Detail of which it is composed. It also holds all our business logic.

Our code for the Product scenario ends up looking like so:

ProductMaster.cs

    public class ProductMaster : Master {
        private string _name;

        // Convenience overload; the CreateSnapshot(DateTime) method resolves
        // the Detail that is valid on the given date.
        public Product CreateSnapshot() {
            return CreateSnapshot(DateTime.Today);
        }

        public string Name {
            get { return _name; }
            set { _name = value; }
        }
    }

ProductDetail.cs

    public class ProductDetail : Detail {
        private double _price;

        public ProductDetail(DateTime start, DateTime expiry) : base(start, expiry) { }

        public double Price {
            get { return _price; }
            set { _price = value; }
        }
    }

Product.cs

    public class Product : Snapshot {
        public double Price {
            get { return Detail.Price; }
        }

        public string Name {
            get { return Master.Name; }
        }
    }

The important thing to remember is that within our domain we only want to work with the Snapshot implementation, not the Master or the Detail. Once we've constructed a Snapshot using the Master.CreateSnapshot(date) method, we're good to go.

Hopefully this makes sense. I've used this pattern in two projects now, and very heavily on my current project. It helps to clean up the code base by removing a whole bunch of GetPrice(date) type methods, and frees the developer from having to think about time when utilizing an object. It does require some discipline, however, to ensure you don't get business logic creeping in to the Master and Detail classes. I've also included the code, with some unit tests that you can download. [...]
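As a rough illustration of the moving parts, the same Master/Detail/Snapshot trio can be sketched in Java. The names and the date-lookup logic below are my own assumptions, not the downloadable code from the post:

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Detail: persistent, time-dependent data (the Price), valid for [start, expiry).
class ProductDetail {
    final LocalDate start, expiry;
    final double price;
    ProductDetail(LocalDate start, LocalDate expiry, double price) {
        this.start = start; this.expiry = expiry; this.price = price;
    }
    boolean isValidOn(LocalDate date) {
        return !date.isBefore(start) && date.isBefore(expiry);
    }
}

// Master: persistent, time-independent data (the Name) plus all the Details.
class ProductMaster {
    final String name;
    final List<ProductDetail> details = new ArrayList<ProductDetail>();
    ProductMaster(String name) { this.name = name; }

    // Pick the detail in force on the given date and wrap it, with the master,
    // in a Snapshot; the rest of the domain only ever sees the Snapshot.
    Product createSnapshot(LocalDate date) {
        for (ProductDetail d : details) {
            if (d.isValidOn(date)) return new Product(this, d);
        }
        throw new IllegalStateException("no detail valid on " + date);
    }
}

// Snapshot: non-persistent; delegates to exactly one master and one detail,
// so callers never pass dates around after construction.
class Product {
    private final ProductMaster master;
    private final ProductDetail detail;
    Product(ProductMaster master, ProductDetail detail) {
        this.master = master; this.detail = detail;
    }
    String getName() { return master.name; }
    double getPrice() { return detail.price; }
}

public class SnapshotDemo {
    public static void main(String[] args) {
        ProductMaster widget = new ProductMaster("Widget");
        widget.details.add(new ProductDetail(
            LocalDate.of(2006, 1, 1), LocalDate.of(2006, 7, 1), 9.99));
        widget.details.add(new ProductDetail(
            LocalDate.of(2006, 7, 1), LocalDate.of(2007, 1, 1), 12.99));
        Product june = widget.createSnapshot(LocalDate.of(2006, 6, 15));
        System.out.println(june.getName() + " cost " + june.getPrice()); // Widget cost 9.99
    }
}
```

Once the Snapshot is constructed, price queries carry no date parameter, which is the whole point of the pattern.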






Thoughts on Code in ASPX Pages

Thu, 26 Oct 2006 03:50:00 GMT

This post was inspired by a comment I saw on another person's blog (which I couldn't find for the life of me, so no linky for them). The comment was in regards to Ruby on Rails' use of pure Ruby code in the .rhtml view files, and how this was seen as a bad thing. I've written a similar post on my personal blog, but I thought my point was worth bringing up again.

How many of you faithful devlicio.us readers would cringe at the sight of the following code snippet embedded within an aspx page?

	<% foreach( Item item in LineItems ){ %>
	    <%= item.Name %>
	<% } %>

At first glance, it looks bad because sometime in the past we were told that embedding business logic in the view is bad. But I'd urge you to look again. Is there actually any business logic in the above snippet? There's none, actually. We're just iterating over a list of Items and printing a property out to the rendered markup. We could accomplish the same thing with a repeater like so:

	<asp:Repeater runat="server">
	    <ItemTemplate>
	        <%# Eval("Name") %>
	    </ItemTemplate>
	</asp:Repeater>
Which one is easier to read? It's pretty subjective... The repeater declaration happens to look just like HTML or XML markup, so it blends in nicely with any other markup that may appear in our page. This blending can be seen as a good thing, or a bad thing if you like to be able to instantly see any dynamic stuff at a glance. The problem with repeaters is that it can be hard to see exactly what you're iterating over without also looking into the code-behind. The plus side is that you get nice support for alternating item templates, which can quickly make an embedded foreach look ugly.

The real question is, does using a control like the repeater prevent you from sticking business logic in your views? The answer is no, it doesn't.

Just something to think about.




Know Yourself

Tue, 24 Oct 2006 08:31:00 GMT

A couple of months ago I commented on Eric Wise's post titled Know Your Role, wherein Eric expounded on the need for a DBA on software projects. I took exception to the rule that DBAs are required, which sparked several comments and blog posts that seemed to fall just short of calling me an idiot for thinking this way. I don't have any real objections to having my ideas questioned... I'm not arrogant enough to think that I know everything. So I took a step back and thought about it for a while, and here's what I came up with.

I believe that one of the only absolute truths that can be stated is that there are no absolutes. Very rarely is there a clear-cut line drawn in the sand. One of my comments drew upon the experience I have gained as a developer on a large project. We have no formal DBA who controls or greatly influences our database design. We do have lots of developers with a great deal of experience designing systems, as well as several who were DBAs or database programmers in past lives. I asked around a bit, trying to get a feel for what the actual SQL gurus thought of our database design. The general consensus was that it was pretty good. Not perfect, but better than most.

Along the way, I did hear a few good stories about DBAs, which may account for the developer comments paraphrased in Eric's post ("DBAs only get in the way" and "I'm not doing anything fancy, I don't need no stinking DBA"). In the minds of some of the developers I talked to, the term DBA is a synonym for difficult. I heard tales of entire database designs being entrusted to one sacred individual who was the only person authorized to make changes because someone, somewhere decided that they knew what was best. Tales of insane hacks being made to the application code to work around a schema that couldn't be changed, yet no longer represented the business. I'm guessing that this extreme is what has so many developers spooked. Now I'm sure someone can point out the inverse scenario... You'll definitely find several if you browse through TheDailyWtf.com archives.

The fact of the matter is that "DBA" is just a label. As is "Developer." Having a bad DBA controlling the database on a project may be worse than not having one at all. Having a good DBA who fits in with the team, enabling them to build the system to the specification of the clients with as little friction as possible, is definitely an asset.

I still believe that most projects don't need someone to fill the DBA role. They need a group of developers who know what they're doing, from the database up to the presentation layer. I don't think that good database design is that hard for someone who's a little clueful, but it's always a good idea to have everyone involved in the conversation throughout the development process. The more eyes on the problem, the less chance there is of something being missed.

I hope this makes sense, as I'm not sure I've presented my ideas exactly the way I intended. Keep in mind that my frame of reference for this post is based on green-field development of a project. I do realize that database lock-downs may be required in some situations. I'm just taking exception to a supposed rule.[...]



Regarding an Agile Backlash

Fri, 06 Oct 2006 04:25:00 GMT

It's coming, as we all knew it would. All in all, I think this is a bit of a shame. I'm not one to speak for others, but I'm sure the "originators" (for want of a better term) of agile processes like Scrum and XP are a little disconcerted to see their ideas turned into nothing more than fodder for technical books and recruitment checklists. (I could be horribly wrong about this. I know there are some people who think this was the entire point from the beginning, but I'm not that cynical yet.) The thing that turns so many people off is the steady evolution of the word "agile" into its sexier buzzword connotation of "Agile." These days, everything is fucking Agile this, Agile that. Let's not be distracted by the buzzword... Instead, try to remember the point of it all, and realize that these are just solutions to problems developers have had since the beginning of time.

One of the most stressful parts of being a developer has to be deployment, and especially re-deployment. We want our stuff to work, and we want the people who use our software to be happy with it. I used to absolutely dread uploading changes to an existing site because I was never confident I hadn't introduced new bugs into the code base. Having unit tests solved that for me, personally. I am now pretty sure my code hasn't regressed, because of my adoption of this one practice. I happen to enjoy writing my tests first, before I write the code to make them pass, but that's a personal choice and it doesn't detract from the main purpose: being confident in my code. At the very least I'm less stressed out, which is a good thing.

I've worked places before where the developers didn't talk to one another. Stuff would get duplicated needlessly, and developers would go for days down paths that others had previously trodden. Having daily standup meetings where each developer goes over what they worked on yesterday, what they'll be working on today, and what's impeding their progress goes miles towards solving this. I'm not saying that this is the only solution, but it is a good, simple one.

Why does it suck so much when Uber-Developer A goes on holidays for a few weeks? Is it because they're the only one working on a project (or a sizable chunk thereof), and if anything goes wrong while they're gone everyone else is screwed? Why do some developers get so damned smart so fast, while others wallow in mediocrity alone in their offices for months? Is it because Mediocre-Developer B is useless? Try spreading the love around by pair-programming for a while. You don't have to be religious about it, but I've found that spending at least some time with another developer can go a long way towards alleviating the above problems.

How does our code get into a state whereby it becomes extremely hard to add new features? Why do we need 5 lines of comments to explain what a function does? It sure would be nice if we could just look at it for a few seconds and let the understanding sink in. By ruthlessly refactoring our code before it reaches this state, we can fight the entropy. It also helps if our unit test suite can be run afterwards to make sure we didn't change the meaning of the code.

The bottom line, and the point I'm trying to make, is that the little bits and pieces that make up "Agile" processes are there to solve very real problems that we as developers have. In the end, it doesn't really matter how these problems are solved or what we call our processes. The purpose is to make our lives as developers easier. The whole point of being agile is to be able to react to change[...]
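The confidence that unit tests buy is easy to illustrate. Here's a minimal NUnit-style sketch (the PriceCalculator class and all names in it are hypothetical, invented just for illustration):

```csharp
using NUnit.Framework;

// Hypothetical class under test: applies a percentage discount to a price.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal percent)
    {
        return price - (price * percent / 100m);
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void TenPercentOffOneHundred_IsNinety()
    {
        PriceCalculator calc = new PriceCalculator();
        Assert.AreEqual(90m, calc.ApplyDiscount(100m, 10m));
    }
}
```

Run a suite like this before every deploy and "did I just break the site?" turns into a question the machine answers in seconds.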



.NET 2.0 (Almost) a Year Later

Sun, 24 Sep 2006 18:35:00 GMT

Well, according to Wikipedia, the .NET 2.0 RTM was released in November of last year (and we all know Wikipedia is always right... Except for last night, when I went to the Jaguar page and the only text for the entire article was F**K YOU!).

Anyhow... My point is that the Framework saw some pretty big changes in the jump from 1.1 to 2.0, and I thought it would be fun to ask everybody that might be reading this to go through a little exercise of self-reflection with me. Nothing "new-agey," I promise. I just want to know which of the new features that showed up in 2.0 are actually being used vs. the complete and utter duds (like theming in ASP.NET).

My "being used" list:

  • Master Pages, which had to be manually hacked in with ASP.NET 1.1.
  • Generics, of course. This was a fantastic addition to the framework, and not just because they let you have strongly typed lists out of the box. Take a look at Jean-Paul Boodhoo's post on validation in the domain layer for some other practical uses of generics. (As an aside, I'll be linking to JP a lot on this blog. He's a bald, boyish ninja of practical articles).
  • The System.Collections.Generic.List<T> class. I'm not talking about the generic stuff here, but rather the new methods like FindAll, FindLast, RemoveAll, TrueForAll, etc. These are super-wicked and often overlooked methods that will one day (when C# 3.0 comes out) lead to super-concise Ruby-like code. In the meantime, they allow us to write code without foreach all over the place.
  • Anonymous delegates are a feature that, when combined with the generic List class, I find myself using more and more. We can currently (in 2.0) write code like this:
	myList.FindAll(delegate(User user) {
	    return user.Name == "Jorge";
	});

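To make the last two bullets concrete, here's a self-contained sketch of FindAll, TrueForAll, and RemoveAll with anonymous delegates (the Product class and its fields are made up purely for illustration):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical domain class, just for the example.
public class Product
{
    public string Name;
    public decimal Price;

    public Product(string name, decimal price)
    {
        Name = name;
        Price = price;
    }
}

public class ListMethodsExample
{
    public static void Main()
    {
        List<Product> products = new List<Product>();
        products.Add(new Product("Widget", 9.99m));
        products.Add(new Product("Gadget", 24.99m));
        products.Add(new Product("Gizmo", 4.99m));

        // FindAll returns a new list containing every matching item.
        List<Product> cheap = products.FindAll(delegate(Product p) {
            return p.Price < 10m;
        });
        Console.WriteLine(cheap.Count); // 2 (Widget and Gizmo)

        // TrueForAll checks a predicate against every element.
        bool allPriced = products.TrueForAll(delegate(Product p) {
            return p.Price > 0m;
        });
        Console.WriteLine(allPriced); // True

        // RemoveAll deletes matching items in place and returns the count removed.
        products.RemoveAll(delegate(Product p) {
            return p.Name == "Gadget";
        });
        Console.WriteLine(products.Count); // 2
    }
}
```

Not a foreach in sight, and each operation says what it does rather than how.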
I've probably missed a few things in this list. Now here's my "un-used" feature list:

  • Anything new in ASP.NET other than master pages. This includes themes, new controls, the new project-less model, and probably a few other things. As Sam Smoot put it, ASP.NET is a Rube Goldberg machine. I can't help but think that in the never-ending quest to make programming so simple even a monkey can do it, we've lost sight of the forest for the trees.
  • Iterators are something I've never personally used.
  • Ditto partial classes. I've inadvertently used them in ASP.NET, because that's the way code-behind now works (or is it code-beside?). They're best suited to code generation, but I've never consciously used the feature.
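For anyone else who hasn't touched iterators, here's the kind of thing they enable in C# 2.0 (a minimal sketch; the names are invented for illustration):

```csharp
using System;
using System.Collections.Generic;

public class Fibonacci
{
    // An iterator: the compiler turns this method into an enumerator
    // state machine, so values are produced lazily, one per yield return.
    public static IEnumerable<int> UpTo(int limit)
    {
        int a = 0, b = 1;
        while (a <= limit)
        {
            yield return a;
            int next = a + b;
            a = b;
            b = next;
        }
    }

    public static void Main()
    {
        foreach (int n in UpTo(20))
        {
            Console.Write(n + " "); // 0 1 1 2 3 5 8 13
        }
    }
}
```

No temporary list is ever built; each value is computed only when the foreach asks for it.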

Once again, I'm sure this list is incomplete, so I'd like to hear what others think. Speak!
