Peter Provost's Geek Noise

Updated: 2013-11-07T04:55:55+00:00


Playing around with JavaScript modules


Recently I’ve been playing around a lot with JavaScript modules. The particular use case I’ve been thinking about is the creation of a large, complex JavaScript library in a modular and sensible way. JavaScript doesn’t really do this very well. It does it so poorly, in fact, that a sizeable number of projects are done entirely in a single file. I did find a number that used file concatenation to assemble the output script, but this seems like a stone-age technique to me.

This led me to look at the two competing JavaScript module techniques: Asynchronous Module Definition (AMD) and CommonJS (CJS). AMD is the technique used in RequireJS, and CommonJS is the technique used by Node.js. The RequireJS project has a script called r.js which will “compile” a set of AMD modules into a single file. There are other projects like Browserify which do the same thing for a collection of CommonJS modules. Basically, all of these figure out the ordering from the dependencies, concatenate the files, and inject a minimalistic bootstrapper to provide the require/module/exports functions. Unfortunately, this means that they all have the downside of leaving all the ‘cruft’ of the module specification in the resulting file.

To illustrate what I mean by ‘cruft’, I will use one of the examples from the Browserify project. This project has three JavaScript files that use the CommonJS module syntax and depend on each other in a chain.
bar:

```javascript
module.exports = function(n) { return n * 3; }
```

foo:

```javascript
var bar = require('./bar');
module.exports = function(n) { return n * bar(n); }
```

main:

```javascript
var foo = require('./foo');
console.log(foo(5));
```

When I run this through browserify, it produces this output:

out:

```javascript
;(function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){
var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);
if(i)return i(o,!0);throw new Error("Cannot find module '"+o+"'")
}var f=n[o]={exports:{}};t[o][0].call(f.exports,function(e){
var n=t[o][1][e];return s(n?n:e)},f,f.exports,e,t,n,r)}return n[o].exports
}var i=typeof require=="function"&&require;for(var o=0;o
```

What Really Matters in Developer Testing


Last month I wrote an article for Visual Studio Magazine called “What Really Matters in Developer Testing” that I wanted to share with my readers here. Note: They changed the title and made a few other tweaks here and there, so this is my original manuscript as opposed to the edited and published version. Enjoy! If you’d prefer, you can read the published article here: What Really Matters In Developer Testing

It isn’t about the tests, it is about the feedback
by Peter Provost – Principal Program Manager Lead – Microsoft Visual Studio

Introduction

Recently someone asked me to provide guidance to someone wanting to “convince development teams to adopt TDD”. I wrote a rather long reply that was the seed leading to this article. My answer was simple: you cannot convince people to do Test-Driven Development (TDD). In fact, you should not even try, because it focuses on the wrong thing. TDD is a technique, not an objective.

What really matters to developers is that their code does what they expect it to do. They want to be able to quickly implement their code and know that it does what they want. By focusing on this, we can more easily see how developers can use testing techniques to achieve their goals.

All developers do some kind of testing when they write code. Even those who claim not to write tests actually do. Many will simply create little command line programs to validate expected outcomes. Others will create special modes for their app that allow them to confirm different behaviors. Nevertheless, all developers do some kind of testing while working on their code.

What Test-Driven Development Is (and Isn’t)

Unlike the ad-hoc methods described above, TDD defines a regimented approach to writing code. The essence of it is to define and write a small test for what you want before you write the code that implements it. Then you make the test pass, refactor as appropriate, and repeat.
One can view TDD as a kind of scientific method for code, where you create a hypothesis (what you expect), define your experiment (the tests) and then run the experiment on your subject (the code).

TDD proponents assign additional benefits to this technique. One of the most important is that it strongly encourages a highly cohesive, loosely coupled design. Since the test defines the interface and how dependencies are provided, the resulting code typically ends up easy to isolate and with a single purpose. Object-oriented design defines high cohesion and loose coupling to be essential properties of well-designed components.

In addition, TDD provides a scaffold for when the developer is unsure how to implement something. It allows the developer to have a conversation with the code, asking it questions, getting answers, and then adjusting. This makes it a great exploratory tool for understanding something new.

Test-driven development is not a panacea, however. While the tests produced can serve to protect from regressions, they are not sufficient on their own to assess and ensure quality. Integration tests that combine several units together are essential to establish a complete picture of quality. End-to-end and exploratory testing will still be required to evaluate the fitness of the entire system.

In short, TDD is another tool that the developer can use while writing code. It has benefits, but it also has costs. It can help you define components with very little upfront definition. However, it will add to the time required to create the code. It has a strong, positive impact on the design of your code, but it requires practice and experience to learn, and can be frustrating for the novice.

Short-cycle Mode

When we think about test-driven development as a tool, we recognize that like all tools, it has times when it is effective and times when it is not. As my father used to tell me, “You can put in a screw with a hammer, but it probably isn’t the best choice.” [...]
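The define-test-first loop can be made concrete with a tiny sketch. This is my own illustrative example — a hypothetical `add` function with a hand-rolled assertion, no test framework assumed:

```javascript
// A tiny illustration of the TDD loop: test first, then just enough code.
// `add` and `assertEqual` are hypothetical; no test framework is assumed.

function assertEqual(actual, expected, message) {
  if (actual !== expected) throw new Error(message + ": got " + actual);
}

// Step 1 (red): write a small test stating what we expect.
function testAddSumsTwoNumbers() {
  assertEqual(add(2, 3), 5, "add(2, 3) should be 5");
}

// Step 2 (green): write just enough code to make the test pass.
function add(a, b) {
  return a + b;
}

// Step 3: run the experiment on the subject, then refactor and repeat.
testAddSumsTwoNumbers();
console.log("all tests passed");
```

The hypothesis is the assertion; the experiment is running the test; the subject is the code under test.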

HTML5 Frameworks


For a couple of weeks I’ve been playing around with some of the updated tools I use to make this blog. Back in April 2012, I pulled all of my content out of a server-side ASP.NET blog engine and moved to Jekyll and Octopress. Honestly, I can’t see myself going back.

But it has been more than a year since I created the current skin, and it was time for a change. Also, Jekyll has matured a lot and many of the things that Octopress brought to the table are no longer needed. So I decided to kill two birds with one stone and update the whole thing… generator and skin.

Of course I want a responsive layout, and for a long time my go-to framework has been Twitter Bootstrap. But TWBS has a few issues that have started to bug me, most notably the way it handles font-sizes. So I decided to begin an investigation of available frameworks and toolsets.

I’m not sure if you’ve tried searching for “html5 template”, but I will tell you that it results in a big list of “free, fresh web design templates”. Nothing particularly interesting or useful there. A few refining searches and clicks later landed me at the Front-end Frameworks repository. This is the list the search engines were failing to provide for me.

You can clone the repository if you want, but since it is really just the code for the compare website, I would recommend you star it instead (so you know when changes happen) and then visit the CSS Front-end Frameworks Comparison website itself.


As you can see, it gives you a nice list of the top frameworks, annotated with useful bits like mobile/tablet support, browser support, license, etc. Great stuff and certainly a link to keep handy.

Interviewed on Radio TFS



Update 2013-08-29: We got mentioned in This Week on Channel 9 today. Woot!

This morning I did an interview on Radio TFS, hosted by Martin Woodward and Greg Duncan. The topic was “What have you been working on since Visual Studio 2012”, and we had a great time talking about all the cool stuff we’ve done in the VS2012 updates and what we’re targeting for Visual Studio 2013.

You can download & listen to the interview here:
Episode 64: Peter Provost on Visual Studio 2013 Ultimate

Many thanks to Martin and Greg for having me. It was fun and I’m looking forward to doing it again so we can talk more about developer testing.

Visual Studio 2012 Fakes - Part 3 - Observing Stub Behavior


This year at both TechEd North America and TechEd Europe I gave a presentation called “Testing Untestable Code with Visual Studio Fakes”. So far VS Fakes has been very well received by customers, and most people seemed to understand my feelings about when (and when not) to use Shims (see Part 2 for more on this). But one thing that has consistently come up has been questions about Behavioral Verification. I talked about this briefly in Part 1 of this series, but let me rehash a few of the important points:

- Stubs are dummy implementations of interfaces or abstract classes that you use while unit testing to provide concrete, predictable instances to fulfill the dependencies of your system under test.
- Mocks are Stubs that provide the ability to verify calls on the Stub, typically including things like the number of calls made, the arguments passed in, etc.

With Visual Studio Fakes, introduced in Visual Studio 2012 Ultimate, we are providing the ability to generate fast running, easy to use Stubs, but they are not Mocks. They do not come with any kind of behavioral verification built in. But as I showed at TechEd Europe, there are hooks available in the framework that allow one to perform this kind of verification. This post will show you how they work and how to use them to create your own Mocks.

Why would you need to verify stub calls?

There is a long-running philosophical argument between the “mockists” and the “classicists” about whether mocking is good or bad. My personal take is that mocks can be very useful when unit testing certain kinds of code, but also that they can cause problems if overused, because rather than pinning down the external behavior of a method, they pin the implementation. But rather than dwell on that, let’s look at some of the cases where they are valuable.

Suppose you have a class whose responsibility is to coordinate calls to other classes. These might be classes like message brokers, delegators, loggers, etc.
The whole purpose of this class is to make predictable, coordinated calls on other objects. That is the external behavior we want to confirm. Consider a system like this:

System Under Test:

```csharp
// ILogSink will be implemented by the various targets to which
// log messages can be sent.
public interface ILogSink
{
    void LogMessage(string message, string categories, int priority);
}

// MessageLogger is a service class that will be used to log the
// messages sent by the application. Only the method signatures
// are shown, since in this case we really don't need to care
// about the implementation.
public class MessageLogger
{
    public void RegisterMessageSink(ILogSink messageSink);
    public void LogMessage(string message);
    public void LogMessage(string message, string categories);
    public void LogMessage(string message, string categories, int priority);
}
```

What we need to confirm is that the right calls are made into the registered sinks, and that all the parameters are passed in correctly. When we use Stubs to test a class like that, we will be providing fake versions of the ILogSink, so we should be able to have the fake version tell us how it was called.

Behavioral verification using closures

I showed in my previous posts how you can combine lambda expressions and closures to pass data out of a stub’s method delegate. For this test, I will do this again to verify that the sink is being called.

Testing that the sink is called:

```csharp
[TestMethod]
public void VerifyOneSinkIsCalledCorrectly()
{
    // Arrange
    var sut = new MessageLogger();
    var wasCalled = false;
    var sink = new StubILogSink
    {
        LogMessageStringStringInt32 = (msg, cat, pri) => wasCalled = true
    };
    sut.RegisterMessageSink(sink);

    // Act
    sut.LogMessage("Hello there!");

    // Assert
    Assert.IsTrue(wasCalled);
```
[...]
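To show the closure idea itself, separate from the VS Fakes generated types, here is a language-neutral sketch in JavaScript. The names mirror the C# example, but this is my own illustration, not the Fakes API:

```javascript
// A language-neutral sketch of the closure technique: the stub's method
// writes into variables captured from the test scope. Names mirror the C#
// example above; this is not the Visual Studio Fakes API.

function MessageLogger() {
  var sinks = [];
  this.registerMessageSink = function (sink) { sinks.push(sink); };
  this.logMessage = function (message) {
    sinks.forEach(function (s) { s.logMessage(message, "general", 0); });
  };
}

// Arrange: the stub records how it was called via closed-over variables.
var wasCalled = false;
var capturedArgs = null;
var stubSink = {
  logMessage: function (msg, cat, pri) {
    wasCalled = true;
    capturedArgs = { msg: msg, cat: cat, pri: pri };
  }
};

var sut = new MessageLogger();
sut.registerMessageSink(stubSink);

// Act
sut.logMessage("Hello there!");

// Assert: the coordinating class made the call we expected.
console.log(wasCalled);          // true
console.log(capturedArgs.msg);   // "Hello there!"
```

The same pattern scales up to counting calls or recording every argument list, which is all a hand-rolled Mock really is.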

My path to Inbox Zero


I know this post is probably going to make a lot of people say, “Holy crap, man. If you need that much of a system, you get too much email.” All I can say is “Guilty as charged, but I know I’m not the only one with this problem.” So if you find this useful, great. If not, move on.

I’ve long been a fan of the whole Inbox Zero idea. While the concept of using other kinds of task lists (e.g. Outlook Tasks, Trello or Personal Kanban) is nice, in my experience the Inbox is a much more natural place to keep track of the things I need to do. Like it or not, a large number of us in the tech sector use email as our primary personal project management system. I don’t think this is just a “Microsoft PM” thing, but certainly the amount of email that happens here, in this role, makes it more the case.

Scott Hanselman has written a few times about his rule system on his blog. In his most recent post on this topic, he classifies emails into four big buckets:

- Most Important - I am on the To line
- Kind of Important - I am CC
- External - From someone outside the company
- Meeting invites

I tried his system for a while, but I found that I still ended up with too many emails in my Inbox that weren’t high priority for me to read, but that I might need to find later via search. This brings up a good point: My system depends on the idea that I will TRAF (Toss, Refer, Act, File) the things in my Inbox quickly. Any email which doesn’t meet certain criteria (more on that in a second) will be Filed by default, and if I need to I can find it later with search.

Naming rules

My system leverages the power of Inbox rules. I use Outlook, but most email clients (web or local) have something like them; you may need to adjust the specifics to your client. This system also ends up with a lot of single-purpose rules. I like this because it lets me see in the rules management UI what I have. I tried the great big “Mailing Lists” rule before and it was difficult to keep track of.
Naming the rules is an important part of keeping track of it all. My general rule naming pattern is “Action - Criteria”. Action is typically things like “Move” or “Don’t move”. Criteria is typically things like “to:(some mailing list)”. I tend to use the descriptive name of things in the rule name rather than the email address or alias name. This, again, helps me see at a glance my rules and the priority stack.

How does it work?

As I said, I do this in Outlook, so it is important to understand that Outlook will process the rules in order, from top to bottom, only stopping if a rule says explicitly to stop. The essence of this system is the final rule in the stack, which moves everything to an archive folder, and the rules before it, which pick out what they care about and stop the processing chain.

In Outlook, I do this with the “Stop Processing More Rules” action that has no conditions (i.e. it matches all arriving email). To make this rule, you can’t use the simple rules creation UI. Instead click Advanced Options. Leave everything in the conditions section blank and click Next. It will warn you about the rule being applied to all messages, and since this is what we want, click Yes. Then in the Actions section (2nd page) near the bottom, check “stop processing more rules”. It should look something like this:

Rule Priorities

Over time I’ve learned that my rules come in a prioritized set of groups. Each rule takes a look at the mail and makes a simple decision: keep it in the Inbox, move it somewhere, or pass and let the next rule give it a try. This means that it is worth putting a bit of thought into the priority of the rules so you get expected outcomes from the tool. My groups, in priority order and with an example or two, are listed [...]
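The top-to-bottom, stop-on-match behavior described above can be modeled in a few lines. This is a sketch of the processing model only, not Outlook's actual rule engine, and the rule names and addresses are made up:

```javascript
// A sketch of a top-to-bottom rule stack: each rule tests a message,
// optionally moves it, and can stop the chain. This models the behavior
// described in the post; it is not Outlook's engine. Names are made up.

var rules = [
  { name: "Keep - To me",
    matches: function (m) { return m.to.indexOf("me@example.com") >= 0; },
    folder: "Inbox", stop: true },
  { name: "Move - Build notifications",
    matches: function (m) { return m.from === "build@example.com"; },
    folder: "Builds", stop: true },
  // The final, condition-less rule: file everything else by default.
  { name: "Move - Everything else",
    matches: function () { return true; },
    folder: "Archive", stop: true }
];

function processMessage(message) {
  for (var i = 0; i < rules.length; i++) {
    if (rules[i].matches(message)) {
      message.folder = rules[i].folder;
      if (rules[i].stop) break;   // "stop processing more rules"
    }
  }
  return message.folder;
}

console.log(processMessage({ to: ["me@example.com"], from: "a@b.com" }));   // Inbox
console.log(processMessage({ to: ["list@example.com"], from: "c@d.com" })); // Archive
```

Because the catch-all sits last, anything the earlier rules decline to claim falls through to the archive, which is exactly the "Filed by default" behavior the system relies on.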

Introducing Visual Studio 2012 (Video)


Near the end of the development cycle for Visual Studio 2012, a group of folks in the VSALM team (led by my very creative manager Tracey Trewin) came up with this cool animated video introducing some of the great new features in Visual Studio 2012 Ultimate. I think it is pretty cool, and even pretty funny, so I wanted to share it with you all.

(Embedded video: Introducing Visual Studio 2012)

What do you think?

Adding Ninject to Web API


In my last post I focused on how to unit test a new Visual Studio 2012 RC ASP.NET Web API project. In general, it was pretty straightforward, but when I had Web API methods that needed to return an HttpResponseMessage, it got a little harder. If you recall, I decided to start with the Creating a Web API that Supports CRUD Operations tutorial and the provided solution that came with it. That project did not use any form of dependency inversion to resolve the controller’s need for a ProductRepository. My solution in that post was to use manual dependency injection and a default value. But in the real world I would probably reach for a dependency injection framework to avoid having to do all the resolution wiring throughout my code.

In this post I am going to convert the manual injection I used in the last post to one that uses the Ninject framework. Of course you can use any other framework you want, like Unity, Castle Windsor, StructureMap, etc., but the code that adapts between it and ASP.NET Web API will probably have to be different.

Getting Started

First let’s take a look at the code I had for the ProductsController at the end of the last post, focusing on the constructors and the IProductRepository field.

Manual Dependency Injection:

```csharp
namespace ProductStore.Controllers
{
    public class ProductsController : ApiController
    {
        readonly IProductRepository repository;

        public ProductsController()
        {
            this.repository = new ProductRepository();
        }

        public ProductsController( IProductRepository repository )
        {
            this.repository = repository;
        }

        // Everything else stays the same
    }
}
```

The default constructor provides the default value for the repository, while the second constructor lets us provide one.
This is fine when we only have a single controller, but in a real-world system we will likely have a number of different controllers, and having the logic for which repository to use spread among all those controllers is going to be a nightmare to maintain or change. By default, the ASP.NET Web API routing stuff will use the default constructor to create the controller. What I want to do is make that default constructor go away, and instead let Ninject be responsible for providing the required dependency.

A quick aside - Constructor injection or Property injection?

This came up in one of my talks last week at TechEd, so it probably warrants some discussion here. When Brad Wilson and I made the first version of the ObjectBuilder engine that hides inside Unity and the various P&P CAB frameworks, we got to have this argument with people all the time. While this argument looks like another one of those “philosophical arguments” that doesn’t have a right answer, I don’t think it really is. I think the distinction between constructor injection and property injection is important, and I think you can find yourself using both depending on the circumstances.

Here’s the gist of my argument: If the class would be in an invalid state without the dependency, then it is a hard dependency and should be resolved via constructor injection. It cannot be used without it. Putting the dependency on the constructor and not providing a default constructor makes it very clear to the developer who wants to consume this class. The developer is required to provide the dependency or the class cannot be created. If you find yourself doing a null check everywhere the dependency gets used, and especially if you throw an exception when it is null, then you likely have a hard dependency.

But if the class has a dependency that either isn’t required, or that will use a default object or a null object if it is not provided, then it is a soft dependency and should not be resolved via constructor i[...]
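The hard-versus-soft distinction can be sketched in a few lines of JavaScript (illustrative only; all class and member names here are hypothetical):

```javascript
// A sketch of hard vs. soft dependencies. ReportService has a hard
// dependency on a repository (constructor injection, fail fast) and a
// soft dependency on a logger (null-object default, property injection).
// All names here are hypothetical illustrations.

function ReportService(repository) {
  if (!repository) {
    // Hard dependency: the class is invalid without it, so fail fast.
    throw new Error("repository is required");
  }
  this.repository = repository;
  // Soft dependency: default to a null object; callers may replace it.
  this.logger = { log: function () { /* no-op */ } };
}

ReportService.prototype.count = function () {
  this.logger.log("counting products");
  return this.repository.getAll().length;
};

var fakeRepo = { getAll: function () { return ["a", "b", "c"]; } };
var svc = new ReportService(fakeRepo);

var messages = [];
svc.logger = { log: function (m) { messages.push(m); } }; // property injection

console.log(svc.count());   // 3
console.log(messages[0]);   // "counting products"
```

The constructor signature documents the hard dependency; the property with a working default keeps the soft one optional.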

Unit Testing ASP.NET Web API


A couple of days ago a colleague pinged me wanting to talk about unit testing an ASP.NET Web API project. In particular he was having a hard time testing the POST controller, but it got me thinking I needed to explore unit testing the new Web API stuff. Since it is always fun to add unit tests to someone else’s codebase, I decided to start by using the tutorial called Creating a Web API that Supports CRUD Operations and the provided solution that accompanies it.

What should we test?

In a Web API project, one of the things you need to ask yourself is, “What do we need to test?” Despite my passion for unit testing and TDD, you might be surprised when I answer “as little as possible.” You see, when I’m adding tests to legacy code, I believe strongly that you should only add tests to the things that need it. There is very little value-add in spending hours adding tests to things that might not need it. I tend to follow the WELC approach, focusing on adding tests to either areas of code that I am about to work on, or areas that I know need some test coverage.

The goal when adding tests for legacy code like this is to “pin” the behavior down, so you at least can make positive statements about what it does do right now. But I only really care about “pinning” those methods that have interesting code in them or code we are likely to want to change in the future. (Many thanks to my friend Arlo Belshee for promoting the phrase “pinning test” for this concept. I really like it.)

So I’m not going to bother putting any unit tests on things like BundleConfig, FilterConfig, or RouteConfig. These classes really just provide an in-code way of configuring the various conventions in ASP.NET MVC and Web API. I’m also not going to bother with any of the code in the Content or Views folders, nor will I unit test any of the JavaScript (but if this were not just a Web API, but a full web app with important JavaScript, I would certainly think more about that last one).
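The idea of a pinning test can be sketched quickly. Here `formatPrice` is a hypothetical legacy function of my own invention; the point is that the assertions record what the code does today, not what would be ideal:

```javascript
// A sketch of a "pinning test": before changing legacy code, write tests
// that record what it does *right now*. `formatPrice` is a hypothetical
// legacy function; the pinned values are whatever it currently returns.

function formatPrice(amount) {
  // Inherited behavior: truncates (not rounds) to two decimals.
  return "$" + (Math.floor(amount * 100) / 100).toFixed(2);
}

function assertEqual(actual, expected) {
  if (actual !== expected) {
    throw new Error("expected " + expected + " but got " + actual);
  }
}

// Pinning tests assert current behavior, not necessarily correct behavior.
// If a later refactoring breaks one, we changed something observable.
assertEqual(formatPrice(19.999), "$19.99");  // truncation pinned, not rounding
assertEqual(formatPrice(5), "$5.00");

console.log("behavior pinned");
```

With behavior pinned like this, you can refactor the internals and make positive statements about whether anything observable changed.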
Since this is a Web API project, its main purpose is to provide an easy to use REST JSON API that can be used from apps or web pages. All of the code that matters is in the Controllers folder, and in particular the ProductsController class, which is the main API for the project. This is the class we will unit test today.

Unit Tests, not Integration Tests

Notice that I said unit test in the previous sentence. For me a unit test is the smallest bit of code that I can test in isolation from other bits of code. In .NET code, this tends to be classes and methods. Defining unit test in this way makes it easy to find what to test, but sometimes the how part can be tough because of the “in isolation from other bits” part. When we create tests that bring up large parts of our system, or of the environment, we are really creating integration tests. Don’t get me wrong, I think integration tests are useful and can be important, but I do not want to get into the habit of depending entirely on integration tests when writing code.

Creating a testable, cohesive, decoupled design is important to me. It is the only way to achieve the design goal of simplicity (maximizing the amount of work not done). But in this case we will be adding tests to an existing system. To make the point, I will try to avoid changing the system if I can. Because of this we may find ourselves occasionally creating integration tests because we have no choice. But we can (and should) use that feedback to think about the design of what we have and whether it needs some refactoring.

Analyzing the ProductsController class

The ProductsController class isn’t too complex, so it should be pretty easy to test. Let’s take a look at the code we got in the download: ProductsContro[...]

Anatomy of a File-based Unit Test Plugin



I’ve been working on a series of posts about authoring a new unit test plugin for Visual Studio 2012, but today my friend Matthew Manela, author of the Chutzpah test plugin, sent me a post he did a few days ago that discusses the main interfaces he had to use to make his plugin.

The Chutzpah plugin runs JavaScript unit tests that are written in either the QUnit or Jasmine test frameworks. Since JavaScript files don’t get compiled into a DLL or EXE, he had to create custom implementations of what we call a test container.

A test container represents a thing that contains tests. For .NET and C++ unit testing, this is a DLL or EXE, and Visual Studio 2012 comes with a built-in test container subsystem for them. But when you need to support tests contained in other files, e.g. JavaScript files, then you need to do a bit more.

Matthew’s post does a great job of going through each of these interfaces, discussing what each is for and what he did for it in his plugin.

The Chutzpah test adapter revolves around four interfaces:

1. ITestContainer – Represents a file that contains tests
2. ITestContainerDiscoverer – Finds all files that contain tests
3. ITestDiscoverer – Finds all tests within a test container
4. ITestExecutor – Runs the tests found inside the test container

You can read the entire post here:

And if you want to see the source for his plugin, you can read it all here (look for the VS11.Plugin folder on the left side):

Nice post Matthew! Thanks for providing it to the community.

My VS2012 Wallpapers


I’m getting ready for TechEd 2012 this month and realized I wanted to have a nice desktop wallpaper for my laptop. Well, I’d not seen any popping up on twitter or blogs yet, so I grabbed my trusty GIMP image editor and some base images of the new VS 2012 logo and got to work.

Here are a few of my favorites, but I’ve not decided which I will use on my laptop.

(Embedded wallpaper previews)

You can see/download them all by clicking on one of the links above, or you can browse them on my SkyDrive. They are all 1680x1050 PNG files and have been compressed with PNGOUT to be as small as possible. I may end up making more, so keep an eye on that folder if you’re interested.

If you find them useful, please let me know.

My Speaking Schedule - 2012


I tweeted a bit about the craziness coming up for me in June, but I realized today that I’ve not posted the whole schedule. So if you’re interested and wondering what I’ll be talking about, here’s the complete list (in chronological order). The first two are now behind me, but the twin TechEds are coming up fast. Also, I’ve got a couple that are currently “maybes” so once they are confirmed I’ll update this post. If you want to contact me about speaking at an event, please use my contact page.

Update 2012-06-06 - I had to pull out of Agile.NET Columbus due to a conference date change that wasn’t compatible with my schedule. Bummer.

Microsoft TechReady 14
January 30 - February 3, 2012 – Seattle, WA
Introducing the New VS11 Unit Testing Experience - Ramping up the field on where we’re taking the Unit Test experience in VS11 and beyond.

Microsoft MVP Summit
February 28 - March 2, 2012 – Bellevue, WA
Agile Development in VS11 - Unit Testing, Fakes and Code Clone - Bringing the MVPs up to speed on the new stuff in VS11 Beta.

TechEd North America 2012
June 11-14, 2012 – Orlando, FL
DEV214 - Introducing the New Visual Studio 11 Unit Testing Experience - An updated version of the talk I’d previously given for internal and MVP audiences, based on current bits and plans.
AAP401 - Real World Developer Testing with Visual Studio 11 - David Starr and I will go through a bunch of real-world unit test scenarios, sharing our tips and tricks for getting them under test while ensuring they are still good unit tests.
DEV411 - Testing Un-testable Code with Fakes in Visual Studio 11 - A deep dive into VS 2012 Fakes, focusing on things that are either hard to unit test or that you might think aren’t unit testable at all.
DEV318 - Working on an Agile Team with Visual Studio 11 and Team Foundation Server 11 - Gregg Boer and I will take you through the lifecycle of an agile team using all the great new features in VS 2012 and TFS 2012.
TechEd Europe 2012
June 26-29, 2012 – Amsterdam, RAI
Same as TechEd North America 2012

Denver Visual Studio User Group
July 23, 2012 – Denver, CO
Unit Testing and TDD with Visual Studio 2012 - In Visual Studio 11, a lot of changes have happened to make unit testing more flexible and powerful. For agile and non-agile teams who are writing unit tests, these changes will let you work faster and stay focused on your code. In this session Peter Provost, long time TDD advocate, Senior Program Manager Lead in Visual Studio ALM Tools, and designer of this new experience, will take us through the experience, focusing on what the biggest differences are and why they are important to developers.

Agile 2012
August 13-17, 2012 – Dallas, TX
Talk Title TBD [...]

My Take on Unit Testing Private Methods


The simple answer: Just Say No™

Every now and then I get an email, see a forum post, or get a query at a conference about this topic. It typically goes like this: “When you’re doing TDD, how do you test your private methods?” The answer, of course, is simple: You don’t. And you shouldn’t. This has been written up in numerous places, by many people smarter than I, but apparently there are still people who don’t understand the point.

Private Methods and TDD

I do create private methods while doing TDD, as a result of aggressive refactoring. But from the perspective of the tests I’m writing, these private methods are entirely an implementation detail. They are, and should remain, irrelevant to the tests. As many people have pointed out over the years, putting tests on internals of any kind inhibits free refactoring. They are internals. They are private. Keep them that way.

What I like to do every now and then, when I’m in the green phase of the TDD cycle, is stop and look at the private methods I’ve extracted in a class. If I see a bunch of them, it makes me stop and ask, “What is this code trying to tell me?” More often than not, it is telling me there is another class needed. I look at the method parameters. I look at the fields they reference. I look for a pattern. Often, I will find that a whole new class is lurking in there. So I refactor it out. And my old tests should STILL pass. Now it is time to go back and add some new tests for this new class. Or if you feel like being really rigorous, delete that class and write it from scratch, using TDD. (But I admit, I don’t really ever do that except for practice when doing longer kata.)

At the risk of repeating myself, I’ll say it one more time: Just Say No™

What about the public API?

I most often hear this from people who’ve grown up creating frameworks and APIs.
The essence of this argument is rooted in the idea that making something have private accessibility (a language construct) somehow reduces the supported public API of your product. I would argue that this is a red herring, or the fallacy of irrelevant conclusion. The issues of “supported public API” and “public method visibility” are unfortunately related only by the use of the word public. Making a class or method public does not make it a part of your public API. You can mark it as “for internal use only” with various attributes or code constructs (depending on your language). You can put it into an “internals only” package or namespace. You can simply document it in your docs as being “internal use only”. All of these are at least as good, if not better, than making it private or internal. Why? Because it frees the developers to use good object-oriented design, to refactor aggressively and to TDD/unit test effectively. It makes your code more flexible and the developers are more free to do what they are asked to do.

But customers will use it anyway!

Consider the answer to these questions:

1. You have an API that has public accessibility, and it is marked Internal use only in any of the ways I mentioned above. A customer calls you up and says, “When I use this API it doesn’t work as expected.” What is your response?
2. You have an API that has public accessibility, and it is marked Internal use only in any of the ways I mentioned above. You changed the API between product versions. The customer complains that the API changed. What is your response?

In each case, I would argue that the answer is the same. You simply say, “That API is not for public consumption. It[...]

Updated NUnit Plugin for VS11 Released


Good news!! Last night I got an email from Charlie Poole, the maintainer of NUnit, pointing me to a blog post he’d just made:

Today, I’m releasing version 0.92 of the NUnit Adapter for Visual Studio and
creating a new project under the NUnit umbrella to hold its source code, track
bugs, etc.

In case you didn’t know, Visual Studio 11 has a unit test window, which
supports running tests using any test framework for which an adapter has been
installed. The NUnit adapter has been available on Code Gallery since the
initial VS Developer preview. This release fixes some problems with running
tests that target .NET 2.0/3.5 and with debugging through tests in the unit
test window.

This is great news because if you’ve tried to use NUnit with VS11 Beta before now, you probably noticed that you couldn’t actually run or debug a selected test or tests. When you tried, you ended up either getting a failed run or having all tests run. Clearly not good.

The fix was pretty simple, and I want to thank our team for helping find the issue and also of course thank Charlie for getting it out to customers to get them unblocked while using NUnit.

He’s also pushed some new content about the plugin.

So if you are experimenting with VS11 and are an NUnit user, be sure to get this update.

And of course, keep the feedback on VS11 Unit Testing coming!

Games I'm Enjoying Lately


I first started playing games on computers the day after we got our first computer way back in 1978 or 1979 on our venerable Ohio Scientific Challenger 4P. Man, those were good times. Tanks and mazes… that was about all we had. In the 80s, when we lived in Egypt and had an Apple II+, I would type the games in, line-by-line, from the backs of magazines. I’m still convinced that I really learned to debug computer programs back in those days. And I’ve been a gamer ever since.

Lately, I’ve been playing a bunch of different games, looking for something that I can really get into for a while. Although I’ve had fun playing almost all of them, I’ve still not found any with real staying power for me. But I have had a chance to play a number of different games on a few different platforms. Here are some reviews of what I’ve been playing lately, and a look at what I’m waiting for.

What I’ve Been Playing

World of Warcraft - PC - Metacritic Score: 90-93

Yes, I still play Wow. I’ve been playing since the tail end of so-called “Vanilla Wow”, but I have severely cut back on the amount of time I play it. I still raid two nights a week with my long-time guildmates and friends, but on non-raiding nights I just can’t seem to bring myself to log in and play.

The thing I always liked about Wow was the team and social nature of the game. Working together with 5, 10 or 25 people (or even 40 in the olden days) to figure out how to defeat one of the dungeons is a blast. Problem solving, leadership, joking, laughing, and then the joy of victory. But while the non-raiding parts of the game used to keep me engaged, it has lost its luster, which is why I’ve been dabbling in so many others.

Dead Space 2 - PC - Metacritic Score: 87

This game can be pretty scary at times, as you expect from a survival horror FPS. Tons of scary shit coming at you all the time. The upgrade and leveling system was a bit weird, but I muddled my way through and seemed to maintain decent killing power.
The magnetic grip thing is a blast to kill with. You can grab steel rods, saw blades, etc. (kinda like Half-Life) and rip the monsters in half. Great fun, and recommended if you like the scary stuff. Like so many FPS/RPG games, though, I didn’t actually finish it before moving on to something else.

Skyrim - PC - Metacritic Score: 94

The whole world was raving about Skyrim a few months ago, so I had to give it a try. I got it for a steal during one of Steam’s special sales, and I have to admit it is a fun game. When the modding community really got rolling, that was fun too. But I have a problem in general with single-player RPGs, and it is that I just start to get bored. I don’t really “role play”, so for me the fun is in the puzzles and challenges. In every single-player RPG I’ve played, I find that once I figure out the basic fighting mechanics, it becomes very rote, very quickly.

I will say, this game has one of the most beautiful game worlds I’ve played in a long time. If you install the HD texture pack (free downloadable content) it is even better, but make sure you have a good graphics card. I still may return to Skyrim for some more dragon killing and whatnot, but it just hasn’t been able to draw me back in more than a month.

Tom Clancy’s Splinter Cell: Conviction - PC - Metacritic Score: 83

Sneak and spy games are a lot of fun, especially when you’re first getting started. This one has very nice graphics, and a pretty reasonable set of controls and mechanics. Like many games that are also console games, there aren’t as man[...]

Kata - The only way to learn TDD


Lately I’ve been asked by more and more people inside Microsoft to help them really learn to do TDD. Sure, they’ve read the books, and probably some blog posts and articles, but they are struggling to figure it out. With so many people talking about how good it is, they are frustrated that when they try to apply it at work, in their day job, it just doesn’t seem to work out.

My guidance to them is simple: Do a TDD kata every morning for two weeks. Limit yourself to 30 minutes each morning. Then pick another kata and do it again.

What is a TDD Kata?

Kata is a Japanese word meaning “form”, and in the martial arts it describes a choreographed pattern of movements used to train yourself to the level of muscle memory. I study Kenpo myself and have a number of kata that I practice regularly, both in training and at home.

The idea of kata for software development was originally coined by Dave Thomas, co-author of one of my favorite developer handbooks, The Pragmatic Programmer: From Journeyman to Master. The idea is the same as in martial arts: practice and repetition to hone the skills and lock in the patterns.

One of the reasons TDD is hard for newcomers is that it is very explicit about the baby-step nature of the changes you make to the code. You simply are not allowed to add anything more than the current tests require. For experienced developers, this is very hard to do, especially when working on real code. By using the small but precise nature of the kata to practice these skills, you can get over this roadblock in a safe way, where you understand that the purpose of the baby steps is to learn the movements.

Every day for two weeks? Really?

Actually, I would encourage you to do one every day for the rest of your programming career, but that might be a bit extreme for some people. In my martial arts training, we warm up with forms and kata, so why not have a nice regular warm-up exercise to do while your morning coffee takes hold?
My recommendation to people is to do a 30-minute kata every morning for two weeks. Then pick another one and do it every day for two weeks. Continue on. I don’t recommend people start using it at work, on their actual work projects, until they feel like they are ready. I can’t tell you when that will be, but you will know. It may be after one week or six, but until you feel like you are very comfortable with the rhythm of the TDD cycle, you should wait. To do otherwise is like entering a martial arts tournament with no skills. Oh, and did I mention that kata are actually fun?

Using a kata to learn

I still do kata all the time, although I don’t do them every day. Sometimes I do one that I have already memorized, and sometimes I go hunt down a new one. Sometimes I make one up and do that over and over for a few days. Most commonly, when I do a kata these days, it is to learn a new technology. As an example, a while back I wanted to learn more about SubSpec and Shouldly, which are a nice way to do BDD-style development on top of xUnit.net. I could have just played with them for five minutes, but instead I did the String Calculator kata every day for a week. By doing that I actually learned a lot more about SubSpec than I would have otherwise. It really helped me understand the difference between their Assert and Observe methods, for example.

I’ve also used kata to learn new underlying technologies. When WPF and Silverlight were first getting attention, and the Model-View-ViewModel (MVVM) approach appeared, I developed a quick kata where I create a ViewModel from scratch for a simple but imaginary system. I [...]
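If you have never tried one, here is roughly what the first couple of baby steps of the String Calculator kata look like in C# with NUnit. The class and test names are my own illustration; the kata prescribes behavior, not names:

```csharp
using System;
using System.Linq;
using NUnit.Framework;

// Production code, grown one failing test at a time.
public class StringCalculator
{
    public int Add(string numbers)
    {
        // Step 1: an empty string sums to zero.
        if (String.IsNullOrEmpty(numbers))
            return 0;

        // Step 2: comma-separated numbers are summed.
        return numbers.Split(',').Sum(Int32.Parse);
    }
}

[TestFixture]
public class StringCalculatorTests
{
    [Test]
    public void EmptyStringReturnsZero()
    {
        Assert.AreEqual(0, new StringCalculator().Add(""));
    }

    [Test]
    public void TwoNumbersAreSummed()
    {
        Assert.AreEqual(3, new StringCalculator().Add("1,2"));
    }
}
```

The discipline is the point: each test existed, and failed, before the code that makes it pass was written.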

Visual Studio Fakes Part 2 - Shims


Let me start by saying Shims are evil. But they are evil by design. They let you do things you otherwise couldn’t do, which is very powerful. They let you do things you might not want to do, and might know you shouldn’t do, but because of the real world of software, you have to do.

The Catch-22 of Refactoring to Enable Unit Testing

In the first part of my series on VS11 Fakes, I reviewed Stubs, which are a simple way of creating concrete implementations of interfaces and abstract classes for use in unit tests. But sometimes it happens that you have to test a method where the dependencies can’t simply be injected via an interface. It might be that your code depends on an external system like SharePoint, or simply that the code news up or uses a concrete object inside the method, where you can’t easily replace it.

The unit testing agilista have always said, “Refactor your code to make it more testable,” but therein lies the rub. I will again refer to the esteemed Martin Fowler for a quote:

Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior. - Martin Fowler

… How do you know if you are changing its external behavior? “Simple!” says the agilista, “You know you didn’t change it as long as your unit tests still pass.” But wait… we don’t have unit tests yet for this code (that’s what we’re trying to fix), so I’m stuck… Catch-22. I have to quote Joseph Heller’s masterwork just for fun:

There was only one catch and that was Catch-22, which specified that a concern for one's safety in the face of dangers that were real and immediate was the process of a rational mind. Orr was crazy and could be grounded. All he had to do was ask; and as soon as he did, he would no longer be crazy and would have to fly more missions. Orr would be crazy to fly more missions and sane if he didn't, but if he were sane he had to fly them.
If he flew them he was crazy and didn't have to; but if he didn't want to he was sane and had to. Yossarian was moved very deeply by the absolute simplicity of this clause of Catch-22 and let out a respectful whistle. - Joseph Heller, Catch-22, Chapter 5

He’s crazy if he wants to go fight, but if he says he really didn’t want to fight then he isn’t crazy and so needs to go fight. We have the same problem. We can’t test because the code needs refactoring, but we can’t refactor because we don’t have tests.

Shim Your Way Out of the Paradox

Example 1 - To see what this really looks like, let’s look at some code (continued from the example in Part 1).

"Untestable" System Under Test Code

```csharp
namespace ShimsDemo.SystemUnderTest
{
    public class CustomerViewModel : ViewModelBase
    {
        private Customer customer;
        private readonly ICustomerRepository repository;

        public CustomerViewModel(Customer customer, ICustomerRepository repository)
        {
            this.customer = customer;
            this.repository = repository;
        }

        public string Name
        {
            get { return customer.Name; }
            set
            {
                customer.Name = value;
                RaisePropertyChanged("Name");
            }
        }

        public void Save()
        {
            customer.LastUpdated = DateTime.Now; // HOW DO WE TEST THIS?
            customer = repository.SaveOrUpdate(customer);
        }
    }
}
```

There is one minor change from[...]
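To sketch where this example is heading: with Fakes, a test can detour the static DateTime.Now getter inside a ShimsContext, so the timestamp written by Save() becomes deterministic. ShimsContext.Create() and ShimDateTime.NowGet are the Fakes-generated APIs; the StubICustomerRepository delegate name is my assumption based on the Fakes naming convention, not code from the post:

```csharp
[TestMethod]
public void Save_StampsLastUpdated_WithShimmedClock()
{
    using (ShimsContext.Create())
    {
        // Detour DateTime.Now for everything inside this context.
        var frozen = new DateTime(2012, 4, 1, 12, 0, 0);
        System.Fakes.ShimDateTime.NowGet = () => frozen;

        var customer = new Customer { Name = "Sherlock" };
        var repository = new StubICustomerRepository
        {
            // Assumed stub delegate name; just echo the customer back.
            SaveOrUpdateCustomer = c => c
        };

        var viewModel = new CustomerViewModel(customer, repository);
        viewModel.Save();

        Assert.AreEqual(frozen, customer.LastUpdated);
    }
}
```

The key point is that no refactoring of CustomerViewModel was required to get it under test, which is exactly the way out of the Catch-22 above.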

How to avoid creating real tasks when unit testing async


UPDATE: Stephen Toub pointed out that in .NET 4.5 you don’t need my CreatePseudoTask() helper method. See the bottom of this post for more information.

If you’ve been coding in VS11 Beta with .NET 4.5, you may have started experimenting with using async and await in your programs. You also probably noticed that a lot more of the APIs you consume are starting to expose asynchronous methods using Task and Task<TResult>. This technology lets you specify that operations are long running and should not be expected to return quickly. You basically get to fire off async processes without having to manage the threads yourself. Behind the scenes, the necessary state machine code is created and, as they say, “it just works”. I would really recommend reading all the great posts by Stephen Toub and others over on the PFX Team blog. And of course the MSDN docs on the Task Parallel Library should be reviewed too.

But did you know that in VS11 Beta you can now create async unit tests? Both MS-Test and the newest versions of other test frameworks now support the idea of a unit test that is async, and can therefore use the await keyword to block on a call that returns a Task.

One of the interesting things about this occurs when you use async with a faked interface that contains an async method. Consider the case where you have an interface that returns Task<TResult> because it is expected that some or all of the implementors could be long running. Your interface definition might look like this:

```csharp
public interface IDoStuff
{
    Task<string> LongRunningOperation();
}
```

When you are testing a class that consumes this interface, you will want to provide a fake implementation of that method. So what is the best way to do that? (Note: I will use VS11 Fakes for this example, but it really doesn’t matter.)
You might write a test like this:

```csharp
[TestMethod]
public async Task TestingWithRealRunningTasks()
{
    // Arrange
    var stub = new StubIDoStuff
    {
        LongRunningOperation = () => Task.Run(() => "Hello there")
    };
    var sut = new SystemUnderTest(stub);

    // Act
    var result = await sut.DoSomething(); // This calls the interface stub

    // Assert
    Assert.AreEqual("Interface said 'Hello there'", result);
}
```

Assuming DoSomething() produces the formatted string that is expected, this test will work. But there’s a bit that is unfortunate… You actually did spin off a background thread when you called Task.Run(). You can confirm this with some well-placed breakpoints and a look at the threads. But did you need to do that in your fake object? Not really. It probably slowed your test down a bit, and it really isn’t required.

The System.Threading.Tasks namespace includes a class you can use to help with these kinds of things: TaskCompletionSource<TResult> (see the MSDN docs). This very cool class can be used for a lot of different things, like making event-based async code live in a TAP world. Stephen Toub says a lot about TCS in his post The Nature of TaskCompletionSource, but the part most relevant to us here is where he says:

Unlike Tasks created by Task.Factory.StartNew, the Task handed out by TaskCompletionSource does not have any scheduled delegate associated with it. Rather, TaskCompletionSource provides methods that allow you as the developer to control the lifetime and completion of the associated Task. This includes SetResult, SetException, and SetCanceled, as well as TrySet var[...]
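Putting TCS to work, a reworked version of the earlier test might look like the sketch below. This is my illustration of the approach the post describes; note that in .NET 4.5, Task.FromResult("Hello there") collapses the first two lines into one:

```csharp
[TestMethod]
public async Task TestingWithoutRealRunningTasks()
{
    // Arrange: hand back an already-completed Task,
    // so no thread pool work is ever scheduled.
    var tcs = new TaskCompletionSource<string>();
    tcs.SetResult("Hello there");

    var stub = new StubIDoStuff
    {
        LongRunningOperation = () => tcs.Task
    };
    var sut = new SystemUnderTest(stub);

    // Act: the await completes synchronously because the Task is done.
    var result = await sut.DoSomething();

    // Assert
    Assert.AreEqual("Interface said 'Hello there'", result);
}
```

Same assertions, same code path through the system under test, but no background thread was created along the way.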

Editing Octopress/Jekyll posts in Vim


Update - I was able to get backtick code blocks working much better, and made a stab at the YAML Front Matter, but it doesn’t seem to work using syntax include. See the git repo for the updated source.

I use Vim as my day-to-day, non-IDE text editor. Yeah, I know everyone is in love with Notepad2, Notepad++ or whatever the new favorite on the block is. I’ve been a vi/vim guy for ages and am not gonna change. Since switching my blog to Octopress, I’ve been writing all my posts in Vim.

Vim does a nice job with Markdown, but it doesn’t know anything about the other things that are often used in a Jekyll markdown file. The two big things are:

- It gets confused by the YAML Front Matter
- It can go nuts over some of the Liquid filters and tags

Fortunately Vim has a nice way of letting you add new things to an existing syntax definition. You just create another syntax file and put it in the after directory in your ~/.vim directory. Then you just add the new syntax descriptors and restart Vim.

For the first problem, I found a blog post by Christopher Sexton that had a nice regex match for the YAML Front Matter. He has it included in his Jekyll.vim plugin (which I don’t use, but it is pretty cool). A quick catch-all regex for Liquid tags and another for backtick code blocks, and it works pretty damn well. Here’s the code:

markdown.vim

```vim
" YAML front matter
syntax match Comment /\%^---\_.\{-}---$/ contains=@Spell

" Match Liquid Tags and Filters
syntax match Statement /{[%{].*[}%]}/

" Match the Octopress Backtick Code Block line
syntax match Statement /^```.*$/
```

I do think it would be cool if I could do a few other things:

- Actually use YAML syntax coloring in the Front Matter. I’d like not to have to reimplement the YAML syntax to accomplish this, but from looking at the way the HTML syntax works for javascript, I may have to.
- Build in understanding of the Octopress codeblock tag and disable Markdown syntax processing within it.
It also has a three line ftplugin tweak to force markdown files to use expandtab and use a 3 character tabstop. Since I typically keep tabs (I don’t have expandtab in my vimrc) and since Markdown actually uses the spaces to mean things, this just works better. If you don’t like that part, just delete the file. If you want to use it, I recommend using pathogen and then clone the GitHub repository into your bundle folder. [...]

Rules of the Road (Redux)


It is so much fun to go back and look at old posts. I saw Scott Hanselman mention on Twitter that he’d recently marked his 10th anniversary as a blogger. Since I just converted all my posts from SubText to Markdown, I’ve been going through the older ones. Sometimes I find one and say “Ugh, did I really say that?” but other times I find a good one and it still resonates with me as much as it did when I originally wrote it. This is about one of those good ones.

The post I found was from January 29, 2009 and was called Rules of the Road. In that post I talked about how I found these stashed away in a OneNote file from 2006, so these have been with me for a while.

When I was growing up, my dad occasionally had issues with an ulcer in his duodenum. It was stress related. He was a very Type-A kind of person, in an occasionally stressful job, with all the stresses that one expects starting out a family. (You know… money, kids, etc.) It was during this time that he started using three rules to help him deal with it. He told me those rules as I became older and was in college, where there can be similar but different stresses. I didn’t really take them to heart though until I became a manager and really started to experience stress. Since then I’ve added some more to his list.

The Rules

1. Don’t stress out about things you can’t control - ignore them
2. Don’t stress out about things you can control - fix them
3. If you have an issue with someone (anyone), talk to them about it immediately, do not let it fester
4. Help people who politely and sincerely ask for help
5. Fight for what you believe in
6. Admit when you are wrong and don’t be afraid to apologize
7. Reserve the right to change your mind
8. You do not have to justify saying no to someone

Let’s look at each one.

1. Don't stress out about things you can't control - ignore them
2. Don't stress out about things you can control - fix them

The first two of my dad’s original list are really the most important of the bunch.
They help you manage and control your own reactions and expectations, and can play a huge part in increasing your personal happiness. It is amazing how often people will stress out and complain about things over which they have no control. It is even more amazing how often people whine and complain about things they can control. This is related to the old adage, “Change your environment or change your environment.” Don’t like your job? Fix it or quit. You are in control of that. The same attitude can be applied to just about anything that stresses you.

3. If you have an issue with someone (anyone), talk to them about it immediately, do not let it fester

Rule #3 is really just a special case of Rule #2, but pertaining to people. This special case is important because we often forget that our relationships and communications with our peers, colleagues, friends and family are in our control. If you have an issue with something someone said or did, burying it in your subconscious will only make it worse. These conversations can be hard, but they are essential to maintaining good relationships with people.

4. Help people who politely and sincerely ask for help

This one is obvious to me, but it is particularly important on teams. Your family, friends and team should take priority over almost everything else. And when one of them comes to you with a request for help, help them.

5. Fight for what you believe in

This is another special cas[...]