Subscribe: Jeff does Server & Tools Online
http://weblogs.asp.net/jeff/Rss.aspx

Jeff Makes Software



The software musings of Jeff Putz



 



Adventures in load testing

Sun, 25 Sep 2016 12:52:41 GMT

I'm at a point now where POP Forums is mostly ported to ASP.NET Core, and I'm super happy with it. I've done some minor tweaks here and there, but mostly it's the "old" code running in the context of the new framework. I mentioned before that my intention is not to add a ton of features for this release, but I do want to make it capable of scaling out, which is to say running on multiple nodes (scale out, as opposed to scale up, which is just faster hardware). I haven't done any real sniffing on SQL performance or anything, but I noticed that I had some free, simple load testing available as part of my free-level Visual Studio Team Services account, so that seemed convenient.

Getting the app to run across multiple nodes involves a few straightforward changes:

  • Shared cache. v13 used in-memory cache only, but swapping it out for something off the box is easy enough. Since Redis is all the rage, and available in Azure, I picked that.
  • Changing the SignalR backplane for the real-time client notifications of updates. When there's a new post, we want the browser to know. It turns out Redis does pub/sub as well, so I'll add that. It looks like it's almost entirely configuration.
  • Queued background activity. There are four background processes that run in the app, right under the web context: email, search indexing, scoring game calculation and session management. Except for the last of those, I had to do some proper queuing so multiple instances could pick up the next thing. It has been a solved problem for a long time, and wasn't hard to adjust.

The incentive to run multiple nodes isn't strictly about scale, it's also about redundancy. Do apps fail in a bad way very often? Honestly, it hasn't happened for me ever. In the cloud, you mostly need redundancy for the few minutes when an instance's software is being updated. I suspect I've encountered this at weird times (my stuff only runs on a single instance), when the app either slowed to a crawl or wasn't responsive.

Writing a cache provider for Redis was pretty straightforward. I spun up two instances in my Azure dev environment, and with some debugging info I exposed, I could see requests coming from both instances. I had to turn off the request routing affinity, but even then, it seems to stick to a particular instance based on IP for a while. More on that in a minute.

I've started to go down the road of hybrid caching. There are some things that almost never change, chief among them the "URL names," the SEO-friendly forum names that map to forum ID's appearing in the URL's, and the object graphs that describe the view and post permissions per forum. I was storing these values in the local memory cache ("long term") and sending messages via Redis' pub/sub to both commit them to local cache and invalidate them. I think what I'm going to do is adopt StackOverflow's architecture of an L1/L2 cache: cached data only crosses the wire if and when it's needed, but collectively is only fetched from the database once.

The first thing I did was run some load tests against the app on a single instance, using various app service and database configurations, and all of them ran without any exceptions logged. This was using the VSTS load testing, which is kind of a blunt instrument at "250 users," because it doesn't appear to space out requests.
The tests result in tens of thousands of requests in a single minute, which wouldn't happen for 250 actual users unless they read really fast. If I set up just "5 users," using the lowest app service and database, it results in comfortable average response times of 49 ms and an RPS of 35. That's a real-life equivalent of 400-ish unique users, in my experience.

  App Service    Database       Average response time   Requests per second
  S1 (1 core)    B (5 DTU's)    2.5 sec                 261
  S2 (2 cores)   S0 (10 DTU's)  2.5 sec                 316
  S3 (4 cores)   S1 (20 DTU's)  1.5 sec                 522

Honestly, these results were better than I expected. That's almost 2 million usable requests per hour, and you can go [...]
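For what it's worth, here's a minimal sketch of the kind of two-level (local memory plus Redis) cache described above, using StackExchange.Redis, Microsoft.Extensions.Caching.Memory and Json.NET. The class, key handling and expiration choices are my own illustration, not the actual POP Forums provider, and it leaves out the pub/sub invalidation messages.

using System;
using Microsoft.Extensions.Caching.Memory;
using Newtonsoft.Json;
using StackExchange.Redis;

public class TwoLevelCache
{
	// L1: in-process memory cache; L2: Redis, shared across all nodes
	private readonly IMemoryCache _local = new MemoryCache(new MemoryCacheOptions());
	private readonly IDatabase _redis;

	public TwoLevelCache(string redisConnectionString)
	{
		_redis = ConnectionMultiplexer.Connect(redisConnectionString).GetDatabase();
	}

	public void Set<T>(string key, T value, TimeSpan expiration)
	{
		_local.Set(key, value, expiration);
		_redis.StringSet(key, JsonConvert.SerializeObject(value), expiration);
	}

	public T Get<T>(string key) where T : class
	{
		// Check the local (L1) cache first; only hit Redis (L2) on a miss.
		T value;
		if (_local.TryGetValue(key, out value))
			return value;
		var serialized = _redis.StringGet(key);
		if (serialized.IsNullOrEmpty)
			return null;
		value = JsonConvert.DeserializeObject<T>(serialized);
		_local.Set(key, value, TimeSpan.FromSeconds(30)); // keep a short-lived local copy
		return value;
	}
}

The real version also needs the Redis pub/sub channel so the other nodes can drop their L1 copies when something changes.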



Adding a Bootstrap CSS class for validation failure in ASP.NET Core

Mon, 05 Sep 2016 20:36:32 GMT

While porting a form from POP Forums to ASP.NET Core, I was surprised to find that there is not a TagHelper version of the old HtmlHelper AddValidationClass. In the old world, you could do a field group like this, using the Bootstrap magic:
@Html.TextBoxFor(m => m.Email, new { @class = "form-control" }) @Html.ValidationMessageFor(m => m.Email)
That would render the has-error class name in the parent div when the validation failed, and the Bootstrap CSS would shade the label and the text box. There is no equivalent of this with the TagHelper bits in ASP.NET Core. Sure, you can use the old HtmlHelper, but where's the fun in that?

Fortunately, this is a super-simple opportunity to write your own tag helper. I like these helpers because they look like HTML, and feel less abrupt than the @Html.Whatever style of the HtmlHelpers. You can render completely custom markup with your own tag, or you can have the framework make improvements to existing tags, like a div in this case. I think TagHelpers are also way easier to unit test, since they're full classes and not extension methods.

Thinking in design terms first, imagine that we want our markup to look like this to achieve the same effect after failed validation:
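Roughly like this, using the attribute names the tag helper below looks for (the inner label, input and validation tags are my own guess at the rest of the form group, not the original sample):

<div class="form-group" pf-validation-for="Email" pf-validationerror-class="has-error">
	<label asp-for="Email"></label>
	<input asp-for="Email" class="form-control" />
	<span asp-validation-for="Email"></span>
</div>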
What we want is for the tag helper to add the has-error class to the class attribute of the div, but only when the Email field has been marked as invalid. Here's the code I came up with.

using System.Linq;
using Microsoft.AspNetCore.Mvc.ModelBinding;
using Microsoft.AspNetCore.Mvc.Rendering;
using Microsoft.AspNetCore.Mvc.TagHelpers;
using Microsoft.AspNetCore.Mvc.ViewFeatures;
using Microsoft.AspNetCore.Razor.TagHelpers;

namespace PopForums.Web.Areas.Forums.TagHelpers
{
	[HtmlTargetElement("div", Attributes = ValidationForAttributeName + "," + ValidationErrorClassName)]
	public class ValidationClassTagHelper : TagHelper
	{
		private const string ValidationForAttributeName = "pf-validation-for";
		private const string ValidationErrorClassName = "pf-validationerror-class";

		[HtmlAttributeName(ValidationForAttributeName)]
		public ModelExpression For { get; set; }

		[HtmlAttributeName(ValidationErrorClassName)]
		public string ValidationErrorClass { get; set; }

		[HtmlAttributeNotBound]
		[ViewContext]
		public ViewContext ViewContext { get; set; }

		public override void Process(TagHelperContext context, TagHelperOutput output)
		{
			ModelStateEntry entry;
			ViewContext.ViewData.ModelState.TryGetValue(For.Name, out entry);
			if (entry == null || !entry.Errors.Any())
				return;
			var tagBuilder = new TagBuilder("div");
			tagBuilder.AddCssClass(ValidationErrorClass);
			output.MergeAttributes(tagBuilder);
		}
	}
}

Before I go into the details, note that your view needs an @addTagHelper directive, or you can put it in your _ViewImports.cshtml file. This directive lets you use whatever classes derived from TagHelper it can find. In my case, it looks like this:

@addTagHelper *, PopForums.Web

We have to inherit from TagHelper. Then we put an attribute on the class that says we're targeting the div tag for modification, and that we're looking for two specific attributes. The two properties are what our attribute values are mapped to. It's worth noting that because the For property is of type ModelExpression, we get nice Intellisense in Visual Studio that lets us pick a model property in the view. The ViewContext property is injected by ma[...]



No! You don't need to use ASP.NET Identity!

Wed, 24 Aug 2016 15:59:00 GMT

Going way back to, I think, .NET v3, ASP.NET had this new thing called Membership. Maybe it was a version earlier, I dunno. "Neat," I thought, I can write a provider adhering to this interface and use my existing user and auth structure to plug into this system. Then I saw that the membership and role providers each had about a bazillion (maybe quadbazillion) members to implement, and reality set in that what I already had was working just fine. Some years later, ASP.NET offered Identity, a newer thing that did sort of the same thing. It even made its way into Core.

You don't need it. For real. I'm not saying that it isn't a useful piece of the framework, but you need to stop making it the default for user management. It's not hard or time-consuming to build out your own system of user entities and permissions (roles, claims, etc.) as you see fit.

The problem, as I see it, is that developers are confusing the act of persisting user information with authentication. I get why that may be, as Identity uses one line of code to both verify a user and sign them in (the Core docs show how). But under the covers, there is code that first verifies the user/password against the database, then sets the auth cookie to indicate who the user is for future requests. You can in fact do one without the other. Why would you do that? Part of it may just be an issue of control, but for me, it's because I want to be very specific about how I structure my user data. I also don't really want to use Entity Framework in many cases (read: most things I port from older apps), and EF is part of the magic of Identity. What I've seen in a number of projects is the use of Identity mixed with a home-grown set of user domain objects and a totally separate database or persistence mechanism. If you're doing all of that plumbing anyway, you definitely don't need the additional overhead of Identity.

Let's use ASP.NET Core as an example, first. In Startup, we use the Configure method to use cookie-based authentication:

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
	AuthenticationScheme = CookieAuthenticationDefaults.AuthenticationScheme,
	AutomaticAuthenticate = true
});

In some kind of login method, from our MVC controller, we look up the user in the code that we wrote, with whatever backing store we made, and then sign in. Let's pretend that myUser is some construct we've made up:

var myUser = _myUserLookerUpperService(email, password);
var claims = new List<Claim>
{
	new Claim(ClaimTypes.Name, myUser.Name)
};
var props = new AuthenticationProperties
{
	IsPersistent = persistCookie,
	ExpiresUtc = DateTime.UtcNow.AddYears(1)
};
var identity = new ClaimsIdentity(claims, CookieAuthenticationDefaults.AuthenticationScheme);
await HttpContext.Authentication.SignInAsync(CookieAuthenticationDefaults.AuthenticationScheme, new ClaimsPrincipal(identity), props);

The code should be pretty straightforward. Whatever our domain-specific user thingy is, it's something built for us, instead of the generic thing that the Identity framework has created. We use that to construct a set of claims and authentication properties, and then use the built-in Authentication system to sign in with our newly constructed principal. This is what creates the encrypted cookie on the user's browser. It's not as magic as the Identity service, but remember that you're welcome to use any kind of schema that you want to persist user data, and that means you can query it or normalize it (if you must) against any other bits of data you have.
Naturally, you may want to set up some other context, or simply verify that they're still a known-good user on each request. To do that, you can wire up middleware in Startup's Configure method (app.UseMiddleware<T>();). Middleware doesn't use an interface (and I don't know why they chose convention over an interface), but it does expect an Invoke method to do stuff. It's here that you would look up the user based on the identity: public class MyMiddle[...]
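To show the shape of that convention, here's a minimal sketch of such middleware. The class name, the claim lookup and the commented-out service are my own placeholders, not the truncated MyMiddleware code from the original post.

using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class UserContextMiddleware
{
	private readonly RequestDelegate _next;

	public UserContextMiddleware(RequestDelegate next)
	{
		_next = next;
	}

	// Convention-based: no interface, just an Invoke method the framework calls.
	public async Task Invoke(HttpContext context)
	{
		var name = context.User?.Identity?.Name;
		if (!string.IsNullOrEmpty(name))
		{
			// Look up your own user record here and stash it for the rest of the request,
			// e.g. context.Items["CurrentUser"] = _userService.GetByName(name);
		}
		await _next.Invoke(context);
	}
}

It gets registered in Configure with app.UseMiddleware<UserContextMiddleware>();, after the cookie authentication middleware so the claims are already populated.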



POP Forums roadmap and ASP.NET Core

Fri, 12 Aug 2016 20:55:00 GMT

The volatility over ASP.NET Core made me pause (twice) since last fall when it came to porting POP Forums to the new platform. Every new release broke things to the point of frustration, and the RC2 reboot was hard. With SignalR falling a bit behind, it made things worse. But alas, that seems to be mostly behind us, and I've started committing stuff to a branch again, a little at a time, to run on the new ASP.NET. It's not at all usable, and every change seems to invite more changes, but I'm starting to see the potential and love for ASP.NET Core.

With the experimentation behind me, and a reasonable amount of stability, I'll start populating the issue log with tasks to do. I wanted to explicitly create a roadmap so I can stay focused. I do not have a timeline in mind, which is probably fine because I don't think people are flocking just yet to the new framework. The truth is that porting is not straightforward, and a lot of stuff can break. With that in mind, here's the plan...

POP Forums v14:

  • A straight port to ASP.NET Core, no new feature work. Right now I'm developing against .NET v4.6, but the goal is to go all Core. The two dependencies I'm using from the "old" framework are for image manipulation (the WPF stuff) and calling an SMTP server. Fortunately, it looks like ImageProcessorCore and MailKit, respectively, will solve those problems and allow me to go all Core and cross-platformy. I may even try writing code on a Mac with JetBrains' Project Rider, if I'm feeling sassy.
  • Get out of the single-node problem. The app today depends on in-process caching to be fast, which doesn't work if you have to spin up additional nodes. This requires some changes:
    • Use external caching, probably Redis. I've done a little testing, and it's super easy. It's also an easy-to-use service in Azure, where my stuff lives anyway.
    • Revisit background processing. A number of "services" run in the background in the context of the web app (sessions, emailer, search indexer, point/award calculation). These can probably run multi-node, or maybe as an option run stand-alone, but at the very least they need some queueing mechanism or something. You don't want two instances trying to send the same email at the same time. Solvable problem, just need to think it out. (There's a rough sketch of the queue-claim idea at the end of this post.)
    • Figure out configuration for SignalR backplanes. Knowing there is a new post before you hit "reply" is important, and to this day my favorite feature.
  • Developer friendly. The documentation has not been well kept, and as long as I could figure out how to get it working in my sites, that was good enough. I don't want to be that guy anymore.
  • Secondary objective: See if the app itself can be packaged in a way that you can hitch it on to an existing ASP.NET Core solution.
  • Secondary objective: Refactor as time allows. There is still stuff lingering from the WebForms days, like 2002-vintage code, and it's not my best work.

That's it. It sounds simple enough, but it's a ton of work. I've ported much of the base library and UI stuff, but I'm still sorting through views and the conversion of things like HtmlHelpers to TagHelpers, and using view components where it makes sense. I'm adapting slowly to claims-based auth and middleware, but still not going to take any dependencies on Identity or EF. I've yet to port unit tests, but part of that is because I'm trying to really break stuff out into places that make sense. Part of that was getting MVC and SignalR stuff out of the base library, for example.

POP Forums v15: Modernize the UI.
I'm not sure what this means yet, but you can do things in a web app that were not possible before, due to the rich world of open source software. I don't think it means going to a full on single-page application (SPA) model, because of the effects around indexability and SEO, but there is potential for new things. The classic "UBB" style has some long-standing conventions that work, so the idea is not to get radical, but keep text at the center of the world. Optimize for perf[...]
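Circling back to the queue-claim item in the v14 list above, here's a rough sketch of one way to let multiple nodes pull email work without stepping on each other, using a plain SQL table as the queue. The table and column names are hypothetical, not the actual POP Forums schema.

using System;
using System.Data.SqlClient;

// Atomically claim one pending email so only one node ends up sending it.
using (var connection = new SqlConnection(connectionString))
{
	connection.Open();
	const string sql = @"UPDATE TOP (1) dbo.EmailQueue
		SET ClaimedBy = @InstanceID, ClaimedAt = GETUTCDATE()
		OUTPUT INSERTED.MessageID
		WHERE ClaimedBy IS NULL";
	using (var command = new SqlCommand(sql, connection))
	{
		command.Parameters.AddWithValue("@InstanceID", Environment.MachineName);
		var messageID = command.ExecuteScalar(); // null when the queue is empty
		// ...load and send the message identified by messageID
	}
}

Because the UPDATE both marks the row as claimed and returns it in one statement, two instances can poll the same table and never grab the same message.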



ASP.NET Core middleware to measure request processing time

Mon, 30 May 2016 15:34:00 GMT

One of the things that ASP.NET Core promises is a faster, streamlined processing pipeline. Naturally, you start to wonder how fast your pages render before being spit out into the tubes. With the fantastic ability to chain middleware in the pipeline (think HttpModules and HttpHandlers, only without the bazillion events), it's super easy to wrap most of the processing in a timer.

In high-level terms, a request comes into the app and it is then "seen" by whatever middleware you have configured in the Startup class. If you've fired up a new project, you've already seen some of the included middleware configured in the Configure method using extension methods like app.UseMvc(), for example. These helper methods are likely calling something like app.UseMiddleware<T>(), where T is some type that includes an Invoke() method to do stuff, and a delegate to hand off processing to the next middleware. (This is all well documented, so I won't get deep here.)

It makes sense, then, that you can create your own middleware, and register it first in Configure to capture the time it takes for the entire process. Even better, you can do it inline without having to create a class for this simple output. It goes like this in the Startup class:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	app.Use(async (context, next) =>
	{
		// Stopwatch lives in System.Diagnostics
		var sw = new Stopwatch();
		sw.Start();
		await next.Invoke();
		sw.Stop();
		// Append an HTML comment with the elapsed time to the end of the response
		await context.Response.WriteAsync(String.Format("<!-- {0}ms -->", sw.ElapsedMilliseconds));
	});

	// all the other middleware configured here, including UseStaticFiles() and UseMvc()
}

Hopefully that's straightforward. Each middleware calls the next, so they execute in order up to the point of their next calls, and then any code after those calls runs in reverse order on the way back out. The important part here is that it injects a comment at the end of the stream with the number of milliseconds it took to execute the rest of the middleware pipeline, which likely includes all of the MVC bits. So with that in mind, a few caveats:

  • This is not well tested. I can see it write out some (really fast) execution times on pages, but I'm not aware of any unintended consequences.
  • I would rather write to the headers collection, but unfortunately it's read-only by the time the last line fires. I'm sure someone more clever than me can figure out a way to do this (there's one possibility sketched after this list).
  • I'm not sure what the consequences of doing this are on non-HTML output. I don't see the time being appended to static files, but I'm not sure what it does to images that are streamed out from the MVC framework (i.e., images taken from a database blob and written out from an action method).
  • This doesn't measure whatever overhead is involved in creating and managing the pipeline itself. I haven't looked very deeply into this to know what's going on, but as the point for ASP.NET Core is to ditch a lot of the crusties that came with old ASP.NET, I imagine it's not significant.
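On that second caveat, one approach that should work (a sketch I haven't battle-tested, and the header name is arbitrary) is to register a callback with Response.OnStarting, which fires just before the headers are sent and so can still write to them:

	app.Use(async (context, next) =>
	{
		var sw = Stopwatch.StartNew();
		// OnStarting runs right before the response headers go out,
		// which is the last moment they're still writable.
		context.Response.OnStarting(() =>
		{
			sw.Stop();
			context.Response.Headers["X-Elapsed-Milliseconds"] = sw.ElapsedMilliseconds.ToString();
			return Task.CompletedTask;
		});
		await next.Invoke();
	});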

Have fun, and if you have ideas about how to improve this, do let me know!




Collaborate and contribute vs. order taking and kingdom guarding

Tue, 19 Apr 2016 15:09:00 GMT

I was chatting with a coworker yesterday about the various kinds of IT work environments that we've been in. It was largely in the context of the kind of influence we have, depending on our career stage. I was making the point that it's easier to "sneak in" the right things when you get further along, a perk that I've enjoyed a bit in recent years. There is definitely a difference in the flavor of environments that are out there, ranging from the full-on IT-as-innovator shop to the stodgy old heads-down status quo.

When I say "IT," really I mean the software end of things. The hardware and infrastructure side of things is a different beast, though this is slowly changing as more companies adopt a devops world of virtualized everything and stop buying racks full of silicon that they'll eventually throw away. On the software side of things, there tends to be two m.o.'s at play, and it's striking how infrequently the shops fall somewhere in between (at least in my experience).

The first is the world where IT is a collaborator and contributor to the business. Good ideas come from everywhere. The IT people are engaged and understand the context of the business, so everyone from a junior developer up through management is able to identify an opportunity and suggest it to the other parts of the business. Those other segments embrace the ideas, and together the ideas are refined to turn a drip of awesomesauce into a steady flow. These are the companies that end up doing truly great things.

The other end of the spectrum is where IT is relegated to a customer service organization. Its intention is to take orders as they come along, and guard the kingdoms that they've set up. The other business segments aren't interested in getting ideas or innovation from IT, and IT is happy to just keep its head down until called upon. It will tell the business "no!" because of "security!" and other reasons that sustain its kingdoms. People are hired not for their ability, but because they can conform to this model and not ask too many questions.

I don't have to explain which scenario is better for any business, but the cultural leap to get from passive IT to full collaborator is not an easy one to make. The old stereotypes of the socially challenged guys in the basement who set up your printer are hard to shake. It isn't just the perceptions outside of the basement either, because there's a self-fulfilling prophecy among many software workers that, "This is where we belong." But consider this: The software people of the world are indexing the world's information (Google), making social interaction more global (Facebook), teaching cars to drive themselves (Tesla)... why would you not want the same kinds of forces that are changing the world changing your business? If you don't enable this culture, your competition will.




Another Azure outage, and why regional failover isn't straight forward

Sun, 10 Apr 2016 01:30:00 GMT

[This is a repost from my personal blog.]

It's been a rough month for my sites in the East US Azure region. On March 16, a network issue made it all fall down for about four hours. Today, on April 9, just a few weeks later, I've endured what might be the longest downtime I've ever had in the 18 years I've had sites online, including the time an ISP had to move my aging server and fight a fire in the data center. It will probably be a while until we see a root cause analysis (RCA), but the initial notes referred to a memory issue with storage. The sites were down for around 7.5 hours this time, and the rolling uptime over the last 30 days is now down to 98.5%. That's not very good. Previous outages include the four hours on 3/16/16, two hours on 3/17/15, and two hours for the epic, multi-region failure on 11/20/14. Fortunately, none of these involved data loss, which is the thing cloud services most need to get right. I moved into Azure about two years ago.

Here's the thing: I know firmly that CoasterBuzz and PointBuzz don't support life or launch rockets. The ad revenue lost is really not that much, which you could probably guess considering how much I complain about it. Still, the sites are an important time waster for a lot of people, and I've spent a lot of years trying to get some solid Google juice. When the sites are down, it harms their reputation with users and with the search engine bots that are trying to figure out how important the sites are. There is a cost, even if it isn't financial.

My costs are lower, and my flexibility and toolbox are better, since moving to Azure. No question about it. The hassle-free and inexpensive nature of SQL databases in particular is huge, especially the backups and the ability to roll back to previous points in time through the log. That said, the downtime in all but the broken 2014 incident came from regional issues, and the only way to get around that is to have everything duplicated and on standby in another region.

If the only issue was the apps themselves, this would be super easy to handle with Azure Traffic Manager. Sites go down, boom, they route to a different region. Where things get less obvious is when you have a database failure. Today's failure appears to have been caused by a failure of the underlying storage for the databases, so the apps returned 500 errors. In this case, ATM would presumably reroute traffic to my standby region, where I would have the sites ready to go and pointing to the failover database, also in the other region. In today's case, I'm not sure if that would have worked. The documentation says that the database failover won't happen until the primary database is listed as being "degraded," but for the entire 7.5 hours today, it was listed in the portal as being "online." It most certainly was not. The secondary database won't come online until the other fails. I assume I could manually force it, but I'm not sure. I'm also not sure what happens when the original comes back online in terms of synchronization, and designating it back as the primary. And what if the apps went down but the databases were fine? Traffic would roll to the other region, but wouldn't be able to connect to the local databases because they're not failed over (and no, I don't want to connect to a database across the country).

So really, there are two issues here. The first is the cost, which even for my little enterprise would add up a bit over the course of a year.
The secondary databases in another region would add around $25 per month. Backup sites would cost another $75 a month. ATM cost would be negligible. An extra hundred bucks seems like an awful lot for what I'm trying to do. I did see a good hack suggestion that says you can put the backup sites in free mode, and manually scale up if you need them, then point ATM at them. The second problem is that the automation is far from perfect. In the sites [...]



Developers: You have to share and mentor others, for the sake of our profession

Sun, 03 Apr 2016 02:45:13 GMT

Again this year, I did a couple of talks at Orlando Code Camp, the amazingly awesome free mini-conference that our local user group, ONETUG, has been putting on for a decade now. I am again fascinated by the vibrancy of our community, and all of the people who volunteer their time to share knowledge. It's humbling and amazing. (My decks are on GitHub, by the way. I won't rehash the mentoring and career development stuff here.)

One of the talks that I did was about mentoring developers. It's something I'm passionate about, and I think it solves a problem that keeps getting worse. Our profession doesn't have enough people to do the work, and the experience and skill level in the pool that we do have isn't high enough. And if you dispense with the egos and hyperbole often associated with some segment of developers, you start to see the pattern that our work has more in common with classic trades than it does a truly academic pursuit. In other words, it's more like learning how to be an electrician or carpenter than it is learning to be a doctor. You need experienced people to teach you how to do the work, hands on.

With that in mind, mentoring has to be a part of our daily routine when we're in senior positions and when we're managers (assuming that we're managers who code). I didn't mention this in the talk, but I flatly reject the idea that we don't have time to do this. It's built in to everything we do, whether it's formal code reviews, pairing activities or informal talk about life. You can read all of the blog posts and StackOverflow answers in the world, but unless you have contextual, interactive opportunities with other humans, you won't gain the experience that you need to improve your skills.

That's why it's vitally important to get involved with something like a local code camp or user group. A strong technology market doesn't get strong just by having the right companies move in. It's ultimately composed of people that make work happen. If you're in any career stage where you feel like you have something to share, to pass on your experience, do it. The amount of work there is to do keeps growing, and the number of people who can do it isn't keeping pace. There's no need to protect your knowledge. Share it. Our profession depends on it.




EF7 RC, navigation properties and lazy loading

Mon, 28 Dec 2015 03:00:14 GMT

Jumping into the brave new world of .NET as open source has been an experience, to say the least. The feedback loop is tight, things change quickly, and it's definitely a different world than the days of big bang releases. I think it's a great thing, but admittedly, it makes the early adoption thing a lot harder. Sometimes I find myself disappointed (as with the deferred release of SignalR 3, for example). Still, the scope of the frameworks and the number of people working on them is impressive, and I look forward to this new world.

Entity Framework 7 is a total do-over, and to that end, it is different in a great many ways. I wanted to write about navigation properties and the way they work now (as of RC1), because it's an important departure from previous versions. In short, lazy loading does not happen for navigation properties. The current road map says it's a high-priority feature, as the feedback around GitHub is pretty, uh, "vibrant." Still, given that they planned to be feature complete for the RC, I'm not going to hold my breath.

I don't think this is as big of a deal as people are making it out to be, as it just requires a slightly different approach. You have to think about what you want when you build the actual query. For example, say you have a table called Posts with a navigation property called User that maps to a Users table. All of the conventions still work... there are no annotations or special model builder options to configure. Your classes can look like this:

public class Post
{
   public int PostID {get;set;}
   public int UserID {get;set;}
   public virtual User User {get;set;}
}

public class User
{
   public int UserID {get;set;}
   public virtual ICollection<Post> Posts {get;set;}
}

Again, the dbcontext is conventional:

public class BlogContext : DbContext
{
   public DbSet<Post> Posts { get; set; }
   public DbSet<User> Users { get; set; }
}

The big change is in the query. If you get some posts and want those User properties hydrated, you need to specify that up front. Your query will look something like this:

var posts = _context.Posts.Include(x => x.User);

If you look at the underlying SQL that's generated, you'll see a join as expected.
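As a quick illustration (my own example, not from the original post), skipping the Include means the navigation property simply comes back null, while the Include version hydrates it via the join:

// Without Include, there's no lazy loading in RC1, so post.User is null:
var post = _context.Posts.First(x => x.PostID == 1);

// With Include, the join happens up front and User is populated:
var postWithUser = _context.Posts.Include(x => x.User).First(x => x.PostID == 1);
var userID = postWithUser.User.UserID;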

It's a little different, but I don't think it's different bad. Diving in and just now using it, this threw me off for an hour. Hopefully it will prevent you from doing the same.




Beating localization into submission on ASP.NET 5

Sat, 17 Oct 2015 19:44:00 GMT

While I'm still enthusiastic about ASP.NET 5, there are things that I run into that seem way harder than they should be. Beta software, no docs, I know... I need to keep my expectations in check. Localization is just such a thing. The samples currently in GitHub alongside the code lack context, especially relative to what we're all used to.

I'm in the slow process of porting and refactoring POP Forums to the new bits. I've been fortunate that people have contributed a bunch of translations, now totalling six languages. It's the standard resx files embedded in a class library. Using the resources has not been straightforward. Understanding the new and preferred way to do stuff isn't even remotely obvious, even with the blog posts around it, which I think make too many assumptions about what one might know. I'm not suggesting any of it is "wrong," just ranting because I couldn't figure it out.

So here's how I got there, putting resx files in a portable class library (PCL) and using them from the web project.

  1. Drop the resx files into your PCL project, including the .designer.cs file. I have this:
    (image)
  2. Right-click on Resources.resx and click "run custom tool." This will regenerate the designer file, and do it wrong.
  3. Open the designer file. The UI for the resx files won't let me select internal or public, so I did a manual replace all from "internal" to "public." I also had to change the namespace (it made it PopForums.Resources, making the full type name PopForums.Resources.Resources). Then in the static ResourceManager member, you'll have to also change the type name to match.
  4. Open up your project.json file. In order to embed the resx files, you'll need to let the compiler know. Add these two parts:
     "resource": ["Resources/*.resx"],
     "namedResource": { "PopForums.Resources" : "Resources/Resources.resx" }

    The first pair tells the compiler where to look for the files to embed; the second, I'm honestly not sure about. I found a half-clear anecdote about using named resources, and I apparently needed it to make this work.

  5. In the web project, I could then use the resources directly in my views, e.g., PopForums.Resources.MarkAllForumsRead for a button label.
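For reference, a view usage is then just a static property reference (here using the full type name from step 3; adjust to however your namespace actually ends up):

    <input type="submit" value="@PopForums.Resources.Resources.MarkAllForumsRead" />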

I'm sure this isn't right, if only for the fact that adding anything to the resource files will require you to again manually change a bunch of stuff. I'm not even sure if you can use the other languages (I haven't tried). At this point, I'm less interested in the right way and more interested in moving forward with other stuff. I can come back to this, but along with things like configuration, data access and such, localization has to be at least hacked into submission so I can keep working. Hopefully this helps someone out for their own short-term fix.




ASP.NET 5/MVC 6/.NET Core is going to be a tough adjustment for some

Sat, 17 Oct 2015 03:16:00 GMT

Wow, time flies when work is keeping you extra busy. I haven't done a lot of work on POP Forums or done any speaking gigs since spring. I feel like a bit of a slacker! Fortunately, there's something new to talk about.

Unless you've been living under a rock for the past year, you know that .NET and ASP.NET (and MVC, obviously) are undergoing a pretty massive overhaul, all before your eyes as a series of open source projects. It's pretty crazy to see, and it's awesome. When I started to see MVC as a much better way to build web apps in 2009, it was still kind of stuck on top of ASP.NET, which was primarily engineered to support WebForms. In retrospect, WebForms was something of an ugly abstraction to make the web work like VB6. The more we started using Javascript to do stuff, the more obvious that became. I would suggest that ASP.NET 5 is very much a bold departure from that old model.

As ASP.NET 5 is now in its final beta (8!) before the first RC, I figured now would be a good time to be a bit more serious about understanding it. As excited as I am about it, it's starting to feel clear to me that it's not something that you're going to be that anxious to port existing apps to. It's very different, and it breaks a lot of things. Naturally, my first attempt at understanding it is to port the forum app, because nothing says real life like, uh, a real app. Greenfield work is going to go a lot more smoothly, that's for sure. In any case, after spending a few evenings with it here and there, here are some of my initial impressions:

  • Adopting the OWIN pipeline model makes a lot of sense, and I don't think anyone is really going to miss the event model of the previous ASP.NET. HttpHandlers were already made largely obsolete by MVC, and middleware takes the place of HttpModules.
  • System.Web is gone, but the things that it did can be found in a number of other places. You just have to look around. Sometimes, a lot.
  • With FormsAuthentication going away (as best as I can tell) in favor of claims-based auth, all of the samples you find depend on Identity. This annoys me to no end. Remember the Membership API? It was awful. Identity is better, but it still tries to be everything to everyone, and in the end, I'm not generally interested in using it, or writing my own replacement for the EF provider. I'd like code samples and docs that cut through the magic of the middleware and let me do what I want with the claims (especially for the 3rd party auth).
  • You will encounter strangely broken things. For example, data readers apparently now make the Close() method private. It will get called when you Dispose().
  • Did you use HttpRuntime.Cache back in the day? Me too. It's the reason why the forum is so fast. Unfortunately, you can't do that in multi-node web apps, but if you want to, there is an in-memory cache package, as well as one for Redis. That's awesome.
  • Dependency injection is built in, and that's cool and all, but obviously I have no idea how good it is. I'm a big fan of StructureMap, but for now I'm using the built-in containers, which you have to manually configure. That's why I love SM... the convention-based config.
  • It sure seems like you can package an entire app or area as a NuGet package. I have no idea how, but it's something I'm already thinking about.
  • Old school resx files for resources work, but it's apparently not the way going forward. I haven't read up on it, but I hope it's not a huge change.
  • The new configuration scheme is so much better than the old, and infinitely more flexible. Sure, you should still be building strongly typed wrappers around your configuration, but using JSON over XML is very nice. (There's a rough sketch of this at the end of the post.)
  • ViewComponents and tag helpers are a great evolution when it comes to chunking out pieces of UI. Remember how you would typically have to [...]
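To illustrate the strongly typed configuration point, this is roughly what the wrapper looks like (the settings class name and JSON keys are made-up examples, and the exact options APIs have been shifting between betas):

// appsettings.json: { "ForumSettings": { "PageSize": 20, "CacheSeconds": 90 } }
public class ForumSettings
{
	public int PageSize { get; set; }
	public int CacheSeconds { get; set; }
}

// In Startup.ConfigureServices, bind the config section to the class:
services.Configure<ForumSettings>(Configuration.GetSection("ForumSettings"));

// Then take IOptions<ForumSettings> anywhere the container builds the object:
public class ForumController : Controller
{
	private readonly ForumSettings _settings;

	public ForumController(IOptions<ForumSettings> settings)
	{
		_settings = settings.Value;
	}
}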



My last big Azure problem: The unexplained CPU spike

Sat, 02 May 2015 17:01:41 GMT

Over the last year and change, I've written a number of posts about the challenges around moving my sites (primarily CoasterBuzz and PointBuzz) to Azure. These are moderately busy sites that can for the most part live in a small websites, er, app service, instance, with SQL Azure S0 for the data, some storage here and there, queues, and some experimental use of search, Redis and what not. I know I've done some complaining, but only because I really want one of the core premises of the cloud, namely for all of the maintenance and backend stuff to be abstracted away, to deliver without me thinking about it. Mostly, it has.

But I've got one more thing that crops up every now and then, maybe once every two months, that I can't explain. Specifically, one of the sites will start showing a ton of CPU time, which of course slows the other one as well. To be perfectly honest, sure, it could be something in my code, but I would expect to see some other indicator that had some correlation. Memory usage doesn't increase, there's no associated traffic spike, and it's so infrequent that it seems weird that anything I'm doing would cause it. The resolution is simple enough: restart the app with the CPU spike.

(image)

I don't want to just blame the platform, but I'm not sure where else to look at this point. A memory dump doesn't show anything out of the ordinary.




One year with sites in Azure

Sat, 11 Apr 2015 01:01:00 GMT

My sites (CoasterBuzz and PointBuzz) have now been in Azure for about a year. It has been interesting, for sure. From a high-altitude perspective, I can tell you that I've saved money when compared to renting a dedicated server, but there have been challenges in terms of reliability. Because of my early work using Azure when I was working at Microsoft (MSDN stuff), I looked forward to the day when I could move my apps there. Back then, in 2010, all we had were worker/web roles and storage, and SQL Azure was, uh, not what I'd call robust. A year ago, the economics made sense, the feature set was amazing, and here I am.

In terms of cost, the combination of resources I use has been 25% cheaper than the dedicated server (which I paid $167 for each month at SoftLayer). Mind you, I'm talking only about monthly consumption, because the dedicated server involved licensing for different things over the years, most notably a SQL license. So that makes me generally happy, especially because ad revenue has really sucked for the last year.

Performance has been a mixed bag, but appears to be getting better for a number of reasons. Both of the big sites are now running on the newest everything, including POP Forums, which makes a difference in terms of computing performance. But I also think that they've changed some things in the background, or some magic has happened where I'm on less stressed hardware... who knows. Pages are now being served in less than 50 ms in some cases, as viewed from Azure's Virginia monitoring point. More importantly, Google is now fetching pages in around 100 ms, which is good for search juice. I had fast rendering on the dedicated hardware, but SoftLayer's connectivity seemed to have more latency.

In my case, I'm running the sites on a standard small instance (1 core, 1.75 gigs of RAM), at $74 per month. My apps can't do multi-instance, because they use in-memory cache (HttpRuntime.Cache, for the nerds). I have done some experimentation with Redis cache, and I could probably set it up in production if I had to, because the caching code is easily swapped out (use your dependency injection, kids). I just don't want to spend twice as much. Yet. I would love to have the simple redundancy.

Here's the problem though... I'm always battling memory pressure. The two apps together only consume about 600 MB on average, but apparently there can be a good gig of overhead from the OS running the VM. And those nifty diagnostic bits (the Kudu stuff on the "scm" subdomains) can be a big memory hog as well. Probably most importantly, you should turn off your other deployment slots if you're not using them. As such, I only keep my staging sites running when I'm in the process of deploying new code.

Frustratingly, the monitoring for websites, er, Web Apps now, doesn't look at the underlying virtual machine as a unit. Well, it sort of does. If you're willing to use the "preview" portal, which has been preview for more than a year now, you can take an awkward path to the instance via one of the individual sites, er, apps, and see the memory and CPU graphs. It's not at all contextual, because it appears in a box called "quotas," and unless you know better, you don't know that it's for the sum total of all apps on the VM. And that's all assuming the preview portal works at all. Over the last year I've seen enough stuff not work, and clicked in vain, to just leave it alone.
That frustration was at its worst in the first few months, when I couldn't understand the memory pressure problem because of the lack of context. I get it now, and outside of the outages, things are pretty stable. SQL Azure performance has been totally predictable from the start. I'm running standard S0 databases for both sites, and my[...]



10 Things someone will pay for later

Sat, 28 Mar 2015 17:33:05 GMT

This is based on a talk I did at the 2015 Orlando Code Camp, but it strikes me as something worth blogging about. This is not meant to be an exhaustive list of anti-patterns, just a general category of things that end up being a huge pain at some point. Avoid these at all costs! In no particular order...

1. Poor or excessive project organization

We've all been here, right? There are people who show up to the party and believe, "We're doing agile, there is no organization!" Of course, you know better: Agile (big "A") is anything but free of process. Still others build kingdoms, throw up walls and generally get in the way of collaboration. While teams certainly make things happen, you still need a product owner who ultimately can drive feature decisions and understand what the stakeholders need. I still think it's the excess that gets in the way the most, however. On one hand, there are people who want to specify every little detail of an application before even a single line of code is written. You know how this goes... the outcome is never what they thought they wanted. At the other end, you have people who believe we need 40 people to sign off on something to get it into production. And no, SOx regulations don't require that, so don't be suckered into that line of thinking.

2. Naming stuff with names that no one but you understands

Stop using Hungarian notation. Your fancy IDE is pretty good at doing mouse-overs and discovering types. Use descriptive names for stuff that make it totally obvious what you're using. Conventions are also good, so if you have some group of things that generate stuff, use the word "generator" in the names (like "UnicornGenerator"). Be specific, too. It's much better to have something called "BostonTerrierGroomer" than "DogGroomer."

3. Dependencies on hard system boundaries

I would argue that this makes me rage more than most things. If the smaller parts of your software can't be tested or worked on without a database, file system or other external service, you're doing it wrong. Dependency injection should, in the general sense, be pretty familiar to most people by now, and I ask you, please, to embrace it and understand how to use it (there's a small sketch at the end of this post). Part of the reason it's critical is that you don't want to create barriers that make it difficult for a new developer to ramp up and start working on stuff. Of course the system needs those dependencies when it's running for real, but sometimes they do break. How do you deal with that? Instrumentation! Believe it or not, it's something that even big enterprises don't do well, and you can seriously consider this a career opportunity for hero status. It sounds obvious, but when you can dashboard the connectivity, latency, queue lengths, etc., between system pieces, it's easier to diagnose problems. Even better, start passing around context between systems and make them record it. Not sure which of the dozen systems that acts on your data did something naughty to it? Leave a paper trail!

4. Mixing real-time and pre-calculated business rules to output data

This is not entirely obvious. Imagine, if you will, an app that displays your checking account balance. If it's truly a ledger, then the balance is the thing on the last line. That's pre-calculated data. Of course, you could also find the same number by taking the starting balance and then applying every adjustment to it. Where you get into trouble is a system that does both under certain circumstances.
All of a sudden, a change in code alters the downstream outcome of the value you're expecting. I've seen this happen so many times because there was no deliberate choice to either calculate a value in real-time, or calculate it ahead of time. It's super important[...]
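To make number 3 concrete, here's a tiny illustration of my own (not from the talk): the hard boundary goes behind an interface, so the interesting logic can be exercised without a real file system or database.

public interface IFileStorage
{
	void Save(string path, byte[] data);
}

public class ReportGenerator
{
	private readonly IFileStorage _storage;

	// The boundary is injected, so a test can hand in a fake
	// and never touch a real disk or network share.
	public ReportGenerator(IFileStorage storage)
	{
		_storage = storage;
	}

	public void WriteDailyReport(string text)
	{
		var bytes = System.Text.Encoding.UTF8.GetBytes(text);
		_storage.Save("reports/daily.txt", bytes);
	}
}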



POP Forums v13.0.0 released for ASP.NET MVC!

Sun, 15 Feb 2015 00:26:00 GMT

Yes, it has been entirely too long since the last release, but v13 is here, and it's a big one! This is the first release here on GitHub.

Get the bits: https://github.com/POPWorldMedia/POPForums/releases

Demo: http://popforums.com/Forums

Upgrading?

Run the PopForums12.xto13.0.sql script against your database, which is found in the PopForums.Data.SqlSingleWebServer project.

What's new?

  • Completely revised UI uses Bootstrap, replaces separate mobile views.
  • New Q&A style forums.
  • Preview your posts for formatting.
  • Social logins using OWIN 2.x.
  • StructureMap replaces Ninject for dependency injection.
  • Admins can permanently delete a topic.
  • Facebook and Twitter links added to profiles.
  • IP ban works on partial matches.
  • Bug: Initial user creation didn't salt passwords (Codeplex 131)
  • Bug: Replies not triggering reindex for topics (#4).
  • Bug: LastReadService often called from Post action of ForumController without user, throws null ref (#1).
  • Bug: Reply and quote buttons appear in posts even when the topic is closed (#8).
  • Bug: When email verify is on, changing email does not set IsApproved to false (#10).
  • Bug: Image controller throws when Googlebot sends a weird If-Modified-Since header (#13).
  • Bug: Reply or new topic can be added to archived forum via direct POST outside of UI (#15).
  • Bug: Multiple entries to last forum and topic view tables causing exception when reading values into dictionary (#17).
  • Experimental: Support for multiple instances in Azure with shared Redis cache (not production ready).

Known issues

None.




Repositories gone wild

Thu, 29 Jan 2015 22:45:00 GMT

One of the very beneficial side effects of the rise of MVC in the ASP.NET world is that people started to think a lot more about separating concerns. By association, it brought along more awareness around unit testing, and that was good too. This was also the time that ORM's started to become a little more popular. The .Net world was getting more sophisticated at various levels of skill. That was a good thing.

But something I remember vividly is that a lot of tutorials were using a generic repository pattern. In other words, there was some kind of contextual data object that was used to query the data from a number of other places upstream. This was certainly better than finding data access code in the code-behind of an ASP.NET WebForms page. Those generic repositories still have some value, for example when paired with UI elements (namely the grids made by various component makers), though that flexibility comes with a price. So what price is that? Well, there are quite a few negatives that I've found. In no particular order:

  • The aforementioned UI elements don't know anything about what's indexed or how the underlying data is arranged. Separation of concerns, right? Sure, but there's a bigger context about application design that leads to trouble. Sure, you can sort or filter results, but the grid doesn't know how to do it efficiently through an ORM.
  • ORM's are already leaky abstractions, meaning you need to know something about the underlying implementation to make them work the way you want. The leaks come in many forms: needing to understand transaction scope, change tracking, when stuff is persisted, etc.
  • The resulting interface isn't really an enforceable contract, it's just a thin wrapper around the ORM. Testing ends up being mostly about matching the query syntax around the generic repository. If you're using repositories with very specific methods for specific actions, you don't need to worry about that querying syntax.
  • This marries you to the data persistence mechanism. I know what you're thinking, no one ever changes the database, but I'll get to that in a moment.

There are a lot of advantages to repositories that are domain specific in their contract. So for example, you have a repo for customer data, with methods like "GetCustomer" or "UpdateCustomerAddress" (there's a rough sketch at the end of this post). You have to think about how you work with context and transactions, but I think that's a problem solved by dependency injection. A lot of people will debate over whether or not you use data transfer objects (DTO's) or entities, or whatever, and that's fine, but my preference is to not rely on entity change tracking to decide what you persist. In other words, I prefer a method that takes a couple of parameters like "customerID" and "address" and doesn't read an entity, change it, then save it. That process requires knowledge of the underlying persistence layer or data access framework.

I know not everyone will agree with me, but I don't care for unit testing data access code either. Part of it is that it makes the tests slow, but mostly it's because I have no desire to test ORM's (which are presumably well tested), and if I'm using straight SQL, if it's at all complex, I'm already doing a lot of testing trying to get it right. Do I end up with bugs in the data access code this way? Sometimes, sure. It's a trade-off. But then there's that whole thing about not coupling your app to the persistence mechanism.
The usual response to that is that no one ever changes the kind of database they're using, and six or seven years ago, I would have agreed with that. But a funny thing happened when we started using new caching tools, load balancin[...]
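To put a shape on that domain-specific contract idea, here's a rough sketch of my own (not actual POP Forums code). The repository speaks in terms of the actions the app needs and takes plain parameters instead of tracked entities:

using System.Data.SqlClient;

public class Customer
{
	public int CustomerID { get; set; }
	public string Name { get; set; }
	public string Address { get; set; }
}

public interface ICustomerRepository
{
	Customer GetCustomer(int customerID);
	void UpdateCustomerAddress(int customerID, string address);
}

// A straight-SQL implementation; callers never see the connection or an ORM,
// so the contract stays enforceable and is easy to fake in tests.
public class SqlCustomerRepository : ICustomerRepository
{
	private readonly string _connectionString;

	public SqlCustomerRepository(string connectionString)
	{
		_connectionString = connectionString;
	}

	public Customer GetCustomer(int customerID)
	{
		using (var connection = new SqlConnection(_connectionString))
		using (var command = new SqlCommand("SELECT CustomerID, Name, Address FROM Customers WHERE CustomerID = @ID", connection))
		{
			command.Parameters.AddWithValue("@ID", customerID);
			connection.Open();
			using (var reader = command.ExecuteReader())
			{
				if (!reader.Read())
					return null;
				return new Customer
				{
					CustomerID = reader.GetInt32(0),
					Name = reader.GetString(1),
					Address = reader.GetString(2)
				};
			}
		}
	}

	public void UpdateCustomerAddress(int customerID, string address)
	{
		using (var connection = new SqlConnection(_connectionString))
		using (var command = new SqlCommand("UPDATE Customers SET Address = @Address WHERE CustomerID = @ID", connection))
		{
			command.Parameters.AddWithValue("@Address", address);
			command.Parameters.AddWithValue("@ID", customerID);
			connection.Open();
			command.ExecuteNonQuery();
		}
	}
}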



The great Azure outage of 2014

Thu, 20 Nov 2014 18:50:00 GMT

We had some downtime on Tuesday night for our sites, about two hours or so. On one hand, November is the slowest month for the sites anyway, but on the flip side, we pushed a new version of PointBuzz that I wanted to monitor, and I did post a few photos from IAAPA that were worthy of discussion. It doesn't matter either way, because the sites were down and there was nothing I could do about it, because of a serious failure in protocol with Microsoft's Azure platform. I'm going to try and be constructive here.

I'll start by talking about the old days of dedicated hardware. Back in the day, if you wanted to have software running on the Internet, you rented servers. Maybe you had virtual networks between machines, but you still had physical and specific hardware you were running stuff on. If you wanted redundancy, you paid a lot more for it. I switched to the cloud last summer, after about 16 years in different hosting situations. At one point I had a T-1 and servers at my house (at a grand per month, believe it or not that was the cheapest solution). Big data centers and cheap bandwidth eventually became normal, and most of that time I was spending $200 or less per month. Still, as a developer, it required me to spend a lot of time on things that I didn't care about, like patching software, maintaining backups, configuration tasks, etc. It also meant that I would encounter some very vanilla failures, like hard disks going bad or some routing problem.

Indeed, for many years I was at SoftLayer, which is now owned by IBM and was formerly called The Planet. There was usually one instance of downtime every other year. I had a hard drive failure once, a router's configuration broke in a big way, and one time there was even a fire in the data center. Oh, and one time I was down about five hours as they physically moved my aging server between locations (I didn't feel like upgrading... I was getting a good deal). In every case, either support tickets were automatically generated by their monitoring system, or I initiated them (in the case of the drive failure). There was a human I could contact and I knew someone was looking into it. I don't like downtime, but I accept that it will happen sometimes. I'm cool with that. In the case of SoftLayer, I was always in the loop and understood what was going on.

With this week's Azure outage, that was so far from the case that it was inexcusable. They eventually wrote up an explanation about what happened. Basically they did a widespread rollout of an "improvement" that had a bug, even though they insist that their own protocol prohibits this. But it was really the communication failure that frustrated most people. Like I said, I think most people can get over a technical failure, not liking it, but dealing with it. What we got were vague Twitter posts about what "may" affect customers, and a dashboard that was completely useless. It said "it's all good" when it clearly wasn't. Not only that, but if you describe a problem with blob storage while declaring websites and VM's all green, even though they depend on storage, you're doing it wrong. Not all customers would know that. If a dependency is down, then that service is down too.

The support situation is also frustrating. Basically, there is no support unless you have a billing issue or you pay for it. Think about that for a minute. If something bad happens beyond your control, you have no recourse unless you pay for it.
Even cable companies have better support than that (though not by much). Microsoft has to do better. I think what people really wanted to hear was, "Yeah, we messed up r[...]



Bootstrap in POP Forums, and why I resisted

Thu, 25 Sep 2014 01:01:08 GMT

I haven't been writing much lately, in part because I spent a good portion of my free time in the last week overhauling the POP Forums UI to use the Bootstrap framework. You can see what it looks like on the demo site. It took me a long time to cave and do this, but I think I had pretty good reasoning.

The forum app has always been at the core of my personal site projects, chief among them CoasterBuzz. I'm a little meticulous about markup and CSS. I hate having too much of it. I hated using jQuery UI because it felt like bloat. Grid frameworks always seemed to require more markup (and they still do), and global CSS almost always causes trouble with other stuff when you drop it in. All prior experiments with these things failed, and let's be honest... a two-column layout is a nut that has long since been cracked and requires very little markup or CSS. In order to bend the forums a little to match the rest of the site it would be contained in, there's incentive to keep it light in terms of CSS. I mostly achieved this. The number of overriding classes was not huge, and more global stuff around common elements mostly worked. You could basically drop the forum inside of a div and be on your way. It even worked pretty well on tablets, and in my last significant set of tweaks back in 2012, doing a responsive design didn't feel like a priority.

And then there was the mobile experience. One of the trade-offs for responsive is that you typically end up with more markup and CSS instead of less, so I wasn't ready to fully embrace that. Some people still weren't on LTE networks either, so I was a bit conscious of that. Since the UI rendering was done by ASP.NET MVC, it was easy to strip down the UI to mobile-specific views, and it only took a few hours to do it for the entire app, as well as CoasterBuzz. I also didn't force it, and users could choose mobile with a link at the bottom of the page. In fact, you can see it today on CB if you scroll to the bottom. It's super fast, super lightweight and concentrates on the reading of text. You can debate the merits of different views vs. responsive all day, but in this case it did exactly what I wanted with very little effort.

Around the time of that CB release in 2012, Twitter open sourced Bootstrap and it was starting to get popular. Early last year, it seemed like the web in general was starting to adopt its own look and feel, largely due to Bootstrap. It's like the web as an OS started to have a UI style guide. I was finally starting to think seriously about it because its use was so widespread, and they were even baking it into the MVC project templates. Then they released v3, and it broke a lot of stuff. That threw me back into caution mode.

Since that time, several things have motivated me to reconsider Bootstrap. Again there's the bigger issue of adoption, which has become pretty epic in scope. Then there are the themes, which are available in great numbers and range from free to cheap. It isn't hard to make your own either. I've also been dissatisfied with using mobile ad formats, because they don't pay, and the regular ones aren't well suited to mobile-specific UI. After two years of phone upgrade cycles, more people have more bandwidth and faster connections. On top of that, the devices themselves are faster at rendering. Oh, and most importantly, Bootstrap itself has very clearly matured. That's pretty compelling.

So I made the revisions and committed them. The admin pages haven't been updated, but I'll get there.
I feel like this gives me a good fresh start to make more changes and[...]



I moved my Web sites to Azure. You won't believe what happened next!

Fri, 29 Aug 2014 01:53:43 GMT

TL;DR: I eventually saved money.

I wrote about the migration of my sites, which is mostly CoasterBuzz and PointBuzz, from a dedicated server to the various Azure services. I also wrote about the daily operation of the sites after the move. I reluctantly wrote about the pain I was experiencing, too. What I haven't really talked about is the cost. Certainly moving to managed services and getting out of the business of feeding and caring for hardware is a plus, but the economics didn't work out for the longest time. That frustrated me, because when I worked at Microsoft in 2010 and 2011, I loved the platform despite its quirks.

My hosting history started with a site on a shared service that I paid nearly $50/month for back in 1998. It went up to a dedicated server at more than $650, and then they threatened to boot me for bandwidth, so I started paying a grand a month for a T-1 to my house, plus the cost of hardware. Eventually dedicated server prices came down again, and for years they were right around $200. The one I had for the last three years was $167. That was the target.

Let me first say that there is some benefit to paying a little more. While you won't get the same amount of hardware (or the equivalent virtual resources) and bandwidth, you are getting a ton of redundancy for "free," and I think that's a hugely overlooked part of the value proposition. For example, your databases in SQL Azure exist physically in three places, and the cost of maintaining and setting that up yourself is enormous. Still, I wanted to spend less instead of more, because market forces being what they are, it can only get cheaper.

Here's my service mix:

- Azure Web Sites, 1 small standard instance
- Several SQL Azure databases, two of which are well over 2 gigs in size (both on the new Standard S1 service tier)
- Miscellaneous blob storage accounts and queues
- A free SendGrid account

My spend went like this:

- Month 1: $204
- Month 2: $176
- Month 3: $143

So after two and a half months of messing around and making mistakes, I'm finally to a place where I'm beating the dedicated server spend. Combined with the stability after all of the issues I wrote about previously, this makes me happy. I don't expect the spend to increase going forward, but you might be curious to know how it went down.

During the first month and a half, only the old web/business tiers were available for SQL Azure. The pricing on these didn't make a lot of sense, because it was based on database size instead of performance. Think about that for a minute... a tiny database with massive use cost less than a big one that was used very little. The CoasterBuzz database, around 9 gigs, was going to cost around $40. Under the new pricing, it was only $20. That was preview pricing, but as it turns out, the final pricing will be $30 for the same performance, or $15 for a little less performance.

There ended up being another complication when I moved to the new pricing tiers. They were priced such that any instance of a database, spun up for even a minute, incurred a full day's charge. I don't know if it was a technical limitation or what, but it was a terrible idea. You see, when you do an automated export of the database, which I was doing periodically (this was before self-service restore came along), you incurred an entire day's charge for that exported copy. Fortunately, they're moving to hourly pricing starting next month. I also believe there were some price reductions on the Web sites instances, but I'm not sure.
There was a reductio[...]



You have a people problem, not a technology problem

Wed, 20 Aug 2014 18:25:32 GMT

[This is a repost from my personal blog.]

Stop me if you've heard this one before. Things are going poorly in your world of software development, and someone makes a suggestion. "If we just use [framework or technology here], everything will be awesome and we'll cure cancer!"

I like new and shiny things, and I like to experiment with stuff. I really do. But every time I hear something like the above statement, it's like nails on a chalkboard. You know, most of the NoSQL arguments over the last few years sound like that. It's not that the technology isn't useful or doesn't have a place, but when I look at it from a business standpoint, I have a perfectly good database system that happens to be relational, could do the same thing, is already installed on my servers, will scale just fine for the use case, and is something the people I employ already know how to use. Maybe I have something in production that uses it wrong, but that isn't a technology problem.

I'm sure we're all guilty of this at various points in our careers. We've all walked into situations where there is an existing code base, and we're eager to rewrite it all using the new hotness. It's true, there are often great new alternatives you could use, but I find it very rare that the technologies in play are inadequate; they're just poorly used. That kind of thing happens because of inexperience, poor process, transient consultants or some combination of all of those things.

The poor implementation is only part of the people problem. There is a big layer of failure often caused by process, which is, you know, implemented by people. For example (this is real life, from a previous job), you've come up with this idea of processing events in almost-real-time by queuing them and then handing them off to various arbitrary pieces of code to process, a service bus of sorts. So you look at your toolbox and say, "Well, our servers all run Windows, so MSMQ will be adequate for the queue job." Shortly thereafter, your infrastructure people are like, "No, we can't install that, sorry." And then your release people are like, "Oh, this is a big change, we can't do this." You bang your head against the wall, because all of this kingdom building and throw-it-over-the-wall lack of collaboration is 100% a people problem, not technology. Suggesting some other technology doesn't solve the problem, because it will manifest itself again in some other way.

What do you do about this? Change itself isn't that hard (if you really believe in the Agile Manifesto), but people changing is hard. If you have the authority, you remove the people who can't change. If you don't, then you have to endure a slower process of politicking to get your way. It's slow, but it works. You convince people, one at a time, to see things in a way that removes friction. Get enough people on board, and momentum carries you along so that everyone has to follow (or get off the boat). I knew a manager at Microsoft who was exceptionally good at this, and his career since has largely been about convincing teams that there is a better way.

At a more in-the-weeds level, you get people engaged beyond code. One of the weakest skills people in the software development profession have is connection with the underlying business. Mentor them so that they understand. Explain why the tool you use is adequate when used in the right context and will save the business time and money, compared to a different technology that has more cost associated with learning new skills, licenses or whatever.
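As an aside, the irony in that MSMQ story is how little code the technology side actually involves. A rough sketch of the queue-and-dispatch idea using System.Messaging might look like the following; the queue path and event type are made up for illustration and aren't from the situation described above.

    using System;
    using System.Messaging;

    // A hypothetical event payload; MSMQ's default XML formatter serializes it.
    public class UserEvent
    {
        public string Name { get; set; }
        public DateTime OccurredAt { get; set; }
    }

    public static class EventQueue
    {
        private const string QueuePath = @".\private$\userEvents"; // made-up queue name

        public static void Publish(UserEvent evt)
        {
            if (!MessageQueue.Exists(QueuePath))
                MessageQueue.Create(QueuePath);

            using (var queue = new MessageQueue(QueuePath))
            {
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(UserEvent) });
                queue.Send(evt);
            }
        }

        public static void ProcessNext(Action<UserEvent> handler)
        {
            using (var queue = new MessageQueue(QueuePath))
            {
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(UserEvent) });
                // Blocks until a message arrives, or throws if the timeout elapses.
                var message = queue.Receive(TimeSpan.FromSeconds(5));
                handler((UserEvent)message.Body);
            }
        }
    }

The friction in the story wasn't in writing something like this; it was in getting a queue installed and a change released, which is exactly the people problem.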
It's like the urge to buy a new[...]