Last Build Date: Tue, 7 Jul 2015 06:51:38 -0400
Copyright: Copyright 2015
Sat, 16 Mar 2013 04:28:19 -0400
Video: http://player.vimeo.com/video/61840275
Thu, 14 Mar 2013 06:37:57 -0400
Look, in the real world, things die. This turns out to actually be quite important. We've all seen the sci-fi movie where humanity figures out how to be immortal: it's always really, really bad news.
The problem is that when a species achieves some form of immortality, evolution stops. Dead in its tracks. Because death is a big part of the engine that keeps it going.
When you buy a physical product, you don't expect it to last forever: indeed, you know well that it can't. It's absolutely correct to whinge about planned obsolescence -- those infernal products that somehow manage to break one day after their warranty expires are the work of Satan. But we have no expectation that physical things be immortal.
Interestingly, in the few cases where they effectively are, we either fail to notice, or are puzzled by it. A few weeks ago, a friend came out of the loo, still rubbing her hands dry, and said "It's weird that toilets don't evolve, isn't it? Think about it: they've been the same basic solution for how many hundreds of years now?" I cocked my head at her and said, "Why is that weird? How often have you had a toilet die (ie. break irreparably)? As a species, the individuals are more or less immortal. Of course they never change."
But when the product is made out of thoughtstuff, when it's immaterial (ie. software), suddenly, expectations change. Suddenly, it's the norm that software should live forever (or at least as long as we do).
And that's doing -- and has been doing -- terrible harm to us all. The software that your insurance policy lives on is probably running on a mainframe, was written in COBOL, and has been alive for >50 years. Same thing (more or less) for the software that your bank account lives inside of, the software that runs your ATM, the software that makes the trains run, the airplanes fly, the power plant work, and countless other such minor, trivial examples.
And that sucks. That software sucks. It's like those terrible, deathless monsters in that sci-fi movie. It's really bad.
Of course, civilians often believe that the pace of software evolution is already too fast -- dizzying, overwhelming. That misapprehension is a relative one -- what civilians can't know is how much faster it could have been. If only software could die.
As it should.
It's kind of ironic, on some deep level, that I saw the news of G-Reader's death at more or less the same moment in time that I saw Harry Stamps' obit (HT: ). The contrast could hardly be starker. Of course it's right and proper to grieve the death of a beloved thing. But if you think the solution to that is to prevent the death -- well, then you're well on your way to that sci-fi dystopia. And that's a bad place to live.
Indeed, my larger point is: we've allowed that dystopia to already happen, in the software ecosystem. And it's already a bad place to live: much, much worse than it could have been.
RIP, Google Reader. Many people obviously loved you. I'm glad you're dead.
Wed, 13 Mar 2013 04:33:25 -0400
In December, I gave a talk to a group of young new hires / graduates at my company, under the auspices of the Young Management Consultants Association (yes, the YMCA -- don't ask me). They shot a lo-fi video (which also cuts off the last 30 seconds or so at the end), so as a record of the event it's not perfect. But the talk got a lot of good feedback, and I keep getting asked about it since, so I figured I'd throw it out somewhere I can point to it and email a link.
Video: http://www.youtube.com/embed/qimnUzkfbYY
And here are the accompanying slides…
Slides: http://www.slideshare.net/slideshow/embed_code/17156621
Fri, 16 Dec 2011 11:05:08 -0500
For those of you who are going, “CSC? Really?” — yes, really. And they’ve been quietly growing far faster than any other VMware-based provider, so for all you vendors out there, if they’re not on your competitive radar screen, they should be.
Sat, 31 Jul 2010 11:42:35 -0400
I had the tremendous privilege of speaking at O'Reilly's OSCON as part of the Cloud Summit, organised by my friend (and as of recently, colleague) Simon Wardley. I had 20 minutes to talk, and was given the first slot of the morning, at 8:45. Since, all other things being equal, I rarely rise before nine, this was a bit of a challenge for me.
The entire summit was brilliant, and I encourage you to check out the slides and videos from the other speakers as well. In particular, I liked JP Rangaswami's talk, and the panels on open standards, open APIs and the closing panel on the future of the cloud.
Finally, the always brilliant Simon did the closing keynote on Friday, and his talk is a must-see.
Sun, 27 Jun 2010 13:23:41 -0400
This is the talk I did in Milan last month at the International Forum on Enterprise 2.0 about CSC's experience with E2.0 within our own organisation. I had the last slot before lunch, and we were on Mediterranean time (meaning -- we were running chronically late ;) ), so I had to do a 35 minute talk in 20 minutes. So I talked even faster than I normally do, which is close to warp speed anyway.
Wed, 7 Apr 2010 09:34:25 -0400
Well over a year ago, in a conversation with Alexis Richardson, I came up with a catchy acronym to articulate an idea that I had been kicking around as a simple way to respond to all of the Sturm und Drang in the press and the blogosphere about "lock-in", "data portability" and the reliability of cloud computing providers. I said -- "You know what, mate, done properly, it would be like a RAID setup -- it would be an array of cloud providers. Umm, yeah, it would be RAIC! 'Redundant Array of Independent Cloud providers'". Alexis, as I recall, burst out laughing, and said something like "You better trademark that, Mark. That's great."

A few weeks later, I sat down and wrote a blog post to try to describe the idea in some detail. That post has since become the most popular post on my blog, ever, but that's largely because people hotlink to the image of Wile E. Coyote that I included in it, apparently, and has little to do with the rest of the content. And as it turned out, it's good I didn't try to follow Alexis' advice about that trademark stuff, as an angry commenter let me know (quite correctly) that he was the first to publish the term. :D

Despite all that, the term has gotten some traction. I encounter it now from time to time in other people's writings, and I get a lot of questions about it. By and large, the questions are a consequence of my own laziness and bandwidth constraints. That first blog post was never intended to stand alone -- I meant to follow it up with one or more posts, expanding on the idea and explaining what I meant in more detail. Since I never got around to doing that, I can't blame anyone but myself if people are left confused, or have questions.

A few months ago, I was asked by a CSC colleague in Holland if I could contribute a chapter to a book that is being published (in Dutch) there in the coming year on cloud computing. I said, "Sure, I'll write about RAIC!" And so I did.
What follows is the English-language input I provided.

Redundant Arrays of Independent Cloud computing providers – RAIC

At one point, many years ago, during the early period of what has since come to be known as the "client/server revolution", the reliability of hard disk drives in mainframe systems was a powerful sales argument for manufacturers of such systems. Defending their markets from new and aggressive competitors, they made the argument that the hard drives used by their competitors were too unreliable (by comparison) and offered unacceptable performance for mission critical work. This argument was helped immensely by the fact that it was, by and large, true.

In 1988, David A. Patterson, Garth A. Gibson and Randy Katz at the University of California, Berkeley, published a paper entitled "A Case for Redundant Arrays of Inexpensive Disks (RAID)" at the SIGMOD (Association for Computing Machinery's Special Interest Group on the Management Of Data) Conference. This paper laid the foundation for a relatively simple, but extraordinarily effective response to the limitations of disk storage in low cost, client/server systems.

Simple queuing theory mathematics demonstrates that an array of service providers, working in parallel, provides higher bandwidth than any equivalent single service provider can. But the low cost disks being used in client/server systems seemed unsuitable for such parallel arrays, because they were of relatively low quality, and correspondingly unreliable. The RAID idea was to combine N disks in a redundant manner. This would compensate for the inherent unreliability of the hardware, and allow systems to exploit parallelism for higher bandwidth. The Berkeley paper went on to outline sev[...]
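The trade at the heart of the idea -- stripe cheap disks for parallel bandwidth, mirror them to buy back reliability -- fits in a few lines. A back-of-the-envelope sketch; the bandwidth and failure-rate figures are illustrative assumptions, not numbers from the Berkeley paper:

```python
def array_bandwidth(single_disk_mbps: float, n_disks: int) -> float:
    """Ideal aggregate bandwidth of N disks striped in parallel."""
    return single_disk_mbps * n_disks

def mirror_failure_prob(p_disk_fail: float, n_copies: int) -> float:
    """Probability that ALL mirrored copies fail, assuming independent failures."""
    return p_disk_fail ** n_copies

# Illustrative numbers: cheap disks at 10 MB/s with a 5% annual failure rate.
print(array_bandwidth(10.0, 8))        # eight cheap disks striped together
print(mirror_failure_prob(0.05, 1))    # a single cheap disk
print(mirror_failure_prob(0.05, 2))    # a mirrored pair: two orders of magnitude safer
```

The same arithmetic is what makes a RAIC plausible: independent providers in parallel raise aggregate capacity, while redundancy across them drives the probability of total failure down multiplicatively.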
Tue, 29 Sep 2009 10:09:00 -0400
Well, it's the day after the first CloudCamp in Frankfurt. All in all, it was a solid success, I thought. We had roughly 140 people attending, which is a remarkable turn-out for a first 'Camp, and I was quite pleased with the mix of people that we got. One of the strengths of the CloudCamp idea -- perhaps its greatest strength -- is the way it mixes up various "demographic" groups and puts them in chairs next to each other at the same event. Where else do you see startup hackers in clever t-shirts rubbing elbows with executive VPs from Deutsche Bank? We had a lot of that vibe going on last night.

I was also startled (and impressed) by the depth and breadth of knowledge in the audience. Germany gets a less-than-enthusiastic rap from the (largely Anglo-Saxon) cloud computing vendors as a slow-moving, risk-averse market. That stereotype, like any such, may have some kernel of truth in it, but you wouldn't have known it from looking at about half our audience last night. We had a fair number of people there who understood the domain, and had plenty of hands-on experience working with cloud computing. Sure, there was a contingent of newbies, eager to learn and understand, but there were also some serious experts in attendance.

The downside of expert audiences is that they can be quite demanding, and I feel personally like I let that demographic down, somewhat. The Lightning Talks were not what I wanted them to be. We had two problems -- we had too many talks, and we had too many sales pitches. The former problem was a combination of miscalculating things, as well as a beginner's eagerness to pack as much content as possible into the time we had. The latter problem, which in my view was the more serious one, was a consequence of too much trust, and too little oversight. We accepted talks from vendors who were CloudCamp veterans more-or-less sight unseen, and that was a mistake.
Our assumption here was that CloudCamp veterans needed less oversight, because they knew the rules, and could be trusted to abide by them. That didn't work out so well, unfortunately. And it was particularly frustrating to me for two reasons: one, I've been quick to criticise other 'Camps in the past (as an audience member) for failing to get this right, so ha-ha, the joke was on me. And two, we were quite aggressive in turning some people down, or forcing others to modify their talks. By letting the other sales pitches slip through, we betrayed those other people's trust in us, and I feel awful about that.

By all accounts that I heard, the latter phases of the 'Camp (Unpanel and workshops) worked well, and were very successful. I know I sure had a hard time getting the "Business" and "Security and Legal" workshops to wrap up -- those were conversations that could have gone on longer, and that's always a good sign.

Things that I screwed up or otherwise wished had been different:

- I didn't realise in time that Tim Cole wasn't wired up with a microphone. That wasn't a problem for the 'Camp audience itself -- Tim's an excellent speaker, and he did fine projecting without a mike. But the mikes weren't really for the local audience -- they were to ensure that the video footage that we taped of the Lightning Talks had a soundtrack. Because I didn't realise the consequences of not getting Tim wired up in time, we effectively lost the video footage of his excellent talk. Ouch.
- It was too hot. The room we chose for the main talks (which has elsewhere been both criticised and complimented for the oddity of its layout) wasn't well ventilated, and it got brutally warm very quickly. By the middle of the Lightning Talk phase, every time a speaker made a reference to this or that technology or idea in cloud computing being "hot", I cringed. :)
- Not for the first time at a German event, I found myself wishing the audience would leap up and participate mor[...]
Mon, 31 Aug 2009 04:51:10 -0400
I got asked last week to answer the following question, from a journalist looking for some quotable fodder for a piece she is writing:

I'm looking for industry specialists who can write and email to me (succinct) explanations on the difference between cloud computing and traditional outsourcing, and the likely impact of the former on the latter.

I said, "Oh, that's easy, except for the 'succinct' bit." This is how I responded:

Labour arbitrage is a sourcing technology. Like any technology, it is bounded by the context in which it was conceived, and the challenges that it was designed to solve. Cloud computing is also a sourcing technology. But it has emerged in a different context from labour arbitrage, and it is designed to solve different challenges -- some of the differences are obvious, some quite subtle.

Labour arbitrage manifests attributes that reflect both the good it is transacting for and the historical context in which it emerged. Both factors lead it to favour long-term, binding contractual relationships between provider and consumer. Labour is a capital expense, and a costly one. Acquiring it and maintaining it (training and whatnot) require a significant upfront investment -- this investment needs to be recouped, which drives the economic structures of labour arbitrage contracts. It is a scarcity model.

Cloud computing, on the other hand, emerged from a model of abundance -- the core of cloud computing's business model is the exploitation of excess capacity. Finding a way to monetise that excess capacity depended on the emergence of a market for (more or less) packaged services, as opposed to raw labour. As soon as that market -- which clearly exists -- had reached a certain level of maturity, having gone through several precursor cycles (ASP, early SaaS providers), the business models of providers like Amazon and Google became feasible.
Because of its different context, the contractual terms under which transactions in the cloud computing model happen reflect abundance, rather than scarcity. They do not require up-front capital investments, nor do they require long-term contractual commitments. Both of these things may add value to certain kinds of cloud computing transactions, but they are not a requirement for the model to work, as they are in the labour arbitrage model. This enables cloud computing providers to offer a sourcing relationship to consumers that is almost entirely an operating expense, and one which can be precisely fit to the demand curve (elasticity). This enables businesses to achieve the effective equivalent of successful lean manufacturing practices with regard to services -- cloud computing holds the promise of eliminating the equivalent of excess inventory in outsourcing relationships.

This is similar to the idea that there may be a "non-linear" service model, where the cost structures inherent in a labour arbitrage based model don't fit well. See the following, for example:

As we have seen from the explosion of outsourcing, there is a lot of work to be found in this model, but it comes at a price—the need to constantly hire and train ever-larger workforces, and the need to combat rising costs and increasing commoditization and price pressure at the customer interface. Down this path, once you have hired 10,000 people, you now need to keep that machine fed—which means you must find any work you can to feed the machine. It is really hard for such a company to transform itself from a linear, people driven business to one that isn't. Of course it can be done, but it will take sustained, visionary leadership from the top against all sorts of near-term growth pressure to pull it off.

This point of view is meeting with some scepticism and debate within my employe[...]
Tue, 25 Aug 2009 11:52:13 -0400
Great stuff from Stu Charlton, Elastra's Chief Sorcerer. Looks like a talk I would have liked to hear him give...
Sat, 22 Aug 2009 09:55:30 -0400
Nice writeup by Robin Bloor on a project that I am involved with at the Day Job (tm):
So what CSC has done is to build a fully fledged analytics capability that looks at the historical pattern of payments, analyzes the outstanding debt and balances everything – including balancing the cash management across multiple currencies and multiple accounting systems. You need to see the software to appreciate how smart it is, but in the end I guess, it’s just well engineered analytics. The business impact is a cash management capability that is dramatically more efficient than is currently deployed by companies and which rapidly pays for itself, by collapsing the need for short term loans.
Read the rest of Robin's impressions here.
Tue, 18 Aug 2009 09:53:33 -0400
I love zombies.
So I reacted to the following scholarly paper with a little happy dance.
We model a zombie attack, using biological assumptions based on popular zombie movies. We introduce a basic model for zombie infection, determine equilibria and their stability, and illustrate the outcome with numerical solutions. We then refine the model to introduce a latent period of zombification, whereby humans are infected, but not infectious, before becoming undead. We then modify the model to include the effects of possible quarantine or a cure. Finally, we examine the impact of regular, impulsive reductions in the number of zombies and derive conditions under which eradication can occur. We show that only quick, aggressive attacks can stave off the doomsday scenario: the collapse of society as zombies overtake us all.
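The basic model the abstract describes tracks susceptibles (S), zombies (Z) and removed (R), and you can watch the doomsday arrive numerically. A minimal forward-Euler sketch -- the parameter values here are illustrative assumptions of mine, not the ones used in the paper:

```python
def simulate(S0, Z0, R0, beta, alpha, zeta, delta, dt, steps):
    """Integrate the basic SZR system with forward Euler (no birth term)."""
    S, Z, R = float(S0), float(Z0), float(R0)
    for _ in range(steps):
        dS = -beta * S * Z - delta * S                  # infection + background death
        dZ = beta * S * Z + zeta * R - alpha * S * Z    # infection + resurrection - defeated zombies
        dR = delta * S + alpha * S * Z - zeta * R       # deaths + defeated zombies - resurrection
        S, Z, R = S + dS * dt, Z + dZ * dt, R + dR * dt
    return S, Z, R

# With the infection rate (beta) outpacing zombie removal (alpha),
# the susceptible population collapses -- the doomsday scenario.
S, Z, R = simulate(S0=500, Z0=1, R0=0, beta=0.0095, alpha=0.005,
                   zeta=0.0001, delta=0.0001, dt=0.01, steps=10_000)
print(f"after t=100: S={S:.3f}, Z={Z:.1f}, R={R:.1f}")
```

Raising alpha above beta (humans defeating zombies faster than they're bitten) is the only knob in this basic version that averts the collapse, which is essentially the paper's point about quick, aggressive attacks.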
This is just wonderful. Wonderful. Right up there with the discovery of "zombie haiku":
My rigor mortis
Is mainly why I'm slower
And the severed foot
Sun, 16 Aug 2009 11:16:46 -0400
I've attended a number of CloudCamps, both in London and Berlin, and have consistently found them to be extremely high value, productive events. Lots of signal, relatively little noise, and perhaps most importantly, fantastic networking opportunities. So, in the spirit of 'Camps everywhere, of all kinds, I felt it was about time I pitched in and helped organise one local to me.

As anyone who has ever organised an event would have told me (had I been clever enough to ask), it's a helluva lot of work. I started kicking the idea around in the spring of last year, and now, in the fall, it is coming to fruition. Fortunately, I found help -- three co-conspirators, without whom nothing would have ever happened. Hans-Christian Boos and Roland Judas of Arago, Antonio Agudo of CloudAngels and I have been quietly beavering away at getting from idea to reality for a couple of months. A huge shout out is due to the guys from Arago, in particular, who are not only handling a lot of the logistical details, but are also the primary financial sponsors of the event. And the global team of CloudCamp organisers, led by Dave Nielsen, but also the loose team in London that knocks it out of the park on a routine basis, have been invaluably and unflaggingly helpful to us.

So now we're set. We've got a venue, we've got a date, we're ready to go. What we now need are three things:

- More sponsors. We've got some fantastic sponsors lined up, and we're days away from publicly announcing them, and pimping their brands everywhere we can think of. But the nature of these events is such that there are never enough sponsors. So if you or your organisation would be interested in sponsoring us, drop us a line, and we'll give you the details.
- Speakers. Consider this a formal "call for speakers". More informally, bring it on! Anyone can speak at a CloudCamp, not just the sponsors. We choose talks on the basis of perceived relevance and quality. As with any CloudCamp, there is only one rule: no sales pitches. No marketing, no selling. I'm going to be aggressive about this -- I'd rather have fewer talks, but higher quality, than the other way around. There are two categories of talk: Lightning Talks, which last five minutes, and an opening "keynote", which will run 15 minutes. We'll be brutal about holding people to the time limits, again in fine CloudCamp tradition, and be warned: I will cut you off in mid-sentence if need be. If none of that scares you off, and you've got something to say to a Frankfurt crowd interested in cloud -- have at it!
- Publicity. We need to get a buzz going for the event. So I'd be in your debt if you could take a second and link to this blog post, tweet about it, whatever -- pass it around. Anything you can do to help us get the word out will be greatly appreciated.

Below is the text of the form letter we've been spamming potential sponsors and speakers with. At the bottom, you'll find a link to the event, and the contact details of the other organisers and myself. And whether you're a potential sponsor, or speaker, or not -- if you're going to be in the greater Rhein/Main area on 28 September, come on out to CloudCamp. I guarantee it will be time well spent.

The Event

CloudCamp Frankfurt will be a knockout event -- framed by a spectacular venue, and filled with meaty content. As always for a CloudCamp, it will be focussed totally on high tech networking and information exchange, but we also want to add a bit of a framework in order for all participants to get the most out of the event. You will find the lightning talks, unpanel and public plenum you are already used to, as well as a short welcome address and a keynote. But in addition to that, we have defined three "tracks", whi[...]
Thu, 13 Aug 2009 08:51:08 -0400
Susan Scrupski, aka @ITSInsider, has called the 2.0 Adoption Council into being. Taking a casual idea floated one morning over a cup of coffee at the Enterprise 2.0 conference in Boston between Susan, a colleague from Alcatel and myself, and running with it, Susan has almost overnight created one of the most interesting and vibrant groups I've ever had the pleasure of being a part of. Here's her call to action:
Listening to customers during the conference, as well as culling the data that has been coming in from various surveys, I’ve decided the time is right to launch a community for “Internal 2.0 Evangelists.” As I’ve been a 2.0 Evangelist for the broader sector (and I thought my job was difficult), I realized the job of the internal evangelist is far, far more difficult. These folks toggle between fighting the good fight every day and then slipping uneasily into a sort of DMZ where they can peek out into the broader community for support and the rejuvenation they need to go on fighting another day. It’s often a thankless job with no clear roadmap for advancement, yet the majority of them do it because they believe in the principles of the 2.0 movement. I celebrate them!
(that's me, standing on the right of the picture on Susan's blog, and directly to my right, seated, is CSC's Sherri Hartlen-Neely)
There's a public-facing blog, a Facebook page, and a Twitter stream to follow, if you're interested. Members are using some other Enterprise 2.0 tools to communicate and collaborate amongst themselves. Here's a logo collection -- it's already outdated, notably missing my pals from SAP, EMC and many others that have already joined in the few days since Susan made this picture:
I think this is a big deal, and I think it says a lot that CSC is right out in the "leaders" quadrant, so to speak, on this issue. As a friend of mine from SAP put it, upon getting invited to join, "this members list reads like a Who's Who of Enterprise 2.0!" I'm excited by the opportunity to share knowledge with these folks, and learn from them even more, and I'm excited by the opportunity I see for CSC in this space.
Mon, 4 May 2009 17:19:10 -0400
This is just so gobsmacking awesome. TED is just cool -- there's no denying it.
Margaret Wertheim leads a project to re-create the creatures of the coral reefs using a crochet technique invented by a mathematician -- celebrating the amazements of the reef, and deep-diving into the hyperbolic geometry underlying coral creation.
Sun, 12 Apr 2009 05:58:22 -0400
This got a chuckle of recognition / agreement out of me. I don't agree with every word -- for example, only a week to incubate an idea? Get a grip. But "Laugh at perfection. It's boring and keeps you from being done." -- yeah. That's right on my wavelength. In fact, it's the jest underlying the name of this blog.
Fri, 10 Apr 2009 11:58:05 -0400
I blog in more than one place. I write about mostly tech stuff here -- programming languages and cloud computing, for example. On the Enterprise2Open blog, I occasionally write about "soft"-er stuff, like management and enterprisey cultural issues. I usually try to refrain from cross posting: if I've written something for one blog, my feeling is that I should leave it to stand (or fall) in that context. But this one feels worth calling out to the broadest possible audience, to me, so I'm breaking that rule here.
I've written a post on the Enterprise2Open blog called "It's the economy, stupid", where I attack some of the underlying, implicit, and -- in my view -- flat out wrong assumptions that are often (usually?) in play in any given discussion about Enterprise 2.0 initiatives / technologies, and return on investment (ROI). Here's a quote:
If you think that ROI is a matter of getting some numbers into an Excel spreadsheet, then you are probably building a house on a foundation of economic quicksand. It will likely sink, and that would be unfortunate.
If you're reading this blog, and don't read, or aren't aware of my stuff on the E2Open blog, I am hereby suggesting that you give this post a read. As is typical for me, I expect it to annoy a large swath of people. If that includes you, please comment on the E2Open blog (rather than here), and give me the opportunity to annoy you even more? Thanks.
Thu, 26 Mar 2009 10:04:21 -0400
Here's my 5-minute lightning talk from the recent CloudCamp in London.
Fri, 6 Mar 2009 17:16:49 -0500
I love languages. At university, I studied English literature, not some geeky thing like engineering or computer science. For a long time, I thought writing novels was going to be my destiny. At some point, I realised that I don't write human languages well enough to make a living at it, so I suppose I did the next best thing: I found a way to make a living writing in non-human languages (ie. programming languages), and inflicting incredibly long blog posts on you.

I've found that one thing that many language lovers have in common is a deep appreciation for profanity. People who really dig words love to curse. I certainly do. Bad words are the best. I use them appropriately, and not profligately, or merely to shock -- they are useful tools, with an excellent signal to entertainment ratio.

I spent the last two days in Hannover at the big German IT conference, CeBIT. Talking, and listening to other people do the same. On Thursday, I spoke on a panel about cloud computing in the Webciety area of the conference. And I did some cursing.

"Webciety" was the slogan for the theme of this year's CeBIT, and they had the entire rear right corner of Hall 6 dedicated to presenters tailored to the topic. In addition, they set up a stage, and over the course of the week, held a wide variety of public talks, panels and presentations there. The talk on cloud computing was one of these.

The cloud computing panel @ Webciety, as seen from Werner Vogels' mobile

It was a fun talk, if too brief. Tim Cole, the moderator, did a pretty good job. Particularly entertaining was a monitor at our feet, on which we could see the #webciety and #webciety09 hashtag streams on Twitter going by -- this was effectively a public backchannel. Lots of the comments were negative, particularly the German language ones (we were speaking in English). Stuff like "this is nothing new", "IBM's been doing timesharing since the 70's", "Sun invented the 'network is the computer' slogan a decade ago - see how that turned out?" IOW, the usual stuff you get from otherwise well-informed IT people when they're just beginning to get their heads around cloud computing. :P

It was fun to be able to see and react to some of that feedback in real time, and I certainly tried to do so. In one particular instance I missed an opportunity -- a commenter was bemoaning the fact that the panel was humourless, and asked for "more jokes please". As it happened, the conversation at that moment was about the reactions of traditional data centre folk to cloud computing, and I missed my chance to use Pat Kerpan's (CTO of CohesiveFT) excellent crack about "server huggers". :)

However, I did get a chance to go off on a rant about the risks and drawbacks of proprietary styles of PaaS, and I used the recent example of Coghead's demise to illustrate my point. In the process of doing so, I spoke with enthusiasm of Coghead "going tits up", and how that left its customers essentially in the unfortunate position of being "shit out of luck". Now, I believe strongly that both of those usages of (mild) profanity were a) useful and b) true. But I nevertheless managed to shock (and probably annoy) a number of people, apparently.

I think that's funny*. People who get their britches in an uproar over profanity amuse me. So those reactions were fine with me. And it certainly made for a great conversation starter -- I was besieged for the rest of my visit to CeBIT by random passersby, who would suddenly come up to me and say something like "Dude, I saw your talk on cloud computing -- 'tits up!' That was a[...]
Fri, 6 Mar 2009 16:55:31 -0500
I led a workshop on cloud computing at the Future of Web Apps (FOWA) conference in Miami last week. The workshops were all on Monday -- each workshop was a half day affair. The conference proper was on Tuesday. Carsonified, the organisers, paid my way there and back, put me up in the most excellent Mondrian hotel in Miami Beach, and paid for the workshop. As a speaker, I got to go to the conference proper on Tuesday as well.

I'd never been to one of the FOWA conferences before. I actually paid for a ticket to the last FOWA in London in 2008, but at the last minute had a conflict, and couldn't go. So this was my first FOWA. In general, I had a good time, and liked the conference. But the theme that emerged for me was nevertheless quite clear: cross pollination is a good thing, and there is still too little of it in the IT industry.

My workshop

Before I talk about what my impressions of the overall conference were, a few words about my workshop. It was sold out, which not all the workshops were, but that reflects the tremendous level of interest in the topic of cloud computing far more than it says anything about me. Nevertheless, it was a variable in an overall equation that had me concerned:

- My workshop was in the afternoon; IOW, post-lunch
- We were in Miami; IOW, it would be (and was) warm
- The logistics had been unclear. In particular, Wifi for the participants had only been arranged at the last minute. I was forced to assume that workshop participants would have no Internet access, so the workshop would be all show and tell, with no way to directly try things out in the Cloud
- The material is not exactly the stuff of a JJ Abrams TV show
- I am not Steve Yegge; IOW, making dry technical stuff seem as amusing and interesting as stand up comedy is not one of my talents

When I got to the venue, I found my worst fears confirmed: a fairly small room, with no windows, and the air conditioning was holding the room at a tolerable temperature with just me in the room. "Sold out" meant 40 people, which would max out the seating in the room, and add a considerable amount of heat.

So, when you feed all those variables into a formula, what came out was this: I was worried that people would wilt in the afternoon, or even doze off.

That didn't happen. And the AC coped admirably with having 40+ people crammed in such a small space -- the room didn't get unpleasantly warm at all. I got a fantastic crowd, of interested and interesting people, and they not only stayed awake, but we had an engaged and interesting conversation about cloud computing throughout the workshop. It was a successful workshop, and all of the feedback I got, either directly or via Twitter, was positive.

I learned a lot about what web app developers are thinking (and worrying) about regarding cloud computing -- some of what I learned was really great stuff. For example, Pete Bernardo (@petebernardo) shared with us the fascinating nugget that Amazon's DNS host names for EC2 images are blacklisted by big spam services (like Spamhaus). Thus, if you intend to send email from EC2 images, you will almost certainly need to map an Elastic IP address to an external DNS host name of your own, or your mails will vanish into various spam catchers. That's probably a good thing to do anyway, but this problem gives it a certain urgency.
The spam angle makes a lot of sense -- if I were a spammer, I'd be all over EC2 to send my mails. But I had never thought of it before, nor have I ever needed to send emails from my own EC2 images, so it was news to me. [...]
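Pete's tip lends itself to a trivial sanity check. Here's a sketch (the hostname pattern and function name are my own assumptions, based on the default names EC2 was handing out at the time, not anything Amazon documents as a contract): if the host you send mail from still resolves to a default EC2 public DNS name, you probably want an Elastic IP plus a DNS entry of your own first.

```python
import re

# Default EC2 public DNS names looked like this (assumption, 2009-era):
#   ec2-174-129-1-2.compute-1.amazonaws.com
# Mail sent from such a host is likely to be swallowed by blacklists
# like Spamhaus, per the discussion above.
_EC2_DEFAULT_HOST = re.compile(
    r"^ec2-\d{1,3}(-\d{1,3}){3}\.[a-z0-9-]+\.amazonaws\.com$"
)

def needs_elastic_ip(hostname):
    """True if `hostname` is a default EC2 public DNS name, i.e. you
    should map an Elastic IP to a host name of your own before
    sending mail from this machine."""
    return _EC2_DEFAULT_HOST.match(hostname.lower()) is not None
```

So `needs_elastic_ip("ec2-174-129-1-2.compute-1.amazonaws.com")` is true, while a host name you control yourself passes.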
Sun, 22 Feb 2009 21:50:33 -0500
Here are the slides (all 275 of 'em) that I developed for the workshop on cloud computing at FOWA Miami 09. This is introductory material -- so if you're a cloud guru already, there may not be anything new for you to see here; move right along. And it's intended for an audience of web app developers, so there is no enterprisey slant, and it is tech, not business oriented. There is enough material here for 2.5 to 3 solid hours of blabbering, and I still don't feel I did more than scratch the surface. Perhaps cloud computing is like the youthful universe -- it's expanding at an inflationary rate. ;) Anyway, as usual with my slides, there are plenty of speaker's notes, so I suggest viewing it, if it interests you, on SlideShare directly (where you can also download the original deck).
UPDATE: the upload to SlideShare was borked in some way. In trying to fix it, I made things worse (and got the slide notes out of synch with the slides). So I deleted the original version and replaced it with one of the same name. If you had already favorited it or commented or whatever, that was lost as well. I re-uploaded the slides, but the download link is still borked. I've sent a note to SlideShare support -- if and when it gets fixed, I'll update here again. Sorry!
Mon, 9 Feb 2009 05:52:51 -0500
And if some of the stuff I talk about here (Gödel, Russell, Fred Dretske's famous question) is old hat to you, congratulations. I make a point of prodding and poking the people that I meet in Fortune 1000 enterprises who are actually building the systems that run those businesses, and I can assure you -- the result is depressing.
(PS. There's speaker's notes associated with some of the slides in this deck, so if it interests you, I encourage you to click through to SlideShare, and view it (with Notes) there, or download the original PPT file from there.)
Thu, 5 Feb 2009 13:20:54 -0500
This is demented, but funny. I like when the dude clambers out from under the DC floor -- nice touch.
Thu, 5 Feb 2009 11:39:13 -0500
Interesting new community site / aggregator blog from the smart folks behind Social Media Today. They build those sites using Wordframe, AFAIK, and they're slick. Don't know much about Wordframe, but just on the basis of the Social Media LLC sites, it appears to be pretty cool.
Wed, 28 Jan 2009 15:29:57 -0500"Back in the day..." Don't you hate such phrases? I do, or at least, I used to. As I get older, however, I find myself using them more often, strangely enough, and my antipathy is being forced to subside. Here's an example where I would like to apply such a phrase: when I was working at Deutsche Bank, a loooong time ago, we used to snicker, in a self-pitying sort of way, at the acronym "RAID". If you go follow that link, you will see that this is widely accepted to mean "Redundant Array of Independent Disks", but to its credit, the Wikipedia article does acknowledge that the Thought Police of Political Correctness had been by to visit, and makes a passing reference to what the acronym really means: "Redundant Array of Inexpensive Disks". Why does that matter? Well, you need to know what the acronym really means to understand why a handful of pale, geeky sys admins at Deutsche Bank, 15 years ago, were feeling sorry for themselves. See, those pale geeks were responsible, among other things, for uncrating, installing, and taking tender care of any number of RAID boxen. And they got to see the invoices for them, too. And those boxen were anything but "inexpensive", by our standards. They cost more than a month's pay, in many cases. And those pale geeks would have loved to have had one, but there was just no way their wives were going to allow them to spend a month's pay on such a thing. The thing is, though, that from the perspective of an entity like Deutsche Bank, they were inexpensive: dirt cheap, even. So what? Well, therein, methinks, lies the enterprise answer to the apparently insoluble problem of data portability in the cloud. Figure 1 Confused? Well, if so, bear with me. Data portability, with regard to the cloud computing hype wave, is one of those terms that's near the frothing crest -- the bloodiest part of the wave's bleeding edge. 
There are several things that people worry about:

- Lock in: if I store my data with you, will it be stored in such a way that makes it hard for me to take it back and store it somewhere else? Will you even allow me to take it back?
- Data mashups: if you store my data in some particular format, will I be able to easily mix it with data in other formats, probably from other providers?
- Reliability: if I store my data with you, will you guarantee to me that I will always have access to it?
- The laws of physics: what do I do after I have amassed a certain amount of data with you? If I have stored petabytes of data with you, is it even conceivable that I could ever "take it back"? Given the bandwidth of the pipe you provide to my data (and this is limited, if nothing else, by the speed of light), how long would it take me to pull my data back out? Is it reasonable to assume that I would ever be able to pay that price?

And so on. But you know what? I think all of these concerns are just plain silly. I think there's a blindingly obvious way to sidestep every single one of them, and I also think it is the only sensible way for an enterprise to plan on using the cloud. Figured it out? Wait. Here's a picture.

Figure 2

Still not seeing it? No worries -- here's another picture:

Figure 3

Got it now? It's simple, really. The acronym that serves as the title of this post -- RAIC (pronounced "rake") -- stands for "Redundant Array of Independent Cloud providers". And it's a way of using the Cloud that makes most concerns about da[...]
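The RAIC idea is easiest to see in code. Here's a minimal, provider-agnostic sketch (the `put`/`get` interface and the in-memory stub providers are my invention for illustration; real adapters would wrap S3, Nirvanix, and friends): every write goes to all the independent clouds, and a read succeeds as long as any one of them still answers.

```python
class InMemoryCloud:
    """Stand-in for one independent storage provider (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self._blobs = {}
        self.up = True  # flip to False to simulate an outage or a demise

    def put(self, key, data):
        if not self.up:
            raise IOError("%s is down" % self.name)
        self._blobs[key] = data

    def get(self, key):
        if not self.up:
            raise IOError("%s is down" % self.name)
        return self._blobs[key]


class RAIC:
    """Redundant Array of Independent Cloud providers: writes fan out to
    all providers; a read succeeds if *any* provider still has the data."""
    def __init__(self, providers):
        self.providers = providers

    def put(self, key, data):
        written = 0
        for p in self.providers:
            try:
                p.put(key, data)
                written += 1
            except IOError:
                pass  # tolerate a provider being down at write time
        if written == 0:
            raise IOError("no provider accepted the write")

    def get(self, key):
        for p in self.providers:
            try:
                return p.get(key)
            except (IOError, KeyError):
                continue  # that provider is down or lost it; try the next
        raise KeyError(key)


raic = RAIC([InMemoryCloud("a"), InMemoryCloud("b"), InMemoryCloud("c")])
raic.put("policy-42", b"important data")
raic.providers[0].up = False  # provider "a" goes tits up
assert raic.get("policy-42") == b"important data"  # data survives anyway
```

With that shape, a Coghead-style demise stops being an existential question and becomes a degraded-redundancy event: you replace the dead member and re-replicate, just as you would swap a failed disk out of a RAID set.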
Fri, 9 Jan 2009 08:54:45 -0500
I've been accepted to do a workshop on cloud computing at the Future Of Web Apps (FOWA) Miami 09 conference. The title of the workshop is "How to build your app quickly and cheaply using the Cloud". Title and the broad agenda were set by the conference organisers, and now I've got to fill it with life. I like Miami, and it will surely be nicer weather than Germany in February can possibly offer.
By the nature of the agenda I've been given, the general thrust of this workshop will be introductory in nature. I plan to go over some of the more notable options, such as GAE or Heroku, and touch on the pros and cons of the various approaches (particularly the PaaS vs. do it yourself on IaaS debate), but then I intend to work through a concrete example of deploying and managing an app on EC2. I also want to have some time to talk about the issue of cost: that whether using the cloud as a hosting provider makes economic sense (or not) is very context dependent. And I will try to convince everybody that a combination of virtual appliances and SCM is a game changer in a lot of ways -- some of them perhaps not so obvious. Frankly, I also hope to be able to just have a good conversation with the participants about the challenges and benefits of using the cloud.
Obviously, this topic is not only a moving target, it's moving at warp speed, and I will have to keep the plan / content a little flexible -- the workshop I was thinking about doing yesterday is already outdated in the "managing your app in the cloud" department, and tomorrow, it will be some other aspect that is completely changed by some new development. More importantly, I would very much like to tailor the content to the participants, as much as possible. So, if you are planning to attend the workshop, by any chance, please let me know what you are interested in talking about, learning or experimenting with. I will do my best to work any suggestions, questions or feedback I get into the final workshop. Leave a comment here, send me an email, ping me on Twitter, Skype or whatever -- I'm easy to find. ;)
Sun, 4 Jan 2009 06:06:49 -0500
If you (yes, you!) know anything at all about designing, developing, testing or administering software systems, go click that link up above and take Thomas's survey about the relationship between the law (and associated risks) and code. Go on, now.
Sat, 20 Dec 2008 19:29:26 -0500This post has been generating some traffic in the last day or so. Since it appears that many of the people reading it are doing so for the first time, it seems worth pointing out that the content is also available in another format. This may be of interest, or it may not. This slide deck, "Social Processes", is isomorphic to the original post. The following slide deck, OTOH, is a follow up to it. Its purpose is to try and make some of the more abstract ideas in the first deck, well, less so: "Social Processes Part 2 - show me the money".[...]
Sat, 20 Dec 2008 08:32:08 -0500
... then you might like this a lot better. I certainly liked it. ;)
Tue, 16 Dec 2008 16:27:40 -0500Warning: this is a long, complicated post. But, as I've explained before, that's what I think a blog is for. If you want brevity, try Twitter. Caveat emptor. Cloud computing. Buzzword ascendant. And for a good reason -- it's a big deal. I've been studying and thinking about it for the last year and a half or so, and I'm convinced of that, at a minimum -- it's a big deal. But, having said that, as usual, I'd like to get a bit more precise. As with many buzzword ascendancies, this one is fraught with confusion. What is cloud computing, exactly? I got a chance to hang out with a group of really smart people from a wide cross section of industries in October, by taking part in the Leading Edge Forum's (LEF) annual Study Tour. If there was a single source of discontent amongst the attendees (CTOs, CIOs, CSOs, chief architects, and the like), before, during and after the tour, it was that there is no one, simple, precise definition of the term. A different "definition" emerged with each vendor pitch -- indeed, we came to see that the definition sometimes varied on the basis of a given usage scenario. Many people found this frustrating, and I suspect they were not, and are not, atypical in this regard. But I also think there is an essential truth lurking there. The fact that the term "cloud computing" means different things in different contexts is telling. There was a feeling of "If somebody could just explain to me what this stuff is, then I would know if it's of interest to me, or not" amongst many of the other members of the aforementioned group. The problem, however, I think, is that there is more than one type of "cloud computing", and that the type that might interest an enterprisey CTO hasn't really emerged yet -- certainly not to the extent that the other primary type has.
Now, by "type", I explicitly don't mean the whole soup of assorted buzzwords that are tightly correlated with "cloud computing" -- things like SaaS, PaaS, IaaS, and all the other *aaS's that Simon has grown fond of poking fun at. Instead, I mean categories that are distinct from one another in the same way that, say, the category of "the financial services industry" is distinct from "the health care industry". Within these broad categories, many of the details of the different types of cloud computing will be the same (or similar) -- including correlations to their *aaS's. So what types (or categories) am I talking about then? Well, here's where it gets tricky, as these are still emerging, and therefore there is quite a bit of fuzziness with regard to demarcating them. But I do think that there are at least two categories that have become distinct enough to identify and talk about, although as we'll see in the next few sentences, I'm going to struggle with naming them. So what are they? Well, for the lack of a better idea, I'll call them "startup cloud computing" and "enterprise cloud computing". This isn't an original insight -- several people have made the same observation. James Urquhart, recently acquired by Cisco to be their cloud computing Poobah, for example, calls the same categories "scale out" and "enterprise" clouds, riffing of of terminology from Frank Gillett, Forrester's cloud analyst, who called them "scale out" and "server clouds". Whatev[...]
Thu, 11 Sep 2008 08:13:07 -0400
This video, the CERN rap, is painfully silly, and you will probably have seen it already (it's gotten some play in the MSM), but just for posterity's sake...
Wed, 27 Aug 2008 12:26:33 -0400
Generally speaking, I like the FASTForward blog. Some smart people, writing some interesting things. But this post is so wrongheaded, it's terrifying. Taylorism?! Good grief. Real industrial processes abandoned Tayloristic approaches as failures over 20 years ago, and there are some very good templates out there for how to apply what succeeded them to knowledge work.
The real danger I see this as being illustrative of is how easy it is to bend good things to serve evil masters. The road to hell is paved with good intentions.
Tue, 26 Aug 2008 08:15:01 -0400
I sure wish I was going to this event (and I see I missed the communal "blog about it" day by one), but I have neither the time nor the budget for it. I will be following the proceedings with great interest, however, and am looking forward to the results.
Mon, 18 Aug 2008 09:56:25 -0400
... the elevator’s emergency phone system was getting a junk phone call from a robot. The robot was telling the elevator that it “should act now to renew the extended warranty on its car.” We now live in a world where machines are spamming each other.
Emergent behaviour. The world is not deterministic, it is probabilistic. You can plan -- you just can't rely on it.
Mon, 4 Aug 2008 14:02:29 -0400Businesses are challenged by ever increasing degrees of complexity and volumes of data. Simply tidying up is an inadequate response. No matter how many filing cabinets you have, it’s never enough, and making use of information gets no easier. But, no worries: business processes, to the rescue! We’ll a) identify participants, b) flows of information and transactions and c) codify those. This is great! We can ensure that the right information is at the right place, at the right time. But wait! It gets better! We can automate those processes! We can use IT to route information, coordinate work, execute transactions, and make those (now visible) processes way more efficient! Cool! Business transactions will flow smoothly from customers and suppliers, to systems and workers in our organisation, and back. Our coordination costs will drop, causing our transaction costs to drop, and we’ll pocket more money. This is great stuff. Well, OK. Except it doesn’t work. The real world is so much more complex than we thought, when we started trying to model it.  We can’t seem to keep up. We’ve got all this stuff in place, and now all it seems we do all day is manage exceptions to our processes. And that’s now become a new cost of doing business -- one that’s not cheap. Clay Shirky once said: “Process is an embedded reaction to prior stupidity. We are often glad of this, of course; it explains a lot of what's good about the world. Our knives come equipped with handles,” he said, “and ... our programs include dialog boxes that say 'Are you sure you want to casually delete the last 3 hours of work?', all because of lessons learned from prior stupidity. But. But not all stupidity is amenable to deflection by process, and even when it is, the overhead created by process is often not worth the savings in deflected stupidity. 
Stupidity is frequently a one-off, and a process designed to deflect it within an organization actually ends up embedding it as a negative shape. Like the outline of Wile E. Coyote just after he is catapulted through a wall, making everyone fill out The Form Designed to Keep You From Doing The Stupid Thing That One Guy Did Three Years Ago actually emphasizes the sense memory of that stupid thing within the group. CAUTION: The beverage you are about to enjoy is extremely hot." Riffing off of Shirky's remark, Ross Mayfield wrote a piece called "The Death Of Process" almost three years ago -- a piece which achieved, and continues to enjoy, a certain degree of notoriety. He said: Organizations are trapped in a spiral of declining innovation led by the false promise of efficiency. Workers are given firm guidelines and are trained to only draw within them. Managers have the false belief engineered process and hoarding information is a substitute for good leadership. Processes fail and silos persist despite dysfunctional matrices. Executives are so far removed from exceptions and objections that all they get are carefully packaged reports of good news and numbers that reveal the bad when it's too late. John Seely Brown and John Hagel point out that while 95% of IT investme[...]
Sun, 13 Jul 2008 10:38:38 -0400At the recent Google IO conference, Google Fellow Jeff Dean gave a talk about the "inner workings" of Google's data centres. There was a writeup on C-Net -- the talk seems to (re)use some material from the deck available here. This is fascinating stuff. Buried in this stream of data (in the C-Net piece, the money shot, for my purposes here, is literally in the last paragraph) is information about the nature of the architecture of GFS (Google File System) and BigTable, the technologies used by Google to store (and retrieve) data scalably. Arguably, the combination of GFS and BigTable is Google's cloud computing offering -- GAE (Google App Engine) is just one stack that might run on top of it. There's an aspect of this architecture that didn't get a lot of press, and doesn't seem to have registered with a larger audience yet, and I think it should -- if for no other reason than I think it hands Amazon a big fat advantage in European (and possibly Asian) cloud computing markets. In Dean's slides, there's the following statement: "Scheduling system + GFS + BigTable + MapReduce work well within single clusters". Followed by: "Truly global systems to span all our datacenters" and "Global namespace with many replicas of data worldwide". In the C-Net article, he's quoted as saying: "We want our next-generation infrastructure to be a system that runs across a large fraction of our machines rather than separate instances," Dean said. Right now some massive file systems have different names--GFS/Oregon and GFS/Atlanta, for example--but they're meant to be copies of each other. "We want a single namespace," he said. So what's the big deal with that? Well, in a nutshell, European law versus U.S. law. Wildly different understandings of privacy and data protection, coupled with even more wildly different attitudes about government powers, result in a situation ripe for conflict.
Things like the Patriot Act have resulted in a situation where European organisations simply categorically forbid any storage of data in the United States -- and note, for further splitting of hairs, that it's unclear what the "storage" of data really means, and it may be broad enough to include processing of data within U.S. jurisdictions, even if it's persisted elsewhere. There are laws on the books in several European countries (like Germany, where I live) that literally forbid situations like the ones that seem likely to result from Google's architecture. Now, IANAL (I am not a lawyer), and it's possible that I am completely wrong for that reason alone. Perhaps a nuanced reading of European privacy and data protection laws simply makes the apparent problem go away. There's sure one hell of a lot of documentation to read on the subject, more than enough to keep any number of lawyers busy for years, as a quick glance at some of the links I've been squirrelling away on the topic should make evident. When I was at the Enterprise 2.0 conference in Boston in June, there was a session called "An Evening in the Clouds", and during a Q&A session at the end, I asked Google's Jeff Keltner about the issue directly. He kind of[...]
Sun, 6 Jul 2008 09:16:11 -0400Web Science: An Interdisciplinary Approach to Understanding the Web I'm not a big fan of the engineering metaphor for my job. I've long thought that it's much too limiting. The complexity of the systems I work with keeps exceeding the limits of everyone's expectations -- slipping the bonds of "engineering". Every time I hear somebody talking about "repeatability", or "rigour", waxing enthusiastic about some form of formal method, or (auto)generating some system from some abstraction / model, I think of Bertrand Russell. I think of him feverishly slaving away to trap all of the complexity of mathematics into a coherent, logical system. And then I think of Kurt Gödel, proving for all time that there can never be any such thing. Despite that sobering truth, life somehow muddles on, producing systems of astonishing complexity and robustness. Therein lies an important insight, I have long thought -- biological systems are excellent examples of how to build systems in general. That, in turn, ought to mean that biology, not engineering, is a much more fruitful metaphor (if we must have one, and if you'll excuse the awful pun) for designing and building computer-assisted information systems. In my second citation of CACM today (this issue is really quite good!), I've linked above to an excellent article, co-authored by James Hendler, Nigel Shadbolt, Wendy Hall, Tim Berners-Lee and Daniel Weitzner, that resonated with me for many reasons. First off, having just recently attended the Enterprise 2.0 conference in Boston, and been puzzled by the lack of a holistic approach to all of the issues that I think are correlated with that buzzword, I find I'm not the only one to think such things. Second, for reasons that (I hope) will become obvious if you read the piece, my musings about biology vs. engineering are quite relevant to some of the things these authors talk about.
With regard to the first point, Figure One in the article is a great picture of the interdependence of the "web" of the Web's architecture, and would serve as a perfectly adequate A/V aid for the kind of discussion Lee Bryant and I were having on the Headshift blog. With regard to the second point, consider the following quote from the article: Where physical science is commonly regarded as an analytic discipline that aims to find laws that generate or explain observed phenomena, CS (sic. Computer Science) is predominantly (though not exclusively) synthetic, in that formalisms and algorithms are created in order to support specific desired behaviors. Web science deliberately seeks to merge these two paradigms. The Web needs to be studied and understood as a phenomenon but also as something to be engineered for future growth and capabilities. Heh. That description fits the biology of genetics -- say, gene splicing -- like a glove. In fact, in a parallel that draws a hearty "lol" from me, in my current frame of mind, biologists sometimes refer to that field as "genetic engineering". More: At the micro scale, the Web is an infrastructure of artificial languages and protocols; it is a p[...]
Sun, 6 Jul 2008 07:06:58 -0400
Reading the latest issue of CACM today, and really enjoying the new format and content. The article linked above got me to laugh out loud, and I can't recall CACM ever doing that to me before.
It is certainly fair to say that XML is on average more self-describing than other text-based encoding languages, but that is like saying that the average dwarf is taller than the average baby; neither is tall enough to excel at basketball.
Apart from being funny, the article is actually a great "state of the art" overview of XML technologies in general.
Whereas tree trauma (discussed earlier) is a basic strain of XML fever caused by the various flavors of trees in XML technologies, tree tremors are a more serious condition afflicting victims trying to manage data in XML that is not inherently tree-structured. The most common causes are data models requiring nontree graph structures and document models needing overlapping structures.
Amen. Good stuff, worth a read.
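The "tree tremors" complaint is concrete enough to demo. A minimal sketch (the document shape and names are mine, not from the article): a dependency *graph* doesn't fit XML's tree, so the usual escape hatch is ID/IDREF-style links, which your code then has to resolve by hand, because the parser only hands you the tree.

```python
import xml.etree.ElementTree as ET

# A graph (a depends on b and c; b depends on c) flattened into a tree
# plus IDREF-style links -- the standard workaround for non-tree data.
DOC = """
<modules>
  <module id="a"><dep ref="b"/><dep ref="c"/></module>
  <module id="b"><dep ref="c"/></module>
  <module id="c"/>
</modules>
"""

def dependency_graph(xml_text):
    root = ET.fromstring(xml_text)
    # The parse gives us the tree; the *graph* edges live in the ref
    # attributes, which ElementTree won't resolve for us.
    return {
        m.get("id"): [d.get("ref") for d in m.findall("dep")]
        for m in root.findall("module")
    }

graph = dependency_graph(DOC)
assert graph == {"a": ["b", "c"], "b": ["c"], "c": []}
```

The XML validates as a perfectly good tree, but the actual structure of the data is only recoverable by the resolution pass; that gap is precisely the fever the authors are diagnosing.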
Fri, 4 Jul 2008 07:32:03 -0400
Oh, boy, this is good stuff. Samples:
REST proponents would say that programming systems like the Internet is just sociological problem, as the technical problems are all solved by REST. I'm not far from believing in it myself, it's hard to see what is hard about loosely coupled distributed systems that can't be solved using it.
Not sure what's hard to believe. A REST-ful URI is just a lambda name! If you then maintain memory-safety, you now have a secure, distributed lambda calculus.
Besides URIs we have other defining traits in REST (e.g. HATEOAS, uniform interface), some of which aren't well mapped to well known theories. Also nothing says that a distributed lambda calculus is a good fit to distributed systems when compared to distributed Pi or Join.
The actual comment was believing that it's just a sociological problem. IMHO there's some issues that are still open (e.g. service versioning, authentication) and need some engineering work. Don't get me wrong, I've been working solely with REST based architectures this year and I'm not looking back to anything else (to work within the Internet).
Go get more at the link above. The chart alone is worth the price of admission.
This is all so bloody obvious. But as a long time fan of zombie flicks, the tenacity of the undead never fails to surprise me.
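The "a REST-ful URI is just a lambda name" quip from the thread is easy to make literal. A toy sketch (all the names and resources here are mine, purely for illustration): a resource identifier is simply a name bound to a function, and the uniform interface is one application rule for all of them.

```python
# Toy illustration of "a REST-ful URI is just a lambda name": each URI
# names a function; GET is the uniform interface that applies it.
RESOURCES = {
    "/orders/42": lambda: {"id": 42, "status": "shipped"},
    "/orders/42/status": lambda: "shipped",
}

def get(uri):
    """The uniform interface: dereference the name, apply the lambda."""
    try:
        return RESOURCES[uri]()
    except KeyError:
        return "404 Not Found"

assert get("/orders/42/status") == "shipped"
assert get("/nope") == "404 Not Found"
```

Of course, as the commenters note, this captures only naming and the uniform interface; HATEOAS, versioning and authentication don't fall out of the lambda-calculus reading so neatly.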
Fri, 4 Jul 2008 04:51:01 -0400Been playing with identi.ca since it officially opened to the unwashed masses yesterday. I find identi.ca very interesting, compared to its obvious rival, Twitter. First off, identi.ca is open source. That means we have all kinds of transparency into how it works, which will allow us to help it succeed (we'd be happy to help Twitter, too, but they follow another path, young Jedi). Second, it's designed from the ground up to be federated (although this part of the code seems to still be very rough). This is a huge difference, and makes Laconica (the open source platform that identi.ca is running on) very cloud friendly. Which, in turn, solves the scalability problems that have so plagued Twitter. Interestingly, as far as I can tell, identi.ca is, in fact, running on Amazon EC2 (which would have been my suggestion, had anybody asked ;)). It's still not clear how the MySQL database in the background is architected, and I'd sure like to find out, but this thing is already light years ahead of Twitter's design (as far as we can tell). Third, it's designed from the ground up with plenty of XMPP (Jabber) goodness. Since it's only the second day of its official life, you would expect there to still be lots of deficiencies in the overall platform, and there are some -- replies are at the top of that list, and Evan has let us know they're at the top of his list, too. But, despite that, all of this openness and the good design lead to an interesting, and virtuous, consequence: a lot of identi.ca's weaknesses don't really matter, because you can build around them. Thus, I had managed to cobble together a complete desktop experience for identi.ca that features a reasonably attractive "client" tool, and full-blown "ambient intimacy", within a matter of hours after signing up yesterday. Here's how it works, for the curious.
First off, let's list the requirements / goals (I am an architect, people, come on):

- I want an easy-to-use, dedicated, and ideally, visually appealing client, in which I can read my identi.ca stream and post updates
- I want full blown "ambient intimacy", complete with semi-transparent popup windows
- Because Twitter is still where this particular social graph lives, for me, I want to be able to seamlessly cross post to both services
- Non-requirement: I don't need all of that integrated into my Twitter client (Twhirl) (although I wouldn't object). I can live with a third little window on my screen (alongside Twitter and Friendfeed, running in Twhirl), if all the other requirements are met

My setup achieves these goals. NOTE: this setup is peculiar to OS X on a Mac, and uses features of that platform that are unique to it. It may be possible to replicate this kind of a setup on Windoze, or (more likely) Linux, but that's not my problem. ;)

Step One: Setup identi.ca for instant messaging (IM)

Under the "Settings" link for registered identi.ca users is a tab for "IM". Set that up as shown below

Step Two: Rig your IM service up to identi.ca

I use GTalk as the ex[...]
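The cross posting goal is the one code-shaped piece of this, and it's cheap precisely because Laconica deliberately mimics Twitter's REST API. A sketch of the idea (the endpoint URLs are from memory and should be treated as assumptions; actually POSTing them, with the HTTP Basic auth both services used at the time, is left to your HTTP client of choice):

```python
# Laconica mimicked Twitter's REST API, so one request shape serves both.
# The exact endpoint URLs below are assumptions, not gospel.
SERVICES = {
    "twitter": "http://twitter.com/statuses/update.json",
    "identica": "http://identi.ca/api/statuses/update.json",
}

def build_update_requests(status, services=SERVICES):
    """Return one (url, form-data) pair per service for a cross post."""
    if len(status) > 140:
        raise ValueError("both services cap updates at 140 characters")
    return [(url, {"status": status}) for url in services.values()]

reqs = build_update_requests("Playing with identi.ca -- so far, so good.")
assert len(reqs) == 2  # one update request per service, identical payloads
```

The API compatibility is exactly the kind of "build around the weaknesses" openness argued for above: any Twitter client or script that lets you swap the base URL gets identi.ca support almost for free.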
Mon, 23 Jun 2008 07:07:21 -0400
Tom Rogerson, CTO of Financial Services for CSC in EMEA, gets interviewed here for ComputerWeekly, and says all the right things about where The Artist Formerly Known As Prince IT needs to go. Good stuff, and worth a listen. The money shot -- that IT people need to change their default setting from "No, but..." to "Yes, and..." -- ties in very neatly with Lee Bryant's post about Enterprise 2.0 on the Headshift blog.
Wed, 18 Jun 2008 10:46:30 -0400I attended the Enterprise 2.0 conference in Boston last week, and it was time well spent. I was fortunate enough to be there all week, and thus had plenty of time to both take in the scheduled programme, as well as the various out-of-band, Twitter, back-channel, hallway / lobby / lunch / dinner / drinks conversations. At one point, somebody more clever than I remarked that the latter was the "metaconference", and I really like that metaphor. Hence the title of this post. I intend to talk here, as concisely as possible, about the metaconference, and of the people I met and the things I learned. I'll then do separate posts to talk about specific sessions or topics. Conference Structure The conference was set up so that you didn't pre-register for the sessions you wanted to attend -- instead, you made up your agenda as you went along. I'm not entirely sure if that was a deliberate act on the part of the conference planners, or simply a consequence of circumstances, but it worked surprisingly well. I was sceptical of this, at first, and there was the obvious downside: they never knew, ahead of time, how many people were going to show up for a given session. That presents logistical challenges, and they were visible -- the coping mechanism was "overflow" rooms, equipped with video screens, which was better than being excluded completely, but not particularly satisfying, particularly in sessions where questions were / could be asked, etc. Despite the obvious drawbacks, the system worked, overall. And it was conducive to the kind of ad-hoc decision making that I prefer, myself, so I actually found it quite pleasant. Infrastructure The wireless LAN was catastrophically bad, and it was a constant distraction and irritation. 
Everybody complained about it, and both the organisers and the hotel were certainly aware of it (some, but not all, of us got an amusing letter from the hotel's GM, along with some fruit and a bottle of water, as a kind of consolation prize), but there seemed to be nothing to be done about it -- it was bad at the start, and never got to a "good" level, despite obviously fevered efforts. Shrug. Bad publicity for the hotel, and for Sprint, the ISP. Opportunity lost.

Themes

I found two main themes present at the conference: social software and cloud computing. The former was the prevalent one, which is fair enough, given Andrew McAfee's definition of the term "Enterprise 2.0". Cloud computing got "sold", in a comment at one of the sessions, as an "enabler" for social software -- I believe it was Google who said that. Which is also fair enough, if limiting. But I've talked about the intersections of these themes before, and I won't dwell on them here. Apart from those explicit themes (some of which I hope to address in some detail in later posts), there were also two implicit themes (or perhaps memes) that struck me. First, there was a lack of consensus about what the term "enterprise" actually denotes. [...]
Fri, 6 Jun 2008 10:09:41 -0400

I attended the Enterprise 2.0 Summit at CeBIT in Hannover last March. It was great, and it's kind of embarrassing that I'm just now getting around to mentioning it. It was a fairly small conference, but the speakers were almost uniformly stellar, so the signal-to-noise ratio was about as good as one can hope to experience at such an event. I met a number of people in carbon (not just the speakers, but attendees as well) with whom I either had a passing acquaintance or had been following from virtual afar, by reading their blogs or following their Twitter feeds and whatnot -- Dion Hinchcliffe, Euan Semple, Craig Cmehil and Simon Wardley, for example.

Among the people I met who were new to me, the absolute standout was Jenny Ambrozek. In the context of the conference, Simon described Jenny, at one point, as "one smart cookie", and I can only confirm that. +1. Jenny gave a talk entitled "Structural Holes & the Space between the Tools", and it just left me fired up, frankly. The reason for that is fairly simple -- stated rather plainly in Jenny's talk was the outline of some real hard-core theory that not only explained the intuitive appeal of social networking, but also posited ways of using networks to entrepreneurial advantage (where the term "entrepreneurial" has a quite broad meaning, as much sociological as economic). So why did this excite me? Well, herein I heard an explanation both for why social networking (and the software that supports it) is interesting, and for why these ideas might be useful and profitable. In other words, I heard the "why" of Enterprise 2.0. In much of the existing literature about the term, the primary focus seems to be on defining what it is, and less on the why.
What is said about the why seemed (and seems) to me to be superficial, and full of assumptions, many of them a priori, or, at best, based on the extrapolation of anecdotal evidence (always a shaky proposition). It's not that I disagree with these descriptions or assumptions -- like many people, I find them appealing and interesting on their surface merits alone. On the other hand, when it comes to actually doing this stuff -- implementing a strategy that explicitly takes these ideas into account -- I've found that the fairly superficial level of discourse hasn't been that helpful: it doesn't provide the degree of structure and guidance that I would like to have. One of the things (perhaps the primary thing) that I dislike about "SOA" is a similar lack of theoretical rigour. Every conversation about "SOA" begins and ends with talking about what the term means. Moreover, the lack of clear meaning has allowed every vendor to define it as they see fit, and a consensus has emerged only fitfully, slowly, and painfully (and, arguably, too late, as the ongoing REST-vs.-SOA wars (a misguided effort, but an understandable one) attest). The clearest indic[...]
Tue, 3 Jun 2008 12:06:23 -0400
Sam Lawrence on ways of encoding reputation, and why that might be a good thing.
I've speculated about this before -- in fact, I think it would be a killer attribute for SOA, and it is therefore much more broadly interesting than Sam suggests. To wit:
One of the under-represented aspects of the natural needs of a service-oriented environment is credibility. In the ideal SOA world, your component goes out, "into the wild", searching for a service implementation that matches a specific interface and provides certain information. What does your component do if it finds multiple implementations, each of which meets all of your other selection criteria, such as performance, cost, completeness, whatever? Under such circumstances, you'd need to make a judgement based on something quite similar (if not identical) to credibility in human relationships -- what is the reputation of service X compared to service Y? Who do you believe?
So let's play the scenario out -- how would our theoretical agent/component, in some futuristic SOA environment, deal with such fuzzy choices? I think one possible valid solution would be to provide a mechanism to "change our minds". By that, I mean the agent would need to be able to do something along the lines of the following:
- Evaluate the various offerings from the various available services
- After filtering on the "objective" criteria (method signature, QoS promises, etc.), if there are still multiple choices, apply "subjective" criteria, such as reputation, degree of satisfaction with past performance, and so on.
- If there is still no distinct choice at this point, decide at random, AND (and here is the critical bit) "remember" the alternatives in some persistent way
- If, at some later point in time, we become dissatisfied with the answer we received from the service we selected, we would invoke a kind of exception handling/rollback sort of mechanism, and "change our mind" -- we switch to the alternative service.
Note that, to really model halfway human behaviour here, we'd need some sort of polling mechanism as well, in that last step -- we'd need a way to "keep an eye on" the alternatives, as one possible motivation for "changing our mind" might be as simple as one of the alternatives suddenly offering a superior solution.
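The steps above can be sketched in code. This is a minimal, purely illustrative sketch: the `ServiceChooser` class, the service names, and the reputation scores are all invented for this post -- no real SOA middleware or registry works this way out of the box -- but it shows the shape of "filter, score, tie-break at random, remember, change your mind":

```python
import random

class ServiceChooser:
    """Hypothetical agent that selects among candidate services and can
    later 'change its mind' -- an illustration, not a real framework."""

    def __init__(self, reputation):
        # reputation: invented per-service scores in [0.0, 1.0]
        self.reputation = reputation
        self.current = None
        self.alternatives = []  # remembered, persistently, for later

    def choose(self, candidates):
        # 1. Filter on "objective" criteria (stubbed as a simple flag here;
        #    in reality this would cover method signature, QoS promises, etc.)
        viable = [c for c in candidates if c["meets_contract"]]
        # 2. Apply "subjective" criteria: keep only the best-reputed services
        best = max(self.reputation.get(c["name"], 0.0) for c in viable)
        top = [c for c in viable
               if self.reputation.get(c["name"], 0.0) == best]
        # 3. Still tied? Decide at random, AND remember the alternatives
        self.current = random.choice(top)
        self.alternatives = [c for c in viable if c is not self.current]
        return self.current

    def change_mind(self):
        # 4. Dissatisfied with the answer? Roll back and switch to the
        #    best-reputed remembered alternative.
        if self.alternatives:
            self.current = max(
                self.alternatives,
                key=lambda c: self.reputation.get(c["name"], 0.0))
        return self.current

# Hypothetical candidates: two equally reputable services force the
# random tie-break; a later change of mind switches to the runner-up.
chooser = ServiceChooser({"svc-a": 0.9, "svc-b": 0.9, "svc-c": 0.2})
first = chooser.choose([
    {"name": "svc-a", "meets_contract": True},
    {"name": "svc-b", "meets_contract": True},
    {"name": "svc-c", "meets_contract": True},
])
second = chooser.change_mind()
```

The polling mechanism from the last step would sit outside this sketch: a periodic task that re-scores the remembered alternatives and triggers `change_mind` when one of them pulls ahead.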
Mon, 5 May 2008 15:09:24 -0400
A buzz of blog activity, whose trail of breadcrumbs I found via one of my favourite folk, John Mettraux, led me to some deeply cool stuff. Here's a List O' Links:
So what's the fuss about? WfXML-R, which is a codification of the WfMC's standard interfaces and entities as RESTful resources. This is great stuff, but then, I would think that. It's a moving target, still unfinished -- in particular, the tricky bits about access to a work item, and how to use it to make state changes, are still unspecified. But that doesn't mean this isn't (potentially) the coolest BPM-related stuff I've seen in the past half year.
Tue, 26 Feb 2008 10:14:36 -0500

Politics are important and unavoidable. Many, if not most, (sane) people find politics unpleasant, and struggle, often in vain, to avoid contact with the topic. As architects in large enterprises, politics makes up a non-trivial part of our jobs -- no way for us to dodge that particular bullet. One of the classic works on systems architecture, The Art of Systems Architecting (Maier, Rechtin), devotes an entire chapter to the subject.

Let's define "politics". Wikipedia says: Politics consists of "social relations involving authority or power" and refers to the regulation of a political unit, and to the methods and tactics used to formulate and apply policy. As we can see from this definition (and its references), politics is about power and decision making.

As I write this, I'm engaged in an interesting discussion, behind the firewall, with other architects in my organisation, about the skills, talents, mindset and knowledge needed to be an "architect". We're debating reading lists and soft skills, and talking about things like SWEBOK and its equivalents. It's a good conversation, and I'm enjoying it. I hope that something will emerge from it that will help us to improve how we develop and recruit architects in the future. It's a very frank, open, and honest conversation, where we're being candid about ourselves and our own weaknesses -- IOW, the best kind of conversation there is. But that's not the point of this post.

This post was triggered by something that was said in the course of that conversation (which is being carried on via e-mail in a fairly small group) -- "we should probably be doing this on the wiki". :-O I had two immediate reactions to that. One was -- "Yeah! Let's do that! Great idea!" Here's why: our conversation is a great depiction of experienced, talented, thoughtful architects (well, with the obvious exception of myself) doing their job -- thinking deeply about issues that are important to my organisation.
Of course that should be in the wiki -- I immediately imagined a bright, ambitious kid somewhere in Chennai, herself right on the cusp of making the jump to something more than merely cutting code, reading our dialogue, and lightbulbs of insight popping like firecrackers in her mind. What a great thing that would be! If those folk could be flies on the wall during our conversation, how much more useful (above and beyond its immediate goals) our conversation might be! Brilliant. Let's start copying and pasting it into a thread in the wiki right now... But as my hand moved to the mouse to do just that, my second reaction kicked in, and I hesitated. My second reaction was all about, and entirely motivated by, politics. Recall the definition of politics from the start of this post, and then consider that I said we were having "a very frank, open, and honest conversation, where[...]
Sun, 24 Feb 2008 13:32:40 -0500
Microsoft and marketing. A long, and mostly sad, story. The stuff behind that link isn't going to change that, but it is funny. I'm a Mark Steele fan now. And the kids' book is brilliant:
Big people have a server at the office.
The office is a boring place where big people go and do boring things.
Offices are why big people get grumpy, and say bad words.
The product itself (Windows Home Server) is a dubious one, IMHO. And a sense of humour is tough to reconcile with The Borg. Nevertheless, if you've got nothing better to do...
Tue, 22 Jan 2008 16:00:45 -0500
The reports to the contrary are exaggerated. When I started this blog back in 2005, I often went a month or longer between posts -- nobody noticed. But somewhere along the line, I seem to have picked up a few readers, and I've literally had people coming up to me in the hallway at work with comments like "Is there something wrong with your blog? Is it broken? There're no new posts!" Ha! So, I suppose I either provide a reliable source of amusement ("Look, Masterson's made a complete ass of himself again! Woot!"), or I occasionally say something useful. I will assume it's the former and aspire to the latter -- sometime soon. Promise.
Mon, 24 Dec 2007 10:59:17 -0500
To all the readers of this blog, faithful and demented as you may, in turn, be, may you have the best of all possible holidays, should you happen to be celebrating any. Peace, joy, fulfillment -- these are the things I wish for us all. Cheers, Mark
Mon, 24 Dec 2007 07:14:56 -0500

IFTF's Future Now: Human evolution is speeding up

OK, and now to complete an "Alex triple play", here's a post from the IFTF's Future Now blog, pointing to this piece from the Guardian. Very interesting. Let's accept these conclusions as fact, for the sake of argument. Now, let's consider this recent post from Ed Yourdon. Quote:

And so it is today with social networks. It doesn’t matter which ones you belong to; the point is that, to increasing degree over the next few years, if you adamantly and noisily refuse to participate in any of them, an entire generation of people who do use these networks will conclude: you’re irrelevant. They won’t bother trying to convince you or persuade you; they won’t object, protest, march, or complain loudly. They’ll simply ignore you. It’s okay with them — and if it’s okay with you, then everyone is happy. But if you wonder why fewer and fewer people are paying attention to you, there’s a reason … I began to notice this a few weeks ago when I started sending out Dopplr invitations to friends and business colleagues — mostly of my own middle-aged generation — whom I would enjoy meeting up with while on out-of-town trips. Thus far, roughly one-third of the people I’ve invited to join Dopplr (which, of course, is free) have accepted; but two-thirds have simply ignored the invitation. One of them said to me, in person, “I don’t know what this is, and I don’t know why I would want to use such a service — and besides, it looks too complicated.” To which my response is simply a shrug: you’ve just become irrelevant. As a result, I find myself slowly building a new network of friends, colleagues, and acquaintances … and slowly leaving behind a much larger network of friends, colleagues, and acquaintances I’ve built up over the past 40 years of my adult life.
It’s not that I dislike any of my old friends and colleagues … but it’s almost as if they’ve consciously chosen not to have an email address, not to have a cell phone, and not to have a fax number. Hey, that’s fine; Western Union and the Pony Express are out of business, but if I have to write a snail-mail letter to communicate with my old friends, I guess I can do it once or twice a year. But in the meantime, there’s a younger generation that’s learning how to communicate, collaborate, share ideas, and keep track of each other’s travel plans, and day-to-day activities through a variety of new networks.

So, what sort of evolutionary selection pressure is being exerted on us by the overall acceleration and expansion of our intellectual horizons that socia[...]