2006-10-30T12:49:15.300-05:00

I’m a strong believer in the ‘On-Demand’ model of software, also called ‘Software-as-a-Service’ (SaaS). When I see articles titled “IT Execs To Vendors: Your Software Stinks”, it only increases my belief that ‘On-Demand’ might be a great way to give your traditional tool vendors a kick in the SaaS and accelerate your implementation of ITSM at the same time.
2006-07-29T23:06:47.526-05:00

Starting your ITIL journey with a very complex, usually expensive, lengthy and often invasive technology-based initiative may only serve to increase the divide between IT silos and, more importantly, between IT and the business. In a similar vein, too much focus on process may simply lead to more policy and procedure manuals that sit on a shelf.

The problem with Change, Configuration and CMDB implementations is that they do not really enable a real-time connection between IT staff, and between IT and the business, which tends to perpetuate vicious cycles of tribal warfare.

“When people lack connection to others, they are unable to test the veracity of their own views, whether in the give or take of casual conversation or in more formal deliberation. Without such an opportunity, people are more likely to be swayed by their worse impulses….” - Robert Putnam (2000) Bowling Alone: The Collapse and Revival of American Community, New York: Simon and Schuster: 288-290

In Bowling Alone, “Putnam warns that our stock of social capital - the very fabric of our connections with each other, has plummeted, impoverishing our lives and communities … we sign fewer petitions, belong to fewer organizations that meet, know our neighbors less, meet with friends less frequently, and even socialize with our families less often. We're even bowling alone.”

The focus on Process (BPM, ITIL, CobiT, et al.) and Products (read CMDB, SOA, et al.) by IT leads me to believe we’re talking more than ever – but sometimes communicating even less than ever before. I like the concept of blogging so much I’ve found myself actually Blogging Alone! (Personally, I’d rather bowl alone than blog alone, so please visit my blog!) The hype around the CMDB can have a similar effect on your ITIL implementation.

It’s the People

It’s the social networks that really make things happen in most companies, not those dusty old policies and procedures. It is the network of people-to-people commitments that often makes things go (or not go). So, when looking to embark on a ‘quality journey’, remember that at the end of the day it’s the people --- and that intricate social network of commitments --- that are often the ‘current state process’, and that people may fiercely protect this tribal knowledge.

Process, Products and Paradigm Shifts

In a recent webinar, more people were familiar with the CMDB than with ITIL (see EMA’s webinar: CMDB Adoption in the Real World - Just How Real Is It?), which was interesting considering that the CMDB is very much an ITIL term. Just shows you what market opportunity will do to reality. Getting your IT staff to achieve the paradigm shift to a services orientation is going to require people skills more than anything else, and your selection of tools --- particularly early in the journey --- can significantly impact how people react to the implementation of IT service management.

Services, Stakeholders and Real-Time Analytics

Stakeholder and services targeting is a fundamental best practice that is often ignored or skipped as customers try to “accelerate” implementation. This often means the implementation of ITIL considers the business from afar, rather than as part of a cross-functional team. While this may provide an easier path to get the ball rolling, at some point the business had better become part of the team. Process- and commitment-based stakeholder analysis leveraging both business and IT tracks can ensure that all stakeholders are included and services are understood from the customer’s perspective.

Starting with the end in mind assumes IT truly understands the business process, when sometimes that process is not that well understood even by the business! It also drives participatory decision techniques, which are successful more than 80% of the time. In addition, Product-led ITIL implementations are likely to focus on the technology, particularly when the supplier is also driving process improvement activities. (The ITIL literature has spoken at great length on this subje[...]
2006-07-05T18:02:10.360-05:00

Have you heard the good news? The 800-pound gorillas have formed an 'alliance' in order to provide interoperability between their respective CMDBs. Of course, I thought OASIS~DCML had been working on that, but I admittedly couldn't tell you technical folks squat about OASIS~DCML or what the hell our 800-pound friends are up to... maybe somebody who can translate the geek-speak into a language we can all understand will help us... as for me, I've got some serious deja vu going on.

But perhaps more importantly, if you're implementing --- or want to implement --- IT service management best practice based on ITIL, how does this impact your Road Map? Should you shout hallelujah and just trust your 800-pound gorilla of choice to provide interoperability someday, as promised? (If you do, I have a bridge I'd like to sell you.) No, there are other (safer) options available to you. You KNOW that you must start with an analysis of your current processes first, so don't even think about tools until you've completed this step. However, if you've analyzed your processes and believe that automation (via a CMDB tool) is in order, consider this:

1) The CMDB, like any tool, must automate your processes based on 'Where You Are Today'
2) The CMDB, like any tool, must provide a clear business case
3) The CMDB, like any tool, should create value for the organization QUICKLY

Of course, even if you decide you want a CMDB you'll have to understand and define those nasty relationships between CIs (which really means at least some degree of SLM --- ok, ok, so we buy an SLM tool, right? NOT)... and how about the fact that (according to IDC, et al.) most of the savings attributed to IT service management seem to be focused on more effective and efficient problem isolation & diagnosis (see Building an ITIL Business Case?... Slow & Steady Wins the Race). Finally, ask yourself: how long will it really take to achieve a CMDB as ITIL defines it? (See some interesting discussion at the ITIL skeptic.)

While it's hard to question the staying power of 800-pound gorillas, there are some tenacious little badgers in the forest that can really help focus your Journey on the Right Path without holding you hostage waiting for interoperability nirvana. One of these is one you've heard about from me many times, as I'm a former customer and 'true believer': a small firm called eG Innovations. This company has spent less time hyping ITIL and CMDB and much more time keeping their eye on the effective & efficient problem isolation ball. Quite simply, the software leverages a patented data-flow and dependency-based correlation logic that enables them to monitor what is happening at every layer of every component of an end-to-end business service, and automatically identify which layer of which component is the source of a problem. No rules to write, no code, no kidding!

Just as important, they do this across 75 major applications and platforms out of the box. I mean, out-of-the-box --- up and running in less than a day. They call this 'service monitoring'. I call it 'A Good Way to Achieve the Paradigm Shift Required for Higher Levels of ITIL Process Maturity While Establishing a Foundation for a True CMDB and Letting the Gorillas Know That a Vision Won't Put Fires Out'. OK, I admit it's not a very catchy marketing slogan, but you get the point, don't you?

You can wait for the 800-pound gorillas' Vision to become reality, you can Trust them now (and hope for the best), or you can focus on other areas that bring value (immediately) and gradually build the foundation of knowledge you'll need anyway to populate those beasts. So get on the Right Road, but keep those headlights on![...]
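Those "nasty relationships between CIs" are the heart of what a CMDB has to capture. As a minimal, hypothetical sketch (all CI names and the `CMDB` class are illustrative, not from any vendor's product), the idea is a dependency graph you can walk upward to see which services a failing component impacts:

```python
# Hypothetical sketch of CI relationships in a CMDB: a dependency graph
# plus an "impact" query. All names here are invented for illustration.
from collections import defaultdict

class CMDB:
    def __init__(self):
        self.depends_on = defaultdict(set)  # CI -> CIs it depends on
        self.supports = defaultdict(set)    # CI -> CIs that depend on it

    def add_dependency(self, ci, dependency):
        self.depends_on[ci].add(dependency)
        self.supports[dependency].add(ci)

    def impacted_by(self, failed_ci):
        """Walk the graph upward from a failing CI to find every CI
        (and ultimately every business service) it affects."""
        impacted, stack = set(), [failed_ci]
        while stack:
            for parent in self.supports[stack.pop()]:
                if parent not in impacted:
                    impacted.add(parent)
                    stack.append(parent)
        return impacted

cmdb = CMDB()
cmdb.add_dependency("OrderEntry service", "web01")
cmdb.add_dependency("OrderEntry service", "app01")
cmdb.add_dependency("app01", "db01")
cmdb.add_dependency("web01", "lan-switch-3")

print(sorted(cmdb.impacted_by("db01")))  # ['OrderEntry service', 'app01']
```

Even this toy version hints at why the real thing is hard: the value is entirely in keeping those relationships accurate, which is exactly the SLM-flavored work mentioned above.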
2006-10-30T12:26:34.890-05:00

Somebody asked me about "Quality of Experience" (QoE) recently. While those in technical silos may offer a brilliant dissertation on abstract polymorphic interfaces in clients and servers (see Button, Button, Who's Got The Button? Patterns for breaking client/server relationships, by Robert Martin) or even Distributed QoE, my answer is quite a bit simpler. In my experience, users will (usually) let you know if their experience is poor. You don't like the program, you change the channel. You get bad response from a web site, you shop somewhere else. So, of course you want to know what your users are experiencing!

However, different IT tribes will want to measure QoE for different reasons. Many want to be warned of a storm brewing, and be well prepared to explain why their tribe is not the source of the problem (or fix it before they call). I call this the "it's the other bastard's fault" motivator. QoE is absolutely consistent with best practice. However, when investing in QoE technologies one should be careful about who is defining QoE (i.e., QoE of what services?). It should be the customer (read: business process).

Two things worth considering before putting your budget dollars on the line:

1) Defining 'end-to-end' - Citrix access services is NOT a business service. It may be a critical segment of an end-to-end business service, but it is rarely the entire service. So, having 'end-to-end' knowledge of the Citrix servers right to the desktop is great --- but most business service infrastructures have a dizzying array of network devices, web servers, application servers, database servers and applications. 'End-to-end' means every layer of every component required to support a business process.

2) What are you prepared to do? - So you took the plunge and purchased a QoE tool. Now that you've been warned (pray that your investment will warn you of an impending storm; otherwise your user could have told you --- for free), how will you isolate and diagnose the problem? Nice to know it's not in the Citrix server or the client, but then where is it?

This is where analytics come into the picture (see Analytics & IT Service Management on this blog), and things can get really complicated. However, it's good to focus on this objective: the key to effective business service monitoring is the ability to monitor what is happening at each layer of the infrastructure --- across an array of distributed network, system and application components --- and automatically identify which component layer, in which domain, is the source of a problem.

QoE decisions, like many technology investments, can be tribally driven. This is particularly true if the organization has not invested the time to understand and define 'what is a service' and performed some due diligence in analyzing processes. Some IT tribes will display true leadership and go beyond their comfort zones by incorporating other technical silos into the equation, but I suspect this is going to be difficult for many. Taking an approach driven by best practices (such as ITIL) can help avoid the angst associated with knowing with absolute certainty where the problem isn't, but not knowing where the problem is.[...]
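The layered root-cause objective described above can be sketched in a few lines. This is a deliberately naive illustration, not any vendor's algorithm: it assumes you already have a health check for each layer of each component, and that components can be ordered upstream-first so a failure early in the chain explains the downstream symptoms it cascades into.

```python
# Naive sketch of layered root-cause isolation. The topology, layer
# names and health data are all hypothetical, for illustration only.

# Components in dependency order: a failure in an earlier component
# typically cascades into the ones after it.
TOPOLOGY = ["network", "database", "app-server", "web-server", "client"]
LAYERS = ["hardware", "os", "process", "application"]

def root_cause(health):
    """health maps (component, layer) -> True (ok) / False (failing).
    Return the first failing (component, layer) in dependency order,
    i.e. the most upstream failure, as the likely root cause."""
    for component in TOPOLOGY:
        for layer in LAYERS:
            if not health.get((component, layer), True):
                return (component, layer)
    return None  # everything healthy

# Example: the database OS is unhealthy, which also makes the app
# server's application layer look bad. Only the upstream failure
# is reported, instead of alarming on every downstream symptom.
health = {("database", "os"): False, ("app-server", "application"): False}
print(root_cause(health))  # ('database', 'os')
```

Real correlation engines obviously do far more (they learn the dependencies rather than hard-coding a list), but the point stands: without some model of which layer sits behind which, a QoE alarm only tells you that somebody, somewhere, is unhappy.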
2006-02-13T21:15:13.090-05:00

I saw a number of recent articles and webinars using the term 'analytics'. One of them, 'Analytics' buzzword needs careful definition, by Jeremy Kirk, IDG News Service, 02/13/06, you may have seen in the recent NetworkWorld Newsletter.
2006-01-27T12:12:12.083-05:00

It's no accident that the evolution to SOA and the adoption of ITIL are happening at the same time. IT Service Management is, as the name indicates, about Services. If you're evolving to Service Oriented Architectures (SOA), then adopting IT service management based on the IT Infrastructure Library is simply common sense. The supporting infrastructures enabling SOA are typically n-tier --- web front ends, application servers, database servers, etc.
2006-01-23T10:38:27.920-05:00

Monitoring their IT infrastructures has historically been an afterthought for organizations; priority has always centered on application development and deployment. Hence the monitoring industry has traditionally lagged a little behind as technologies evolve and the landscape of application development and delivery shifts. In recent years an emerging disconnect has opened up in the market between how applications are designed to work and how they are being monitored.

Let's take it from the beginning. In the early days, applications were relatively simple. Twenty years back, there were mainframes and clients, so if there was a problem it was very easy to locate: it was either in the mainframe - which affected everyone - or in the client. There were two relatively easy pieces to monitor. Most of the legacy players in the monitoring industry today evolved at this stage.

Later came the networking era, where networks became a lot more complex and problems at the network level became an industry nightmare. At this stage every single problem was blamed on the network, and most of the time it turned out to be true. So many tools cropped up especially to deal with monitoring networks and isolating issues at the network level. Over a period of time networks stabilized as the networking technology improved. But the fiasco of the early days left such an indelible mark on the industry that even today, in most organizations, the network department is really a secretive cult and no one outside of it gets to know its internals. The legacy monitoring players took time to get the network piece right, but they eventually solved the network puzzle to a reasonable extent. So the market now has a set of key players who can do client/server, legacy and network monitoring well.

By the time this played out, the technology in the application development and rendering landscape had moved on... to n-tier architectures. N-tier architectures provide extreme flexibility, portability and scalability to application services. The IT industry has embraced the n-tier distributed architecture for its effectiveness and cost efficiency, and it is the preferred architecture for the omnipresent web services. An unattractive side effect of the n-tier architecture is that it introduces an amazing amount of complexity into the delivery infrastructure. Now multiple applications, written in multiple languages, running on multiple pieces of hardware must co-exist for the service to be effective. Due to the interdependency, any small issue on one of these tiers tends to have a big impact on the service in a cascading effect. This, coupled with the complex nature of the systems, makes the process of isolating and identifying issues within the system a nightmare.

The solution put forward by legacy monitoring players to this problem is silo monitoring… effectively a tool to monitor every tier. In this model, for a simple web service you would have three different tools monitoring three different tiers (web, app, db). These tools are strong in their own domains and need a domain expert to run them. When there is a problem in the overall service, you have different tools run by different domain experts who need to be brought together to identify what the root cause is and what needs to be fixed. Since there is no transparency across the layers, most of these meetings turn into an exercise in the blame game. People tend to get defensive about their tier, and it takes an extraordinarily long time to isolate even the simplest problems in this model. Hence the approach of monitoring n-tier architectures by monitoring every tier individually, as proposed by legacy monitoring players, doesn't work. This is the primary reason for the chaos in service delivery, and it affects the quality of service delivery even for Fortune 500 companies.

The right way to do this is to monitor the entire service as a single atomic unit instead of individua[...]
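The "single atomic unit" idea can be made concrete with a toy sketch. Here one monitor owns every tier's check and emits one service-level verdict naming the failing tier, instead of three silo tools emitting three unrelated alarms. The tier names, check functions and the `ServiceMonitor` class are all hypothetical, purely for illustration:

```python
# Toy sketch of monitoring a service as one unit rather than per silo.
# All names and checks are invented; real checks would probe the tiers.

def check_web():
    return True            # e.g. HTTP probe of the web tier

def check_app():
    return True            # e.g. app-server health endpoint

def check_db():
    return False           # simulate a database-tier failure

class ServiceMonitor:
    def __init__(self, name, tier_checks):
        self.name = name
        self.tier_checks = tier_checks  # ordered dict: tier -> check fn

    def status(self):
        """One verdict for the whole service, naming any failing tiers,
        so no cross-team meeting is needed just to find the culprit."""
        failing = [tier for tier, check in self.tier_checks.items()
                   if not check()]
        if not failing:
            return f"{self.name}: OK"
        return f"{self.name}: DEGRADED (failing tiers: {', '.join(failing)})"

shop = ServiceMonitor("web-shop",
                      {"web": check_web, "app": check_app, "db": check_db})
print(shop.status())  # web-shop: DEGRADED (failing tiers: db)
```

The contrast with silo monitoring is the point: the web and app experts never get pulled into a blame-game meeting, because the service-level view already says which tier is at fault.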