
Virtually Insane?





Updated: 2018-03-02T15:56:33.292+00:00

 



VMlover is moving...

2010-03-24T12:28:43.642+00:00

Hi all, I am currently in the process of migrating this blog to WordPress on a private host.

Apologies for any delays during the transition; I am hoping the move to WordPress will give you, the reader, an all-round better experience.

Hopefully the domain will transfer in the next 24-48 hours (or years...).



Life on the other side

2010-03-10T21:10:30.761+00:00

The ex-CEO of Sun Microsystems has just started his blog with a bang...

http://jonathanischwartz.wordpress.com/

A good read, and I'm hoping a book will appear soon...



Now for something Cloud related...

2010-03-10T22:13:57.849+00:00

This post is short and sweet....

After another week of Cloud news it strikes me that Cloud is the new black. The industry is full of large software corporations that have made their owners and shareholders extremely rich on 20-30 years of client-server business strategy, a million miles away from Cloud, and most of them are now pinning the future of their business on the Cloud model.

This has started to sum one thing up for me: I think ISVs want to host services in the Cloud so they can do what humans do and learn from their mistakes, ensuring these episodes from history never darken their door again, by:
  • Having complete, centralised, big-brother monitoring of software usage, with no stone left unturned on how organisations consume the offered cloud services - meaning no more true-ups and no more EAs or GAs operated on a trust basis,
  • Being able to change software licensing models at the drop of a hat (a bit like your gas/electricity bill),
  • Designing applications without excessive, long development cycles, so potentially under-developed software gets out to the end user sooner and starts earning ahead of competitors, with that software amendable at later points in time,
  • Killing the VAR/SI relationships for quicker POs and to reduce the internal ISV overhead of relationship management.
So there you go... I've made some predictions; fail they may, but I am more than sure people will start to realise this indirectly in some shape or form.



Home NAS - Iomega IX4-200d

2010-03-09T09:13:11.192+00:00

I had the great pleasure of getting my hands on a StorCenter IX4-200d monster NAS box, and I can safely say it has been a massively beneficial piece of kit for my day-to-day home storage needs. Here is my review and feedback on what I think is a great product for the home and for anyone wanting to do some good home lab testing.

Previously I ran VMs from a USB drive with Workstation 7, with about 3 individual SATA disks homed in my local PC for other storage such as iTunes, MP3s, movies and pictures. This worried me: it wasn't RAIDed, it was certainly starting to run low on free space, and there is only so long you can spend running ESX in a VM on WS7. So I was looking for a NAS solution that would be cost effective, fit the bill functionality-wise and provide good all-round storage space. Originally I'd dabbled with the lower-end IX2-200d model and thought maybe it would suffice, but on reflection an investment in a 1TB NAS (with RAID of course) would only cover the first 6-12 months; over time, as my lab grew, I would probably end up wanting more space. So while the IX4-200d was initially oversized at 4TB (2.7TB usable with RAID 5), I know it will keep me more than satisfied for future storage needs.

Key selling points and benefits I've found with the IX4-200d are as follows:
  • Small form factor - it looked MASSIVE on the Iomega site and I imagined it arriving like an EMC Celerra would! However I was pleasantly surprised when I got it out of the box (which was rather small too, I might add) that it was no more than about 10x10 inches, which is excellent for storing in my small lab :)
  • Setup - setting this up was a breeze; I was literally up and running in about 10 minutes. It is RAIDed from the factory (and additionally supports RAID 1, 5 and 10), ready to go pre-packaged. So easy, and to be honest this is the way it should be.
  • Protocol/application support - Iomega is an EMC company, and all Iomega StorCenter devices are fully HCL compatible with vSphere/ESX when using iSCSI and NFS, which is great as I use this for my home lab (see the next section on this). DLNA support is also available, so I can stream movies to my DLNA-compatible TV and PS3 - again excellent for playing music and browsing my camera photos.
  • Speed - Time Machine backups ran at about 20-25MB/s, and general VM traffic across NFS is really quick. Copying files to and from the box is more than acceptable speed-wise, and vSphere VMs run and clone quite happily at an acceptable speed.
  • File migration - a cool feature is being able to connect my original USB disk and copy it onto the NAS without copying files across the network. I can also use the USB drive across a CIFS share - all very good timesavers.
  • LED screen - this is native to the IX4 and not available on other versions; you get a nice little interface showing storage volume capacity and other details like the IP address of the box.

As said, I use this at home for my home lab, in which I have recently purchased an HP ML110 for lab testing, and this works very well with the IX4-200d. Coupled with good vSphere kit, the IX is a great piece of kit for consolidating both your home media storage needs and home lab needs onto one easy-to-use and easy-to-manage NAS. It has block and file level storage capability, which is great for playing with both types of storage protocol, and overall, when used with 1Gb networking, it is better than any extremely beefed-up PC with Workstation 7.
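As a side note, the usable-capacity figure above is easy to sanity check: with RAID 5 one disk's worth of capacity goes to parity, and a vendor "terabyte" (10^12 bytes) shrinks once expressed in binary units. A minimal sketch of that arithmetic, assuming four 1TB drives as shipped in the IX4-200d:

    def raid5_usable_tib(disks: int, disk_tb: float) -> float:
        """Rough usable capacity of a RAID-5 set in TiB (binary terabytes).

        Assumes one disk's worth of capacity is lost to parity and that the
        vendor quotes decimal terabytes (10**12 bytes) per disk.
        """
        usable_bytes = (disks - 1) * disk_tb * 10**12
        return usable_bytes / 2**40  # convert to TiB

    # Four 1TB drives, as in the IX4-200d
    print(f"{raid5_usable_tib(4, 1.0):.2f} TiB usable")  # ~2.73, matching the ~2.7TB figure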
Overall I am very pleased. I am a namby-pamby hands-off architect in my day job, but using this gives me everything I would typically get when using vSphere day to day or within a lab environment back in the office.

Product feedback requests

Now for the bad bit... what the IX4-200d doesn't do for me:
  • Mozy support - I use Mozy backup and it only supports Mozy backup when the IX is connected t[...]



Application efficiency in the Cloud

2010-02-09T22:06:09.586+00:00

As an afterthought after visiting a few recent CloudCamps and talking to some very bleeding-edge and clever individuals who develop applications and object-based storage for the cloud, I felt compelled to blog my thoughts on this subject.

It appears that the mainstream adoption of Public cloud is starting to change the mindset on how developers write code for web-based applications; certain disciplines in the dev world really are changing how they architect applications. Another news item that perhaps confirms my thinking is Facebook recently releasing details of new PHP code that they claim reduces CPU overhead by 50% compared with their legacy code.

This is great news. Developers who are building applications using compute within the abstraction layer of the Cloud now seem to be moving away from designing lazy, unoptimised code that hogged as much infrastructure resource as it could take, didn't scale horizontally because it relied on hardware that scaled up (such as the good old mainframe), and was heavily underutilised whenever it wasn't running its associated task.

So why do I think this is happening? Well, it's simple... COST VISIBILITY! Apologies for the caps, but I feel the direct, instantaneous metered costs that apply to any typical public IaaS or PaaS model are driving this optimisation. Early adopters of cloud are starting to realise they have a real incentive to save cost with an optimised platform that doesn't need uber amounts of compute resource and is therefore cheaper.
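To illustrate the incentive, here is a minimal sketch of how metered billing makes optimisation visible in money terms; the hourly rate and instance counts are hypothetical numbers chosen for illustration, not real provider pricing:

    def monthly_compute_cost(instances: int, hourly_rate: float, hours: int = 730) -> float:
        """Metered IaaS bill for a month: you pay for every instance-hour you run."""
        return instances * hourly_rate * hours

    # Hypothetical figures: an unoptimised app needing 10 instances vs. an
    # optimised one (say, after a 50% CPU reduction) needing 5.
    before = monthly_compute_cost(10, 0.10)
    after = monthly_compute_cost(5, 0.10)
    print(f"before: ${before:.0f}/month, after: ${after:.0f}/month, saving: ${before - after:.0f}")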

Additionally, at a higher strategic level, I certainly think cost visibility increases the further you go into the Cloud. The diagram below shows how the insourced model of IT has very low cost visibility (in most organisations), against the transition into outsourced/managed-service IT that is billed to the client in monthly periods, all the way through to Cloud-hosted IT components that have instantaneous cost visibility to consumers.


Summary 

So that's my summarised thought. You might think I am being naive, but I do believe we may be starting to see the growth of more cost-conscious, optimised applications, for the better of both App and Infrastructure environments, rather than the legacy world where the two have constantly acted like oil on water.



Cloud Overbooking - Part 2

2010-01-28T20:01:45.291+00:00

Following last week's post, which covered where I think algorithms similar to those used in the airline overbooking model may be needed within a Public cloud provider such as EC2, this second part provides some predictions on where similar software and associated modules may start to arise within the world of Cloud providers.

As stated in Part 1, I work for an airline. This doesn't mean this post will include industry secrets, but what I will provide is a comparison of the technologies used to make the overbooking strategy work.

The cloud revenue calculation thingy

Within the airline industry today there are various commercially available software technologies that calculate what an airline can make from different seating strategies on certain key flights. Don't ask how it does it, but the companies that have designed this are certainly not short of a bob or two, meaning it is very niche, very clever (and it works).

Comparing this to the world of Public clouds, I think we may see software ecosystems arise just as they have within the reservation and booking world. A couple of collated thoughts on what may or may not emerge within the future state of cloud computing:
  • Third parties selling software to Public cloud providers to calculate optimal times or prices to charge customers - or do Amazon already do this?
  • If cloud is going to provide fluidity and flexibility in the way, say, your home electricity does, will we see variable seasonal or peak pricing emerge once the Cloud becomes more heavily adopted and resource becomes scarce? (A minimal sketch of what such pricing could look like appears at the end of this post.)
  • Will we see larger customers obtaining first-class citizenship in a multi-tenant environment and receiving higher weighting and priority when resources become scarce, the same as airlines do in similar ways for frequent flyers? Or will a model exist where they are exposed and at greater risk, more like economy travellers in the overbooking model, paying smaller prices for services but running the risk of being bumped off the core underlying cloud service?

I am only speculating here; it's difficult to know what really goes on inside public cloud business plans, but it may start to become more apparent as people transition from conventional outsourced models into cloud-based environments.

Screen scrapers

Just another crazy thought to leave you with, completely separate from overbooking, regarding the potential role of screen scrapers in "Cloud commerce". In the reservation world, screen scrapers play havoc with travel industry websites if they are not controlled. In a nutshell, a screen scraper is a third party scraping, say, an airline booking site to scour for the best deal. If not controlled correctly, scrapers play havoc with the underlying ecommerce environment because they consume transactional capacity, and the experience of the real human end user using the website directly suffers. Screen scrapers can work in an airline's favour, though: some airlines have agreements with third parties to "scrape", and some have partnerships with third parties who provide indirect services. So within the world of Cloud services, are we going to see an influx of parties screen scraping big players like EC2 and draining their ecommerce portals?
Imagine hundreds upon hundreds of screen scrapers scouring the main portals to see if EC2 has a good price, suitable AMIs, suitable SLAs (don't laugh), and many other characteristics. It would degrade the end-user service and potentially steer customers to competitors... Just more crazy thoughts to leave you with. That's all folks, until next time [...]
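As promised above, a minimal sketch of what seasonal or scarcity-driven pricing could look like if a provider adopted it; the base rate, multipliers and thresholds are entirely hypothetical and bear no relation to any real provider's model:

    def spot_price(base_rate: float, utilisation: float, peak_season: bool) -> float:
        """Toy scarcity-driven pricing: the fuller the cloud, the dearer the hour.

        base_rate   - provider's normal hourly rate (hypothetical)
        utilisation - fraction of provider capacity currently in use (0.0-1.0)
        peak_season - seasonal uplift, as airlines apply to holiday periods
        """
        scarcity_multiplier = 1.0 + max(0.0, utilisation - 0.7) * 3.0  # ramp up past 70% full
        seasonal_multiplier = 1.25 if peak_season else 1.0
        return base_rate * scarcity_multiplier * seasonal_multiplier

    print(spot_price(0.10, utilisation=0.30, peak_season=False))  # quiet period: base rate
    print(spot_price(0.10, utilisation=0.95, peak_season=True))   # busy peak: sharply higher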



Cloud Overbooking - Part 1

2010-01-19T22:08:36.141+00:00

Now for something cloud related, as I haven't waffled on about cloud for a while. This two-part series (it got too long for one post) is about oversubscription, or over-allocation strategy, within the public cloud world. In this first part I will use the current airline reservation overbooking strategy as an example of where similar algorithms may start to be needed to calculate workload allocation in a typical open Public Cloud provider. This post was also super-charged by an excellent post on what the blogosphere classes as the difference between capacity oversubscription and over-capacity models within the Amazon EC2 service.

So, ever been bumped up or bumped off?

No, this isn't a question about your mafia status; I am talking about flight bookings. As you may have noticed from the "about me" section, I currently work for an airline, so I will use some of the (small) knowledge I have gained on how the oversubscription model works in our world. It is a well-known fact that the airline industry is one of a number of industries that "overbook" certain flights. See this definition for the full gory detail of how the process works behind the scenes, but in a nutshell it is an algorithm used by the travel industry to work towards achieving full capacity on certain flights by taking more upfront purchases than there are seats available in the reservation system. Overbooking tends to affect the entry-level economy passenger who is paying less for his seat and is likely to be less of a regular customer. Lastly, overbooked passengers are all covered by compensation in various shapes and forms, such as a seat on the next available flight or a sum of cash that makes them happy.

Having read that brief detail on how the overbooking model works, I am beginning to think we are going to see an overbooking or oversubscription type strategy needing to be adopted within Public Clouds. To justify my comparison: simplistic marketing from Public cloud companies states that you can buy a workload in EC2 and assume it will provide you with the compute and networking you would get if hosting on premise. Based on that, in a shared multi-tenant public cloud, do you think the same rules could apply to the allocation models of cloud workloads?

Rate of change of public cloud a problem?

Public Cloud adoption is happening at a very fast rate. In future I assume public cloud providers such as EC2 are going to start hitting massive problems with not being able to satisfy large volumes of customer requirements, and I also predict that public cloud is certainly not capable of concurrently serving every single customer that has ever laid eyes on a public cloud Virtual Machine. Therefore I believe that, to succeed, Public Cloud providers are going to need to look seriously at the level of service they can offer and design an algorithm similar to the one airlines have developed within the overbooking model. Remember, you are not always guaranteed to get the seat on a plane that you want, but most customers are happy to take compensation in return.
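To make the comparison concrete, here is a minimal sketch of the textbook overbooking calculation, assuming each booked customer independently shows up with some probability (a simple binomial model); airlines' real revenue-management systems are far more sophisticated, and every number here is purely illustrative:

    from math import comb

    def expected_bumps(bookings: int, seats: int, show_up_prob: float) -> float:
        """Expected number of customers bumped if `bookings` are accepted for `seats`.

        Assumes each booking shows up independently with probability `show_up_prob`
        (a binomial model) - the same style of reasoning a cloud provider could
        apply to oversubscribed capacity, where a "seat" is a unit of compute.
        """
        expected = 0.0
        for k in range(seats + 1, bookings + 1):  # outcomes where demand exceeds capacity
            p_k = comb(bookings, k) * show_up_prob**k * (1 - show_up_prob)**(bookings - k)
            expected += (k - seats) * p_k
        return expected

    # Accepting 105 bookings for 100 seats with a 90% show-up rate
    print(f"expected bumped customers: {expected_bumps(105, 100, 0.9):.3f}")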
Interestingly, the compensation from a public cloud provider is unlikely to be high if you fail to get the workload you require...

Summary

I admit that this comparison between Cloud providers and airline reservations is quite a cynical view, but to put it into perspective: EC2, and any other public cloud provider that struggles to control who is able to buy a workload and who wants to use a workload, is going to hit massive PR and customer-relations problems, just as an airline does when it unfortunately overbooks a flight by 20-30 economy passengers.

In Part Two I delve into various areas and technologies that exist today in the airline reservation world [...]



Abstraction....love it or hate it?

2010-01-10T17:38:37.925+00:00

So I am an Infrastructure guy surrounded by massive volumes of technology that operates above the server, storage and networking environments to achieve certain goals. Some of that technology includes a level of abstraction (or virtualisation) to provide a level of "ease", making portability and migration easier between the lower-level and upper-level components - for example VMware server virtualisation or SAN virtualisation arrays - and we also have lower-level components we just don't notice, like proprietary volume managers and file systems.

Unfortunately, despite being full of such glorious technology, the industry is still plagued by the absence of easy migration and flexible movement capability. By this I mean some of the following examples that I hear and see day to day in the industry:
  • Replicating/cloning from one SAN array vendor to another,
  • Migrating a VMware VMDK file format to an alternative hypervisor vendor such as Microsoft with VHD,
  • Migrating from one NAS vendor to another NAS vendor,
  • Running a J2EE workload between two middleware stacks, i.e. WebLogic and GlassFish,
  • Restoring backup jobs from one vendor onto another (or even within the same vendor).

Fortunately, for the above common examples you do have some technical options. For the SAN replication problem, for instance, you can use a SAN virtualisation appliance like a NetApp V-Series or an IBM SVC; however, you need one such appliance at both the target and the source. For starters this is expensive, you also have support issues with the underlying storage array vendor, and various other potential issues may arise, all to achieve what is merely copying data from A to B (maybe not quite that simplistic, but I'm a simple guy, remember).

To address cross-hypervisor migration, the server virtualisation industry has the Open Virtualisation Format (.OVF). As server virtualisation is more my bag, I will use this example for the rest of this post. With OVF the industry has got together to build a standard for portability of Virtual Machines. Bear in mind, before you run off shouting EUREKA, this isn't live migration: it requires downtime because it only works on cold migrations, which is still a pain in the rear. However, it means you can in theory move from one virtualisation vendor to another using OVF export/import capability.

Unfortunately, even with functionality that lets us address the common interop problems in the datacentre - replication between different array vendors, migration between hypervisor vendors - we humans are never satisfied, and the most common groan I hear is that these solutions never provide 100% of the confidence and functionality that the original abstraction layer did; for example, with OVF you can't migrate "a la" vMotion style between hypervisors. They also add large volumes of overhead and support: you need someone to operationally support VMware, someone to operationally support the SAN virtualisation arrays, and so on, and they can end up costing more if you do not do a complete TCO analysis of the solution being acquired to address the problem in the first place. This leads me to the conclusion, even with my limited knowledge of such topics, that unless large volumes of innovation and collaboration occur, we are never likely to see technology or initiatives where we, the customer or end user, get this portability without the overhead penalty.
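As a back-of-the-envelope illustration of the TCO point, here is a minimal sketch that totals the extra cost an added abstraction appliance brings over its lifetime; every figure in it is a hypothetical placeholder, not a real quote or list price:

    def abstraction_layer_tco(appliance_cost: float, annual_support: float,
                              admin_days_per_year: float, day_rate: float,
                              years: int = 3) -> float:
        """Cost of adding one abstraction appliance (e.g. a SAN virtualisation layer):
        purchase, vendor support, and the human overhead of operating one more layer.
        All inputs are hypothetical placeholders for illustration only."""
        return appliance_cost + years * (annual_support + admin_days_per_year * day_rate)

    # Two appliances (one at the source site, one at the target), illustrative numbers only
    pair_cost = 2 * abstraction_layer_tco(appliance_cost=50_000, annual_support=8_000,
                                          admin_days_per_year=20, day_rate=500)
    print(f"3-year cost of the extra layer: ~{pair_cost:,.0f}")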
Being the cynic I am, I am seriously starting to think that the solutions addressing these problems are merely another level of abstraction, pushing the problem higher up the stack for something else to deal with; additionally, the increase in layers means - yes, you got it - more support and TCO cost. So, based on my theory, let's look below at how much abstrac[...]



Goodbye 2009...hello 2010!

2009-12-28T17:29:20.187+00:00

Well, what a year, and what a turbulent ride 2009 has been for anyone in the technology sector: job cuts and threats, diminishing sales and revenue for our beloved vendors/resellers (meaning fewer free lunches at VMworld) and general doom and gloom whichever way you looked, with increased prices for all-important things such as laptops, storage, iPods and... I mean... bare essentials such as food, drink and clothes.

After a rough ride I feel I have come out moderately OK and had a successful year. I started this blog in January and really felt it wouldn't go anywhere; I've tried blogs before and, to be honest, had problems writing content that would grab a good audience, let alone bring sensible comments to blog posts (as in someone not offering to enlarge one's genitalia or claim a lottery win in Lagos).

2009 has certainly been an interesting year. I have achieved the following and hope to improve upon it in 2010:
  • Managed to hit approximately 20,000 visitors on the blog, which if you'd predicted to me last year I'd have laughed at you,
  • Made a post at least (work dependent) 1-2 times a week,
  • Had a few Planet V12n top-five mentions, which was very cool - thanks for these Duncan, it was really encouraging - and have also been linked on other websites as an external resource,
  • Had the good fortune to meet a lot of interesting people in the blogosphere who have commented on my content and at least said it's good (to my face anyway :)),
  • Passed and upgraded to VCP4! Great, as I can now focus on moving forward and away from practical-based qualifications until ESX5 (I hope).

So onto 2010 and some general New Year's resolutions for me on the blog:
  • I will publish less purely technical content and more content based on strategy and methodology; this will be more the case in 2010 as I'm not hands-on now in my day-to-day role,
  • I left massive volumes of content out of the blog in 2009 because I just didn't feel it was worth the bother of posting; this has been an irritant when I then see someone else post the same thing and get kudos on their own blog... I hopefully will not do this as much in 2010,
  • I will move off Blogger! The CMS is just a frigging nightmare and not worth wasting hours of my year in 2010 on... this may additionally bring a rebrand for VMlover.com; I have decided that for the blog to go up a level I need a rebrand so I can reach a wider target audience,
  • I may try to bring in external knowledge from industry players and thought leaders with guest content; this will depend on whether I get interest, so if you are interested do comment on this post.

So I wish you and your family a very Merry Christmas and a happy, prosperous New Year, and keep posted on the blog, as your presence and comments are gladly appreciated.

Daniel[...]



PHD Virtual - breaking into banking!

2009-12-17T21:53:42.766+00:00

Thought I'd share an interesting news item I found on the PHD Virtual website about a recent success story for the PHD guys: http://www.phdvirtual.com/company/press-releases/142-esxpress-ramatically-reduces-seattle-financial-groups-vmware-recovery-time

You may have read my review of esXpress 3.6 back in August; my opinion is that they are certainly focused on resolving general backup pains and problem areas and are serious about backup technology and methodology. If you get time over the festive period I seriously encourage you to have a look at the fully usable product demo.

This customer success story certainly shows that their products are capable of providing enterprise-level backup and holding their own against large-scale backup demands. Look out for more great product developments in the core esXpress product set in 2010, and I wish them continued success for the year ahead.



Views on Automated storage tiering

2009-12-14T20:00:54.358+00:00

In light of EMC's introduction of its latest AST (Automated Storage Tiering) solution (funnily enough named FAST), here are some quick, easy-to-read predictions for you to rip apart in the comments section of this post.

Array design

I believe storage arrays and disk layouts will be designed and planned a hell of a lot differently than they are today. Functional requirements will be taken into account less and less when planning storage for an application or planning array deployments from scratch. With the introduction of automatic array tiering into the mainstream, I see design considerations being based more on overall disk capacity requirements and the potential oversubscription limits that can be achieved, and on ensuring that the relevant SLAs based on workload characteristics are guaranteed, i.e. "between the hours of 10-12 this batch process will get X amount of IOPS".

I relate this to planning the deployment of workloads into a VMware farm that uses shared hardware resources. The "farm" or "pool" of storage will become the norm, with the storage admin's responsibility shifting towards calculating and understanding the running capacity and available expansion space on the array, while the AST algorithm calculates and reports back where workloads can be moved. (It is safe to say, though, that AST is going to stay in manual mode for even the bravest of storage admins for a while.)

Know thy workload

In most storage environments today we plan and over-allocate for the worst-case IOPS or MB/s requirement of the workload, and any issues that arise get dealt with reactively. If it does what it says on the marketed tin, AST will make this type of planning irrelevant: we won't need to know the workload, it will move it for us.

If only limited performance metrics on the application profile are available in the first place (which I expect), then AST-enabled arrays will provide the option to monitor post-deployment, with the peace of mind that you can migrate with no downtime to the running workload. Advanced tiering therefore gives the administrator a greater opportunity to turn reactive issues into proactive scenarios, by providing greater visibility of what the application actually does (and whether the vendor is lying about its requirements). Additionally, I expect (and hope) vendors with AST functionality will provide tools that give expected "before and after" results of moving storage that doesn't sit on an AST-enabled array into an AST-enabled environment. I also expect to see an onslaught of third-party software companies providing such a facility (if they don't already).

Adoption rate

In my opinion the latest incarnation of FAST from EMC will most certainly not be deployed and adopted aggressively within storage infrastructures for a while; the feature is not back-ported into Enginuity for DMX3/4, so only the rare customer who has bought a VMAX or CX-4 recently will be among the few capable of implementing this technology.
Additionally, FAST isn't free; expect to see people keep their hands in their pockets until the technology has been proven.

Higher storage tier standbys

SSDs are an expensive purchase, and AST introduces the possibility of sharing SSD between workloads (if planned appropriately); you may be in a position to oversubscribe the use of SSD between applications. If an app that runs during the day needs grunt in the shape of IOPS, it can share the same disk pool with an app that requires throughput out of hours.

Virtualisation and AST

Expect to see AST benefiting from and working heavily at the API level with VMware, [...]
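To illustrate the kind of decision an AST algorithm automates, here is a minimal sketch of a naive tier-placement policy driven by observed IOPS; the tier names and thresholds are hypothetical and bear no relation to how FAST actually works:

    # Hypothetical tiers with made-up IOPS-per-GB ceilings, cheapest first
    TIERS = [("SATA", 0.05), ("FC 15k", 0.25), ("SSD", 1.0)]

    def place_workload(observed_iops: float, size_gb: float) -> str:
        """Pick the cheapest tier whose IOPS-per-GB ceiling still covers the
        workload's observed demand - a toy stand-in for an AST placement decision."""
        demand_per_gb = observed_iops / size_gb
        for tier, iops_per_gb in TIERS:
            if iops_per_gb >= demand_per_gb:
                return tier
        return TIERS[-1][0]  # nothing covers it: fall back to the fastest tier

    print(place_workload(observed_iops=50, size_gb=500))    # 0.1 IOPS/GB -> "FC 15k"
    print(place_workload(observed_iops=4000, size_gb=500))  # 8 IOPS/GB  -> "SSD" (fallback)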



Exchange 2010 - Infinite Instance Storage

2009-12-11T16:27:07.775+00:00

It has been a while since my last post; I've been busy on a lot of fronts - revising for my VCP4 exam (which I passed :)), working heavily on projects at work, and a holiday. After my hectic month I've now had the chance to catch up with the latest Exchange 2010 product changes and felt compelled to post on what I discovered: it appears that Microsoft has removed Single Instance Storage functionality from Exchange 2010.

Single Instance Storage

Introduced in Exchange 4.0, SIS (or deduplication) ensures that an attachment emailed to multiple people is stored only once as a single master file, rather than a copy being stored in every recipient's mailbox; from an overall Exchange database perspective, multiple sent copies consume only the storage of a single file.

You were probably as dumbstruck as I was when reading about the end of life of SIS in the latest Microsoft material, and you probably thought exactly what I did: that there must be a new type of SIS, or a new fandangled name for it that improves upon SIS, or even a completely new architecture that saves on storage consumption altogether. Well, it appears it was neither of those. I dug deeper and came up with a blog post confirming that it is now completely EOL.

It seems that, like me, readers of the official Microsoft blog have great concerns about the architectural changes and the side effects in a typical large-scale environment of implementing 2010 compared with 2007, along with concerns about what operational problems it will lead to day to day. To be frank, Microsoft sound a bit blase when justifying why they have removed SIS; they seem to infer that technology like SIS is legacy, that customers do not actually benefit from the storage reductions, and that SIS is in fact being removed to provide a performance benefit.

How Microsoft can conclude that SIS is no longer useful is beyond me. Exchange customer use cases are all different, but in the field, regardless of what Microsoft thinks, SIS - however small the saving - reduces storage costs for most organisations, and it has made other areas of Exchange more efficient, such as reducing backup windows and the associated restore times for Exchange databases. The justification from MS seems to be that, compared with 4-5 years ago, disk is cheaper and bigger. They are right, and if they compare this to DAS-connected environments I have no issue with it.
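As an aside, a minimal sketch of the single-instance idea - store each unique attachment once, keyed by a content hash, and give every mailbox a reference to it. This is only an illustration of the concept, not how Exchange implements it:

    import hashlib

    class SingleInstanceStore:
        """Toy single-instance store: identical attachments are kept once."""

        def __init__(self):
            self.blobs = {}       # content hash -> attachment bytes (stored once)
            self.mailboxes = {}   # mailbox -> list of content hashes (references)

        def deliver(self, mailbox: str, attachment: bytes) -> None:
            digest = hashlib.sha256(attachment).hexdigest()
            self.blobs.setdefault(digest, attachment)            # store payload only if new
            self.mailboxes.setdefault(mailbox, []).append(digest)

        def stored_bytes(self) -> int:
            return sum(len(b) for b in self.blobs.values())

    store = SingleInstanceStore()
    report = b"x" * 1_000_000                     # a 1MB attachment
    for user in ("alice", "bob", "carol"):        # emailed to three people
        store.deliver(user, report)
    print(store.stored_bytes())                   # 1,000,000 bytes, not 3,000,000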
My issue with the disk-is-cheaper argument, however, is that most large organisations like mine do not use DAS for large-scale Exchange, and large enterprises avoid it for some of the following reasons:
  • DAS does not provide volume snapshot capability for backup and restoration activity,
  • DAS storage volumes cannot be replicated to a secondary offsite or local array,
  • Backup windows with DAS compared with a SAN are not even worth illustrating; backup across the wire with DAS is unquestionably going to be slower for large volumes of Exchange data,
  • You cannot clone a DAS storage volume non-disruptively and quickly in the background like you can on a SAN, which is useful for things you should regularly perform, such as production backup integrity tests,
  • You have a dependency between host and storage with DAS; you can move or change a Fibre-connected server much more easily than a DAS-connected one,
  • Try providing cache priority or QoS to a DAS volume!
  • Try managing DAS remotely and from central consoles!
  • On a TCO front, a SAN most probably provides much better cost and operational savings compared with pockets of large storage pools on DAS.

I'm not a SAN bigot (maybe j[...]



Does HP-UX provide historic roots to take on VCE?

2009-11-09T22:27:09.148+00:00

You might be wondering why I have mentioned HP-UX and VCE in the same post, and come to think of it you're right to. First, though, you may be wondering what HP-UX is. Well, it's HP's Unix OS, which ships and runs on their Integrity line of servers. It's a rock-solid Unix platform that runs as the big iron in a lot of companies I know of: it has built-in partitioning (or rather containerised virtual capability), provides comparable performance to alternative RISC-based processors, has been ported to Itanium, and has integration, high availability and a management layer, etc. So, cutting to the chase, the reason I've posted is this: with the recent VCE announcements, VMware/Cisco/EMC have put the wind up most big vendors in the last week, I am sure, by finally sh%ting or getting off the proverbial pot and creating an alliance. Fair play to them - it's a dog-eat-dog world; look at how HP acquiring EDS shook the world of services...

So to date HP has in its armoury the storage, the servers, and the services and integration with EDS, but one important ingredient is missing from the equation to offer customers that alternative. What is it? Yes, the hypervisor...

With HP's previous experience of running an internal development programme with HP-UX, it is quite possible they could release a hypervisor themselves, with the capability to build a commodity converged solution to compete head-on against something like VCE. Think a Xen-based hypervisor with openness naturally built in, providing customers with the portability and flexibility to move images between other Xen-based hypervisors and, dare I say it, hybrid clouds with EC2...? HP surely have the internal resources to do this; they have done it for years with HP-UX already.

So this may be a zany thought, but when you look at Oracle having the Sun and Virtual Iron portfolios under its belt, and now VCE being fully announced, isn't HP seriously going to suffer if it wants to conquer all, like so many of the vendors currently in the datacentre space? Somehow I don't think pushing the Hyper-V and Xen OEM deals more aggressively, in spite of VMware being part of the VCE alliance, will be enough to avoid losing at least some larger customers...



SSD....maybe not as widely adopted

2009-11-06T15:05:20.669+00:00

This news item made me chuckle: it's about how STEC shares plummeted this week due to EMC cutting back on an initial order for the shiny, fast, expensive SSDs that STEC makes for the DMX/VMAX.

EMC are very clever in how they market and promote the success of relevant technologies; in fact the top-tier vendors are kings at this in the world of tech, and I really respect them for it. It fits the cliche of flogging a fridge/refrigerator to an Eskimo, as people (not me) will most likely buy under that presumption.


2009 has certainly been the year SSD got marketed and promoted as the killer disk medium for high-IO-intensity applications; at most events, in most magazines, on vendor blogs and so on they have been pushed like crazy. However, after the STEC news I think it is quite clear that 2010 will be the year when the hard work of shifting very small-capacity, very expensive SSD disks really begins for EMC!



Cloud fail? Get outa here...

2009-10-13T22:00:43.977+01:00

So I thought I'd post my views on the T-Mobile/Danger outage, alongside the massive volume of views that have been plastered all over Twitter and the cloud blogosphere. I won't go into technical detail on the outage; I'll go philosophical on the whole thing.

First question being asked: was it the Cloud that failed?

First things first: I (and many others) have mainly been annoyed that a vast array of short-sighted people and reporters have described this as a "Cloud" failure. I won't go into the definition of Cloud, as I have many posts that provide this (and will continue to as Cloud evolves), but what I will say is that this outage was by no means a Cloud issue. Yes, I repeat, it was not a Cloud issue. The issue was a managed-service issue: a service that was being managed by Danger, which T-Mobile had outsourced to Danger, and which Danger was contractually obliged to provide. Turn the clock back 5 years and this outage would have been described as just that... an outsourcing failure. It is important to state that the delivery method of the Cloud (the internet) didn't fail end users, and the cloud computing/storage strategy didn't fail (hardware did); the crux is that the managed-service company's obligation and duty of care to operate it successfully failed.

Now flip to the other side of the coin. Danger were paid to provide a service and software to T-Mobile; that payment was probably minimal, tightly screwed down on overall price, with zero investment in any extended, expensive SLAs. And since T-Mobile is the only operator with egg on its face here, they were the only operator using Danger's services and thus most likely the primary source of infrastructure investment funds for avoiding such outages; as anyone in business knows, a single large customer is never enough. So before your eyes is a complete vicious circle of why this thing was doomed to fail: two companies hedging their bets on a business model supported by shoestring infrastructure services while trying to reap every financial reward going. The Cloud is not to blame; greed is.

"Everything happens for a reason"

Now that rant is over, let me reflect on why I think this outage was a good thing. Sadly, in line with the evolutionary theory of natural selection, this little chick of MS/Danger had to fail to make the current leaders in the Cloud industry stronger and more capable. And with this outage, I wonder how many Cloud/managed-service providers reviewed their backup and continuity strategies this week ;)
Think of it as a lesson learnt - unfortunately learnt by T-Mobile, who I am sure will settle in court and recover the lost revenue, and by the consumer, who will also gain recompense. It is a lesson that can be taken on board across the dark world of IT services and outsourcing, from the bare basics of the IT services outsourcing model, where the "your mess for less" mentality is rife, through to the bleeding edge of IT in the Cloud world of services such as Amazon EC2 and the many other cutting-edge suppliers that are yet to break through the adoption curve into the mainstream, thanks in part to less high-profile outages. Additionally, Mr Vinternals provided an excellent viewpoint, highlighting that example issues like this can be used by those of us who struggle to educate management that "you get what you pay for"; I would add that the lack of visibility into what you are purchasing and using as a service, no matter how good or reputable the service provider, should never be taken for granted. An example of this i[...]



Cloud "Are you ready" - Part 2 - PaaS

2009-10-04T20:53:09.568+01:00

Part two of my "Cloud, are you ready?" series focuses on Platform as a Service, more commonly abbreviated as PaaS. Now, one disclaimer for parts 2 and 3: I am an Infrastructure guy; I am not skilled or greatly knowledgeable in application architecture. That notwithstanding, I am aware of the emerging trends and strategy present in most cloud-based technology and in the emerging vendor PaaS offerings. Hopefully you won't be disappointed that this part focuses less on cloud readiness than Part 1 and more on fast facts picked up while researching current PaaS solutions and strategy. I wouldn't want to blog authoritatively about something where a little knowledge has the potential to be dangerous; I merely provide running views and opinions gathered.

Like many Infrastructure bods, I am having to learn a lot about PaaS and SaaS strategy extremely FAST. This shows that Infrastructure bods, who used to be conscious only of the bottom of the stack, need to focus on and diversify into alternative datacentre strategies and methods for delivering core services. I am mightily interested in, and also quite concerned by, the potential power of PaaS and SaaS to change how Infrastructure delivery works. I am concerned that if I don't get a grip on the basics of PaaS and SaaS I will be left out in the cold while app teams start to scale environments and the supporting IaaS themselves; with "Cloud" simplifying strategy, I am very skeptical this won't reproduce the same issues that occur in common architectural strategy when app bods do this in today's world.

Now PaaS, I've heard of that...

PaaS is growing exceptionally fast and has large volumes of industry interest. One of the main reasons is that PaaS still has app development at its heart: coders beaver away at code, and they can carry on coding happily with PaaS. New PaaS strategy doesn't involve as big an initial shift as, say, the changes IaaS has brought to how things get delivered.

A key advantage of hosted PaaS cloud solutions for developers is that they can code anywhere, with anyone, through central repositories across the internet. Example PaaS cloud providers include Microsoft Azure (currently in beta) and Google App Engine, and you also have purpose-built frameworks such as Adobe AIR and MS Silverlight. Some additional characteristics of PaaS include:
  • PaaS development frameworks are used to build the components and services presented at the SaaS layer; examples include .NET, PHP, Java, etc.,
  • PaaS applications have the potential for improved intelligence in how they interact with the IaaS layer, giving developers more statistics - I expand on this later in the post,
  • PaaS is being built with more openness; PaaS providers are building platforms (and this is not a complete U-turn) with SDKs and frameworks that support a cross-range of different frameworks - even Microsoft are doing this with Azure, with things like a Java SDK.

PaaS intelligence

The potential intelligence for PaaS developers to exploit within this architecture is immense. For example, with a PaaS-built application the developers can, in almost real time, interrogate and investigate every intricate detail of what consumers or users of the application are doing and what they want from it, mainly because the PaaS services hosting the SaaS applications run across the internet and across various service buses.
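As a minimal sketch of that kind of telemetry - nothing to do with any specific PaaS provider's API, just an illustration of instrumenting a request handler so usage can be interrogated in near real time:

    import time
    from collections import Counter

    usage = Counter()   # in-memory tally; a real platform would ship this to a metrics service
    latencies = []

    def instrumented(handler):
        """Wrap a request handler so every call records what users asked for and how long it took."""
        def wrapper(user: str, action: str, *args, **kwargs):
            start = time.time()
            result = handler(user, action, *args, **kwargs)
            usage[(user, action)] += 1
            latencies.append(time.time() - start)
            return result
        return wrapper

    @instrumented
    def handle_request(user: str, action: str) -> str:
        return f"{action} done for {user}"   # stand-in for real application logic

    handle_request("alice", "upload_photo")
    handle_request("alice", "upload_photo")
    handle_request("bob", "search")
    print(usage.most_common(1))              # [(('alice', 'upload_photo'), 2)]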
From a security standpoint this is very important to bear in mind if you are considering using a public PaaS cloud provider such as Microsoft Azure or Google App[...]



Abiquo Abicloud

2009-10-02T16:16:11.243+01:00

Came across this little gem today: http://www.abiquo.com/en/products/abicloud It looks like a very slick, freely available open-source Private and Public Cloud manager.

The product video shows some screen captures (although the woman narrating sounds a bit strange), which is quite cool, with features similar to those seen in a lot of the cloud managers.

We do seem to have an emergence of these tools, with the likes of CohesiveFT and Eucalyptus being the dominant players; I will probably have a proper look into this one and post some reviews/comments on it.

Thanks

Dan



Who is next?

2009-09-28T21:15:22.939+01:00

Perot Systems was bought last week by Dell, and ACS goes this week to Xerox, so the question is: who will be next? Other big names that may possibly go include:
  • Atos
  • CSC
  • Fujitsu
  • Cap Gemini
  • Computacenter
  • Logica
  • Steria

It is amazing how much consolidation is occurring in the industry today, and how quickly, and it's quite alarming what the big powerhouses are prepared to do to jump onto the IT services bandwagon, especially when you hear about outsourcing pricing being whittled down to the bare minimum profit margin while customers demand much more out of their contracts.

Now, if you were Xerox or Dell, why would you buy an IT services integrator and provider? Many thoughts come to mind, such as:
  • Entry into the IaaS space,
  • Building a portfolio of offerings ready for the economic upturn,
  • Improving indirect sales - they will claim to remain agnostic, but the longer-term goal is to move customers onto their platforms/solutions,
  • Quashing competition, namely IBM, Oracle and now HP after EDS.

Future predictions

Oracle buy FJS, or even possibly CSC. Why do I think this? They both have credible history and large customers within the government and public sector, and they provide further international coverage across multiple continents (especially FJS); lastly, a possible consideration for purchasing Fujitsu is that it builds the current SPARC chips, and as Oracle has committed to staying in the hardware business I expect it will want control of the production line. Another: Cisco are a possible buyer for Accenture; they apparently work together on current engagements (how cute), and I see them being closely matched in terms of target audience. Accenture is very expensive and carries a large price tag, but Cisco is one of the few organisations capable of buying them, HP/EDS style. Cisco might even go for an Indian outfit like TCS or HCL; they are into emerging markets and probably have large outsourced operations over there already, so maybe a good, cheaper fit?

Whatever the next acquisition is, it will raise an eyebrow; with emerging countries making the world a smaller place and company valuations lower due to the economy, it is bound to happen.[...]



Virtualisation V1.0...V2.0...V3.0

2009-11-06T22:42:29.713+00:00

In between working on Part 2 of the "Cloud, are you ready?" series, I thought I'd post some thoughts on virtualisation strategy within the datacentre, and how I think we need to knuckle down to get virtualisation to do what it is supposed to do for businesses: do more with less, yet provide more agility and flexibility.

Virtualisation version 1.0, in which the industry performed basic DC consolidation and sat reaping the benefits drinking the Kool-Aid, has been, and still is, the predominant phase. In V1.0 the hypervisor provided organisations with extremely good performance, clever turn-key benefits for consolidation, supporting functionality such as vMotion, DRS and HA, and excellent provisioning opportunities. It probably sounds a bit unfair to sum up current virtualisation this way given the hard work VMware and its development team have done so far, but I am sure even VMware would admit that for x86 virtualisation to truly succeed in the datacentre - and, more importantly, to become the de facto server platform within it - it needed to diversify. That diversification was evident: they bought companies such as Dunes (Lifecycle and Stage Manager) and Akimbi (Lab Manager). So hopefully this post doesn't come across as saying VMware was not business-ready or serious about datacentres; they are, and they got us off one-app-per-server and bloated physical servers through consolidation so we could SAVE BIG BUCKS/POUNDS to invest in IT somewhere more effective.

Enter the dragon

So enter the next phase of virtualisation: the 2.0 strategy, built upon the feature-rich, business-focused products that have evolved and grown in the ecosystems of the big virtualisation companies like VMware. This stage is still predominantly in its infancy (and when I say infancy, my statement was backed up recently when about 5 hands out of 50 at a local London VMUG knew what Lifecycle Manager was). V2.0 builds upon the solid Virtualisation 1.0 consolidation fundamentals, using the later enablement technology in VMware - the available orchestration tools, chargeback, capacity management tools and self-service portals - to streamline IT delivery and services.

Further afield of virtualisation strategy, and one step higher up the technology stack, is Cloud Computing (or strategy). This combines large amounts of the technology surrounding the V2.0 ecosystem - web-based service catalog interaction and billing, granular chargeback tools, rich APIs and much more - and provides it through many different service delivery methods to organisations. Whether Public or Private, Clouds all have to use such technology and process automation to meet cost and service expectations.

So the question I have is: do we see V2.0 strategy in full adoption within the industry yet, and ready to evolve further? Based on feedback from events and from blogs, I'm not entirely sure it is. Maybe I'm not asking the right audience, but I do feel adoption is not yet broad enough to say it is there. Given that businesses want to do more with less, and how extremely process-centric IT is today, I feel it is right for IT in organisations to move to this V2.0 engine, and fast.
Without this automation and closer interaction with our business processes, we will suddenly start to hit the same issues we hit with physical sprawl, and our virtualised worlds will become an untenable situation to provide justificati[...]



Cloud - Are you ready? - Part 1

2009-09-17T23:40:35.306+01:00

Cloud computing, and the overall strategy, is currently mostly at the early-adopter stage of its lifecycle. However, the popularity of approaches such as metered server usage, shrink-and-grow capability and per-hour costs for workloads - set against what businesses want from IT with today's limited budgets - means the strategy is quickly turning into a solution that will most likely become mainstream in the next 2-3 years as a common core business strategy for delivering and supporting core services.

This three-part series of posts will hopefully provide an overview of each of the component layers that formulate a cloud strategy. I will detail example attributes that exist within the relevant component layers, and I will also provide some operational and architectural readiness advice so that you can exploit the cloud strategy and reap its inherent benefits.

Cloud stacks

Cloud Computing is not a tool or a piece of hardware you buy, it is not software you buy off the shelf, and it is not something you can buy from an SI or an outsourcer. So to your average IT professional it has a very confusing definition, which can be very misleading when trying to adopt and preach the strategy to internal business leaders and potential sponsors for your initiatives. For this series of posts I define each component that builds up the typical Cloud Computing stack. There are many views on what the definition of cloud computing is; on observation, in the proprietary world Cloud Computing is being defined through the "as a Service" layered-component approach, which I fully agree with and embrace for simplicity. Below is a diagram (or an attempt at one) showing the component layers that build the Cloud stack and the relevant attributes within each "as a Service" section that we commonly know today in most areas of IT and the datacentre.

The important thing to note with the Cloud stack shown above is that each component and running attribute within it is portable, naturally decoupled from the other layers, and can easily shrink/grow on demand according to business requirement. A model example of something within this stack is a typical web application: the underlying server workloads are needed (IaaS), the middleware to run session data and to broker information to/from a database is needed (PaaS), and the presentation layer of the web application that feeds data to and from end users (SaaS) is used; a minimal sketch of this mapping appears at the end of this post. Cloud Computing is defined as all of these services and components glued together to become a complete, extensible engine for your datacentre. To gain a broader picture of how the Cloud and its stacks fit together privately and publicly, check out my post on my future vision of operating cloud services across public and private cloud infrastructure.

So is your current IT ready for cloud?

Adopting a service-oriented view of hosting infrastructure, whether privately or publicly, means you will need to introduce changes and amendments to how you operate IT today and how your business processes operate. You probably think you are going to struggle to adopt a cloud strategy; however, you are not the only organisation that won't initially be able to transition successfully to cloud-based services with its current IT and processes.
Most organisations, big or small, have barriers in the way of adopting new strategy; for larger enterprises it is usually a financial barrier, current outsourcer lock-in, or a people/political barrier, and for smaller s[...]
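As promised above, going back to the web-application example in the Cloud stacks section, here is a minimal sketch of how its pieces map onto the layers; the layer names follow the post, everything else is an illustrative placeholder:

    # Toy mapping of the web-application example onto the "as a Service" layers
    cloud_stack = {
        "IaaS": ["web server VMs", "database server VM", "virtual network", "block storage"],
        "PaaS": ["middleware running session data", "broker to/from the database"],
        "SaaS": ["browser-facing presentation layer consumed by end users"],
    }

    def describe(stack: dict) -> None:
        """Print each decoupled layer; in the post's model any layer can shrink or grow on demand."""
        for layer, attributes in stack.items():
            print(f"{layer}: {', '.join(attributes)}")

    describe(cloud_stack)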



VMworld 2009 Aftermath

2009-09-08T20:32:18.574+01:00

After attending VMworld last week I was pleasantly surprised to see some of my previous blog posts almost matching what VMware are now presenting as future strategy - the use of vCloud and IO DRS were two in particular - and no, I have not been under NDA for months.

The event was about average compared with previous years; vendors were out in force with marketing in full fifth gear, and the word C*oud was being used on, I kid you not, probably every vendor stand except the beer stands at the welcome party. Amusing as it may sound, this was rather frustrating: I am a believer in cloud and the architectural principles that surround it, but simply bolting the word cloud onto your sales pitch simply doesn't cut it with me - if anything it sends me the other way.

Back to the show. On the tier-1 vendor stands I got to spend some decent time with Dave Graham talking EMC Atmos, checked out VMAX, saw some of the cool UCS kit on the Cisco stand (and had a laugh selling it to some random guy), spent some time struggling to find out if only Layer 2 is supported with Long Distance vMotion (don't ask), spent a moderate amount of time asking questions on the VMware stands about new tech, and got plenty of free, useless goodies that I will probably never use or wear.

Once I've had time to study the sessions I attended in more detail and look at the sessions I wasn't able to attend, I will most certainly be writing some core material. Amongst others, this will include discussion of SpringSource and the vCloud direction in further detail. After attending detailed sessions on vCloud I've started to understand in more depth how it will work, and I will endeavour to provide some technical learning material, views and opinions on it, along with some posts on Storage IO DRS based on what VMware presented.

Keep posted folks!!![...]



Cloud - Likely to bite you on the arse

2009-08-26T23:29:23.963+01:00

I overheard an interesting discussion today between some application guys (who are purely fictional). They were saying they had spoken to a respected application bod from another company at a recent networking meet-up who is currently using cloud services for application services; they then proceeded to say they would be looking to engage with a cloud provider currently offering a Cloud application-based service in beta (which is also fictional, as this one works ;)). Additionally, one of the fictional guys said they could see excellent benefits to be reaped from turnkey production environment roll-outs and development environments that can be built up on demand in the "Cloud".

I'm sure this type of conversation alone is enough to put the fear of god into any Infrastructure bod. From my standpoint, yes, it's scary to hear this stuff; however, the goal of this post is that I am only concerned with the issues and the approach to how cloud is being adopted, not what the Cloud replaces. I am an advocate of Cloud, and this type of talk should be embraced in IT departments and between teams. I do worry, though, when I hear this type of discussion, because it means cost and IT budget go to waste when they should be invested using the correct methodology and process for implementing new solutions in organisations. My immediate thought on a conversation like this is that if cloud adoption is approached in this manner it has potentially dangerous consequences for IT departments as a whole.

My concern with this type of shortcutting lies in the possible emergence of business application people being blinded by the cloud vendors' marketing hype - blinded like moths to a lamp - because the cloud providers make their service offerings seem relatively simple to adopt for the applications you currently have. Most Infrastructure guys will know that for years we have had to implement and design around poorly written and poorly designed application stacks; and no, this isn't just bespoke apps, it includes COTS and proprietary middleware/databases that we now struggle to virtualise, and that force us into excessive server sprawl through strategies such as one app per server, whether virtual or physical.

So enter into our lives the cloud provider, offering the opportunity to change the current deployment process for business applications and services and to bypass any current Infrastructure design authority and the people who make the engine run today in support of the current bloated app stack. It all seems so easy, doesn't it? It would do, to an application bod; they tend to think differently and approach IT differently from Infrastructure types. This is one of the reasons we struggle to sell server virtualisation to application bods: they want platforms that provide above and beyond, not just "enough" workload; they simply don't understand consolidation, and to be honest, why would they?

If application people approach a Cloud strategy in the way I described, I think it will no doubt end up being great for the cloud providers, but I feel it will end up being bad for the organisations that adopt it in such a blase manner. My reasons are that we still have many unanswered questions about Cloud in enterprises, and the fact that cloud is still at the bleeding-edge stage of the adoption curve. Using the Cloud without ensuring that you have a finely tuned hosted application may actually mean services could (and [...]



Tech Review - PHD Virtual esXpress

2009-08-20T15:45:46.869+01:00

I posted a few weeks ago about PHD Virtual Technologies' release of esXpress version 3.6; in this post I delve into each new feature and the benefits you can achieve with your virtualised backup strategy.

Company background

PHD Virtual Technologies was formed in 2002 and is based out of Mount Arlington, New Jersey, USA. They have well over 1600 customers, a base that includes big enterprises such as Siemens, Barnes and Noble and Tyco, and they are extremely popular in the SMB and academia spaces.

Technology background

PHD provides organisations with a cost-effective, robust and solid backup tool. esXpress is now in its third generation and has been designed to combat the problem of backup in virtualised estates since the ESX 2.x days. The third-generation release provides core functionality and technological advances that give PHD a leading edge against other players in the space and allow them to compete for enterprise custom thanks to the extensive technical offering.

PHD utilise VBAs (Virtual Backup Appliances) to perform backup and restore to either an RDM disk connected to your ESX hosts, an NFS share, or even a VMDK file. VBAs are something PHD has used architecturally within its product set since 2006, and they enable backups to be performed with the following benefits for your virtualised estate:

- Negates the need for VCB agents/proxies, so esXpress can remove the need for agent-level backup on VMs hosted on entry-level ESX license versions
- Removes the need for backup agents within VMs or the use of VCB, meaning you can virtualise more due to the reduction in host overhead
- In environments currently using VCB, it removes the need for a VCB proxy server, reducing the cost of that server and any SAN requirements
- Each backup VBA can perform 16 concurrent jobs; competitors max out at 8 (more detail on enhancements later)
- VBAs are fault tolerant in that they can be VMotioned to another host in the event of disaster striking
- VBAs can be installed extremely quickly via OVF import

esXpress 3.6 benefits

vSphere 4 support - 3.6 provides full support for vSphere 4 and also includes a plugin for the vCenter client. Additional administration is performed via a web interface, which lets you manage the backup environment from any desktop without having to install GUIs wherever you go to check and manage backups.

Data deduplication - Massive reductions in backup space requirements can be achieved with this feature. When VMs are backed up with VCB using conventional backup products like Backup Exec or even NetBackup, inline dedupe is not natively possible in the underlying backup software; you need an appliance or an additional piece of software to do it, at additional cost. esXpress 3.6 provides inline data deduplication completely free of charge within your license entitlement, and dedupe in esXpress is claimed to provide up to a 25:1 ratio. The real magic in esXpress happens on what PHD call their dedupe appliance. This is built on the PHD SAN appliance, available to download at http://www.phdvirtual.com/products/virtualization-utilities, which can also give you the option of shared VMFS across local DAS storage on your ESX hosts without buying a full backend SAN. When using the dedupe appliance and backup method, incremental VM res[...]
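As an aside, for anyone unfamiliar with how block-level deduplication achieves ratios like the 25:1 quoted above, the rough idea is to hash each block of backup data and store each unique block only once. The Python sketch below is purely illustrative and says nothing about PHD's actual implementation; the fixed 4 KB block size, the SHA-256 hashing and the contrived test data are my own choices for the example.

    import hashlib

    def dedupe_blocks(data: bytes, block_size: int = 4096):
        """Split a byte stream into fixed-size blocks and keep each unique block
        only once, keyed by its SHA-256 digest. Returns the unique-block store
        and the ordered list of digests needed to rebuild the original stream."""
        store = {}   # digest -> block contents (each unique block stored once)
        recipe = []  # ordered digests, one per original block
        for offset in range(0, len(data), block_size):
            block = data[offset:offset + block_size]
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)
            recipe.append(digest)
        return store, recipe

    # Toy data contrived to be highly repetitive, as identical VM blocks tend to be:
    # 25 identical 4 KB blocks reduce to a single stored block, i.e. 25:1.
    backup = (b"A" * 4096) * 25
    store, recipe = dedupe_blocks(backup)
    logical = len(recipe) * 4096
    physical = sum(len(b) for b in store.values())
    print(f"logical {logical} bytes, stored {physical} bytes, "
          f"ratio {logical / physical:.0f}:1")

Real products do this inline, at far larger scale and with smarter block handling, but the space saving comes from the same principle: the more similar your VMs are, the fewer unique blocks actually hit disk.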



As if by cloud magic...

2009-08-16T15:15:28.239+01:00

In my last post I discussed the Microsoft Azure cloud service and highlighted that I felt companies competing in the cloud space, such as VMware, are now diversifying past the hypervisor to reach their Cloud aspirations. VMware have since gone and done me a favour, made me look like a visionary of extreme intelligence, and bought SpringSource!

I have had all week to digest the views and opinions of various industry analysts and bloggers on the SpringSource acquisition, so here is my attempt (ARSE COVER DISCLAIMER - I am not a software architect/coder/guru/white-sandals-and-socks wearer, so excuse any rubbish) at predicting where the purchase will lead VMware's current business model and what will evolve from the acquisition. Lastly, I also highlight what the industry needs from any of VMware's cloud offerings.

VMware - now not just a virtualisation company?

VMware are now kick-starting themselves into providing multi-tiered service offerings, and with this latest acquisition are slowly creeping up the layers of the stack, past just the underlying datacentre virtualisation technology. This natural growth is, I feel, mainly down to demand from the industry and from current customers for more agile cloud-based services and coverage across more of the datacentre.

The SpringSource acquisition enables VMware to move into the PaaS (Platform as a Service) market. It effectively means they can provide organisations with an end-to-end operational stack, from the underlying virtual machine workload through to the presentation layer, for running JVM-type workloads and web services. With this framework VMware can provide presentation-layer building blocks alongside technology such as vApp, going further than the solid underlying platform infrastructure present within the VDC-OS initiative. Customers can also draw confidence from the partnerships and alliances formed during the datacentre virtualisation boom with the likes of EMC, Cisco, IBM and HP. What will also attract customers is the fact that Spring is open source and common knowledge to most people in development, meaning future developments and offerings won't be a closed shop like Azure (we hope). Overall, within the cloud space I believe VMware will probably have the competitive edge in attracting an enterprise customer base when pitched against competitors such as Microsoft and Google, mainly because of their core datacentre infrastructure values and their reputation as the company that established the de facto baselines for capable server virtualisation and consolidation products.

What does this mean for an Infrastructure bod?

Some people may be asking themselves (like me), "Well, VMware have bought these guys; what does that mean for me and my current investments in VMware technology?" From my observation the Spring acquisition isn't going to be a tactical point purchase like Propero, B-hive and Dunes; I see Spring as the key driver for VMware to start providing a sustainable, reputable cloud-based offering both for current enterprise customers and for new potential customers who would otherwise be considering Azure and Google App Engine. Looking at the available hooks within Spring and Spring.NET, I presume you will even be able to extend and use the app components available upon Microsoft A[...]



Azure - Microsoft's new baby

2009-08-09T20:35:38.068+01:00

I've noticed over the last week or so a bit of a rise in Hyper-V bashing, so I thought it was time to write a post on where I think this whole thing is going longer term, rather than just comparing the technical nuts and bolts of the underlying virtualisation tech as it stands today.

Looking at how Hyper-V has performed in the analyst popularity stakes since its release, I don't think market share is any better for Microsoft. Analysts are saying that really only Citrix XenServer is the competitor making single-percentage gains on VMware's market share, and if anything Uncle Larry at Oracle is likely to gain more uptake for Oracle VM now that the Sun acquisition has finally closed.

My view (if you want it) on Microsoft's strategy for Hyper-V is that they are now shifting their concentration to the Azure cloud platform rather than the underlying datacentre server virtualisation platform. My justification is that, in all reality, Microsoft see cloud as almost a software layer and an extension of services you run in your datacentre today, such as Exchange and SQL. Azure, being hosted in Microsoft's datacentres, will no doubt rely on Hyper-V, but to be honest it is the interface and the components above it that Microsoft will concentrate on exploiting; they are not in the game to make money from a hypervisor, which is why it ships as an inclusive product with Windows 2008.

Microsoft would never be taken seriously running capacity planner exercises and engagements to help you configure optimal infrastructure platforms for large consolidation ratios, and I feel neither will Gold Certified MS partners (most are VMware resellers anyway). Instead, I feel Microsoft will stick to what they know best and have their internal development teams run such a beast within the application arena. That kills two birds with one stone: they can pursue a strategy similar to Google's while gaining a foothold in the cloud service space. There is speculative rumour that the domain www.office.com has been bought and registered for a new online version of Office, probably arriving around the time Office 2010 does, and Microsoft also has Exchange 2010 on the roadmap for next year, tailored for cloud environments. So, if nothing else, keeping their focus on the hypervisor arms race with Hyper-V and flogging a possibly dead horse would risk losing ground to Messrs Page and Brin.

Microsoft will no doubt spend less development money on Azure than on designing hypervisor-related technology; most of the development I expect will come from blueprinting already done on things such as Live Services, MOSS and collaboration tools like MSN Messenger. And you don't necessarily need server virtualisation capability to run a cloud; you can provide ASP-like services with just a fully optimised application stack, which is something Microsoft has a better chance of providing with current portfolio offerings such as MOSS and Exchange and with future technology on the horizon, meaning there is no need for them to focus on the underlying platform.

Cloud computing still has a rather large volume of unanswered questions and is still very much at the bleeding-edge stage, but it is clear that even the likes of VMware are not focusing on the hypervisor platform as much and are having to diversify and concentrate effo[...]