Subscribe: A New IT World
http://itphasechange.blogspot.com/atom.xml

A New IT World



The world of enterprise IT is changing - from server to desktop and everywhere in between. You've heard the buzzwords: service architectures, loosely coupled applications, virtualisation, serialisation, network storage, network processing. These are just…



Updated: 2014-10-03T08:26:32.585+01:00

 



Moving bloghost

2006-02-13T15:58:38.196+00:00

I'm moving this blog to Wordpress.com.

Future entries will be at itphasechange.wordpress.com. I've already made a copy of all the content there - so please update bookmarks and RSS feeds.

The new RSS feed is at: http://itphasechange.wordpress.com/feed/.

I'll be tidying up the new site over the next week or so. See you there!



The Hypervisor Wars

2006-01-11T17:43:16.536+00:00

I've realised I've mentioned the idea of the hypervisor wars without explaining what I mean by it. The underlying virtualisation technologies used in Intel's VT and AMD's Pacifica currently only allow a single VM manager to run. This means that the VMM (the hypervisor) installed has an incredible amount of power - it controls what runs and how it runs. Install yours first, and the machine is yours - especially if you lock your hypervisor into TPM or similar security mechanisms.

So what would the hypervisor wars mean? Firstly, an end to the open systems model that's been at the heart of enterprise IT for the last 25 years. If Microsoft and VMware fell out, VMware could reduce the priority of Windows partitions. Other hypervisors might have licensing conditions that make it impossible to run non-free OSes as clients. You could end up with a situation where each OS installation would attempt to insinuate its own hypervisor onto the system partition. Security partition developers may find that they are only able to code for one set of hypervisor APIs - locking end users into a closed platform.

The end state? Co-opetition breaks down, the industry becomes enclaves built around hypervisor implementations, and the end user finds that they're unable to benefit from the possibilities of an open hypervisor architecture.

Can we avoid the hypervisor wars? Optimistically, I think we can. There are pre-requisites: we need an agreed hypervisor integration architecture, and we need it quickly. Let VMM developers compete on ease of operation and management, not on who controls your PC.



How long before there's an Apple Hypervisor?

2006-01-11T19:43:09.530+00:00

One thing to note about the new Apple Intel machines is that the Yonah chipset supports VT. With Apple saying that they'll let Windows run on their hardware, the question is - will they let a third-party hypervisor run? I suspect not - especially if they are using TPM in secure startup mode. Of course, they'll first need to enable VT in whatever BIOS they're using...

So will Apple produce its own hypervisor, or will it badge a third-party tool? My personal suspicion is that Apple doesn't have the skills to write its own hypervisor (there are only a limited number of people with the deep combination of hardware internals and OS knowledge required, and they're mainly at Microsoft and VMware), and that they'll instead announce a partnership with VMware at WWDC. Unless Apple's been hiring the Xen dev team on the sly...

Apple will quickly need to gain the high ground in managing virtualisation on its platform - as it'll need to maintain control of OS X running as a VM. Otherwise, will Apple be the first casualty of the hypervisor wars?



Opening up the Lightroom

2006-01-09T19:51:26.056+00:00

Adobe's new Lightroom is, as they say, the bee's knees.

Fast, responsive and ideal for working with RAW images, it takes the best of CameraRAW and Adobe Bridge and turns them into a one stop shop for basic image manipulation and comparison. Best thought of as a digital lightbox, its adaptive UI makes it easy to hide the elements you don't need and just concentrate on the images. An image workflow tool, it helps you manage how you work with images - and how you capture them.
Lightroom Beta lets you view, zoom in, and compare photographs quickly and easily. Precise, photography-specific adjustments allow you to fine tune your images while maintaining the highest level of image quality from capture through output. And best of all, it runs on most commonly used computers, even notebook computers used on location. Initially available as a beta for Macintosh, Lightroom will later support both the Windows and Macintosh platforms.
Which means it runs quite happily on my aging G4 PowerBook (unlike the G5-optimised Aperture).

That's not to say that Lightroom is competition for Aperture.

This is more a first look at how Adobe is rethinking what people are doing with the Photoshop toolset, and putting together the beginnings of a script-controlled service framework for its next generation of imaging applications. It's a model that fits in nicely with a conversation I had recently with Adobe's CEO Bruce Chizen (which should be in the next issue of PC Plus), where we talked about Adobe's strategic direction after the Macromedia acquisition. I'll leave the conversation to the article - but one thing: I think Adobe is one of the companies that bears watching over the next 3 to 5 years.

(I'm glad I can talk about it now - I saw it in December, and was very impressed at the time - unfortunately I'd had to sign an NDA.)

Betanews notes that there won't be a Windows version until Vista hits the market. I'm not surprised. I strongly suspect that Microsoft is working with Adobe to make Lightroom one of the apps that will be demoed at the Vista launch. The UI of the version that Adobe demoed back in December would work very well on WinFX - it's ideal for XAML. Microsoft has had Adobe on stage showing proof-of-concept XAML applications in the past, so having it showing shipping code at the launch would make a lot of sense...

Cross posted to Technology, Books and Other Neat Stuff




Manage Your VMs

2006-01-05T12:18:25.206+00:00

Here's a useful post from the always interesting Scott Hanselman, linking to hints and tips on how to use VMs more effectively.
There's a number of generally recommended tips if you're running a VM, either in VMware or Virtual PC, the most important one being: run it on a hard drive spindle that is different than your system disk.
It's good advice. I'll be moving my set of VMs to a separate SATA drive on my main PC. However, sticking them on a fast USB 2.0 drive looks to be a sensible approach as well.

An interesting thought occurs - will we see hardware designed for hypervisors and hardware virtualisation coming with many hard disks? Or will we see a caching layer used, passing operating systems into partitioned cache RAM?





Open the APIs and they will come?

2005-12-17T21:34:16.606+00:00

It's a truism of the service world that open APIs mean more developers working with your public services. Google is a good example of this, and it's doing it again by opening up its Talk service with an interesting set of functions, as described on TechCrunch. Libjingle looks very interesting (and probably something for me to think about with my Server Management messaging editor hat on). Looking quickly at Google's announcement, we see a collection of tools that could make it a lot easier to build collaboration applications:

We are releasing this source code as part of our ongoing commitment to promoting consumer choice and interoperability in Internet-based real-time-communications. The Google source code is made available under a Berkeley-style license, which means you are free to incorporate it into commercial and non-commercial software and distribute it.

In addition to enabling interoperability with Google Talk, there are several general purpose components in the library such as the P2P stack which can be used to build a variety of communication and collaboration applications. We are eager to see the many innovative applications the community will build with this technology.

Below is a summary of the individual components of the library. You can use any or all of these components.

  • base - low-level portable utility functions.
  • p2p - The p2p stack, including base p2p functionality and client hooks into XMPP.
  • session - Phone call signaling.
  • third_party - Non-Google components required for some functionality.
  • xmllite - XML parser.
  • xmpp - XMPP engine.
Looks interesting. The related Google Talkabout blog has just gone on to my blogroll...



Platforms and stacks

2005-12-13T10:50:17.606+00:00

I've written a bit on the idea of stacks as a key component of next generation computing environments, but they're only part of the story. Once you've implemented a stack, and are using it to deliver services, you need to group the services together, and add a management layer to show usage and predict future operational needs. The resulting architecture can best be described as a platform - as it's the foundation for a range of SOA processes. Amazon has been slowly turning itself into a platform, and they've just turned their search engine into a public managed platform. Alexa's been around a long while, but it's turning itself into a set of services - managed (and priced) using a utility computing model. An interesting move, from an SOA pioneer.



Sun becomes Wilkinson Sword

2005-12-05T00:39:09.103+00:00

While I noodle away at my thoughts on licensing for the next generation of IT systems, Sun is being surprisingly innovative. Not only are they moving their software sales model to support services, but they're also using the same model to get developers onto their hardware. In the US you can get a shiny new 64-bit Opteron powered Sun Ultra 20 Workstation for only $30 a month (payable a year in advance). Sign up for 3 years support for Sun's OS and dev tools, and the hardware comes free. An interesting approach. It'll also be interesting to see how the rest of the Java tools world responds. Will BEA start giving away its tools, to drive people to the AquaLogic and WebLogic platforms? Time will tell.



Reusing interfaces

2005-11-25T16:01:48.650+00:00

Richard Veryard has some interesting things to say about reuse in the SOA world. It's a problem I've been thinking about, too - but from a very different direction. Reuse isn't just about using the same piece of code again and again across your business' many applications. It's also about ripping and replacing code without affecting all the applications that use it. In the past reuse has been avoided because this element of the philosophy could have undue effects on key business operations...

SOA changes the status quo. The key seems to be that effective SOA demands what I think of as "interface first" design. Often thought of as "design by contract", this approach fixes the properties, methods and events offered by a service. What it doesn't do is define the code that delivers the service elements. If an application only needs to be aware of a service's interfaces, then an application instance can be switched from using service V1.0 to V1.1 without affecting operation, as long as V1.1 offers the same service interfaces as V1.0. A major change, say V2.0, could still offer the V1.0 interfaces at the old service URI, with new functions at an alternative service URI. Rip and replace without affecting consuming applications. A definite benefit of the SOA world.
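The interface-first idea can be sketched in a few lines (Python here purely for illustration - the service, class and method names are all invented):

```python
from abc import ABC, abstractmethod

# The contract is fixed first: the methods the service offers.
class QuoteServiceV1(ABC):
    @abstractmethod
    def quote(self, symbol: str) -> float:
        """Return a price for the given symbol."""

# Version 1.0 of the implementation.
class QuoteServiceImpl_1_0(QuoteServiceV1):
    def quote(self, symbol: str) -> float:
        return {"ACME": 10.0}.get(symbol, 0.0)

# Version 1.1 reworks the internals (say, adding a cache) but keeps
# the V1.0 interface, so consumers are unaffected by the swap.
class QuoteServiceImpl_1_1(QuoteServiceV1):
    def __init__(self) -> None:
        self._cache = {"ACME": 10.0}

    def quote(self, symbol: str) -> float:
        return self._cache.get(symbol, 0.0)

# The consuming application codes against the interface only.
def display_price(service: QuoteServiceV1, symbol: str) -> str:
    return f"{symbol}: {service.quote(symbol):.2f}"

print(display_price(QuoteServiceImpl_1_0(), "ACME"))  # ACME: 10.00
print(display_price(QuoteServiceImpl_1_1(), "ACME"))  # ACME: 10.00
```

Swapping V1.0 for V1.1 changes nothing for the consumer - which is exactly the rip-and-replace property described above.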



Avalanche in (limited) operation

2005-11-24T13:20:46.946+00:00

It appears from this blog entry that Microsoft is starting to use its Avalanche P2P distribution network in anger... With the shift to two-year release cycles for stack components, and monthly CTPs, I suspect it won't be long before this becomes common practice for all betas and for MSDN.



A comparative release

2005-11-16T01:50:17.833+00:00

Windows Live is to Windows as Xbox Live is to Xbox. You can't have Windows Live without Windows - but you can have Windows without Windows Live. This is Microsoft showing that it has a presence on all the layers of the next generation computing stack. It's a logical move - and nothing to do with Microsoft's rivalry with Google. The MSN brand needs reworking - and bringing elements of it closer to the Windows platform makes a lot of sense, especially with the Vista wave of tools pushing Microsoft's XAML-powered Smart Client vision. I wonder how much the live.com domain cost the folk at Redmond?



Got a big database? Stick an API on it.

2005-10-31T23:17:41.236+00:00

Preferably as many as you can. Then you've got a really useful service that people can build into their applications. Here's one to look forward to: the BBC's programme catalogue.
It turns out there's a huge database that's been carefully tended by a gang of crack BBC librarians for decades. Nearly a million programmes are catalogued, with descriptions, contributor details and annotations drawn from a wonderfully detailed controlled vocabulary.
7 million rows of data going back to the 1930s. Wow.
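"Stick an API on it" can be as simple as a lookup endpoint over the catalogue. A minimal sketch (the records, field names and function are invented - the real BBC data would obviously look different):

```python
import json

# A toy in-memory "catalogue" standing in for the real database.
CATALOGUE = [
    {"id": 1, "title": "Nightly News", "year": 1936},
    {"id": 2, "title": "Radio Drama Hour", "year": 1948},
]

def get_programme(programme_id: int) -> str:
    """A REST-style lookup: one record as JSON, or a not-found error."""
    for record in CATALOGUE:
        if record["id"] == programme_id:
            return json.dumps(record)
    return json.dumps({"error": "not found"})

print(get_programme(2))
# {"id": 2, "title": "Radio Drama Hour", "year": 1948}
```

Wrap that function in any HTTP front end and the decades of librarians' work becomes a service other applications can build on.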



Singularity: A Next Generation OS

2005-10-29T18:21:23.000+01:00

Here's a fascinating document from Microsoft Research detailing work on Singularity. It's an OS designed to support languages like Java and C# - so has been designed to support partitioned memory spaces, and to handle dependable code.
SIPs are the OS processes on Singularity. All code outside the kernel executes in a SIP. SIPs differ from conventional operating system processes in a number of ways:
  • SIPs are closed object spaces, not address spaces. Two Singularity processes cannot simultaneously access an object. Communications between processes transfers exclusive ownership of data.
  • SIPs are closed code spaces. A process cannot dynamically load or generate code.
  • SIPs do not rely on memory management hardware for isolation. Multiple SIPs can reside in a physical or virtual address space.
  • Communications between SIPs is through bidirectional, strongly typed, higher-order channels. A channel specifies its communications protocol as well as the values transferred, and both aspects are verified.
  • SIPs are inexpensive to create and communication between SIPs incurs low overhead. Low cost makes it practical to use SIPs as a fine-grain isolation and extension mechanism.
  • SIPs are created and terminated by the operating system, so that on termination, a SIP’s resources can be efficiently reclaimed.
  • SIPs execute independently, even to the extent of having different data layouts, run-time systems, and garbage collectors.
SIPs are not just used to encapsulate application extensions. Singularity uses a single mechanism for both protection and extensibility, instead of the conventional dual mechanisms of processes and dynamic code loading. As a consequence, Singularity needs only one error recovery model, one communication mechanism, one security policy, and one programming model, rather than the layers of partially redundant mechanisms and policies in current systems. A key experiment in Singularity is to construct an entire operating system using SIPs and demonstrate that the resulting system is more dependable than a conventional system.
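The exclusive-ownership rule for inter-SIP communication is the striking bit. Here's a toy sketch of the idea (plain Python standing in for Sing#; Singularity enforces the transfer statically at compile time, while this only mimics it at runtime):

```python
import queue

# A toy channel where sending transfers exclusive ownership of the
# message: the sender's reference is revoked, so two "processes" can
# never access the same object at once.
class Channel:
    def __init__(self) -> None:
        self._q: queue.Queue = queue.Queue()

    def send(self, holder: dict, key: str) -> None:
        # Take the object away from the sender, then enqueue it.
        message = holder.pop(key)
        self._q.put(message)

    def receive(self):
        return self._q.get()

sender_space = {"msg": {"op": "read", "block": 42}}
ch = Channel()
ch.send(sender_space, "msg")

assert "msg" not in sender_space  # the sender no longer owns the object
received = ch.receive()
print(received["op"])  # read
```

The runtime revocation here is a crude stand-in, but it shows why closed object spaces need no memory-management hardware for isolation: after a send, only one side can ever touch the data.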
Something to keep an eye on - this could be the type of approach needed to deliver modular OSes that run on hypervisors.



Flock on Newsnight

2005-10-27T21:23:00.976+01:00

Newsnight will be featuring a segment on the Web 2.0 movement and Flock in particular. While I'm a definite Web 2.0 sceptic, I think Flock is an interesting example of a next generation client application, providing a single (relatively) consistent user interface to a number of different applications that expose functionality via web services and other open APIs such as ATOM.



Lucite in Redmond tonight...

2005-10-27T19:12:20.193+01:00

There'll be some shiny Lucite Ship It awards on those desks in Redmond tonight, as, according to head of Developer Tools Somasegar's blog, Visual Studio 2005 and .NET Framework 2.0 have shipped. The code will be on MSDN later today. Microsoft's first real SOA-oriented development platform. I wonder what people will build with it...



Hosted Microsoft

2005-10-27T15:35:12.576+01:00

InformationWeek's article "Coming From Microsoft: 'Hosted Everything'" doesn't come as a surprise after this year's PDC and a conversation Mary and I had with Orlando Ayala (MS VP, Small and Medium Solutions and Partner Group) a couple of weeks ago... They're seriously looking at how they offer services to the SME marketplace. Tools like the Centro version of Windows Server (think of it as Small Business Server for medium-sized businesses) and the Dynamics approach to business applications will help, but there's a lot to be said for exposing platform components as hosted services - especially when you take into account the role of the Windows Workflow Foundation and Indigo. After all, most SMEs don't have full-time IT staff, so how can you hope to have them use Axapta or even BizTalk? SOA for medium businesses is going to require hosted components - but components that can be remixed. Microsoft's history of relationships with the channel and ISVs makes me think that it will provide tools that can be adapted to work the way your business works, rather than the other way around...



Well, it is "meta" after all...

2005-10-27T15:14:31.586+01:00

An interesting blog entry and associated Infoworld article from Jon Udell on The many meanings of metadata.
The solution is a complex recipe, but we can find many of the ingredients at work in the emerging discipline of SOA (service-oriented architecture). We use metadata to describe the interfaces to services and to define the policies that govern them. The messages exchanged among services carry metadata that interacts with those policies to enable dynamic behavior and that defines the contexts in which business transactions occur. The documents that are contained in those messages and that represent those transactions will themselves also be described by metadata.
It's a concept that's closely related to what I think of as one of the key tenets of the SOA philosophy: Interface-first Design. You can't have loosely-coupled applications working together if you don't know how you're going to wire them together. You need to have a defined interface that can be used first as part of a test harness while you build and QA your service, and then as a contractual relationship between two businesses. The interface is the description of a service's capabilities. It really doesn't matter what sits behind the interface, as long as the interface is stable.
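The test-harness role of the interface can be sketched like this (Python, with invented names): a stub that honours the contract lets you build and QA consumers before the real service exists, and a contract check confirms any replacement still fits.

```python
from typing import Protocol, runtime_checkable

# The agreed interface - the "contract" both sides code against.
@runtime_checkable
class OrderService(Protocol):
    def place_order(self, sku: str, qty: int) -> str: ...

# A stub used as a test harness while the real service is built.
class StubOrderService:
    def place_order(self, sku: str, qty: int) -> str:
        return f"TEST-{sku}-{qty}"

def checkout(service: OrderService, sku: str, qty: int) -> str:
    # The consumer never cares what sits behind the interface.
    return service.place_order(sku, qty)

stub = StubOrderService()
assert isinstance(stub, OrderService)  # the stub honours the contract
print(checkout(stub, "A100", 3))  # TEST-A100-3
```

(Note that `runtime_checkable` only verifies that the methods exist, not their signatures - a weaker check than a real contract, but enough to show the shape of the idea.)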



Flockery

2005-10-21T12:44:33.580+01:00

Flock, a Firefox variant for working with blogs and social bookmarking services, is now available as a developer preview. We heard a lot of buzz about it when we met up with the folks from various blog search companies in September. Now to see if it meets the hype. I've configured it to work with my main two blogs on two different services. It's relatively easy - as long as your blog host supports ATOM, and you know the URI of the ATOM API for your blog. You can download a build here if you want to give it a spin.



Play on (safely)

2005-10-19T18:43:19.080+01:00

Want to send a demo of your latest SOA stack to your users but don't want them to install a whole server just to look at what you're doing? You could give them a nice virtual machine image to try out, all loaded up with your code and a set of demo scripts. But do you know if they even have VMWare or Virtual PC installed? There's a quick answer in the shape of VMware Player:
VMware Player is free software that enables PC users to easily run any virtual machine on a Windows or Linux PC. VMware Player runs virtual machines created by VMware Workstation, GSX Server or ESX Server and also supports Microsoft virtual machines and Symantec LiveState Recovery disk formats.
It's a free download, so should make it easier to distribute demos as virtual machine images. So now all we need is a library of base images. VMware has thought of this too - so pop along to VMware's Virtual Machine Center to find ready-to-run stacks from Novell, Red Hat, BEA, IBM and Oracle - as well as VMware's own Browser Appliance secure browsing tool. So, now for the big question, will Microsoft bite the bullet and do the same with Virtual PC? It looks like it has to.



More virtualisation companies come out of the woodwork

2005-10-19T15:08:15.080+01:00

Today's discovery is Parallels, Inc., who contacted us after our Guardian piece on Intel's processor roadmap. Looking at their web site, it appears that they're going straight for the hypervisor market:
Parallels Enterprise Server, expected in mid-2006, will be a pure-hardware server virtualization and management solution that enables IT professionals to create several isolated independent virtual servers running Windows, Linux, OS/2, or FreeBSD on a single host physical server. Parallels Enterprise Server’s pure hardware implementation pools hardware resources and then dynamically allocates them to virtual servers as necessary, ensuring that each physical server is used to its maximum potential, and that each virtual server always has the resources it needs to operate efficiently.
It sounds like Parallels' Enterprise Server will be something that will work straight with Intel's VT and/or AMD's Pacifica, removing the need for a host OS - and giving a fairly hefty saving on system resources. So who will have the first hypervisor on the market? Parallels? Xen? VMware? Microsoft? It's an interesting race that's lining up now - with plenty of competition and scope for differentiation and innovation.



It's the stack that matters...

2005-10-10T16:38:30.426+01:00

I've been thinking about what will make a successful company in the Computing 5.0 world. There's a lot of hype about platforms, but I think that things go a lot deeper. To be successful in tomorrow's IT world companies must be able to point to a stack of components that includes their software and hardware - alongside open standards that help them work in a heterogeneous environment. If you think that you will control everything, then you're unlikely to get anywhere. Very few companies seem to understand the stack, and how it fits together.

EMC seems to have been one of the first to seize on the stack model, understand where it could add value, and make the appropriate acquisitions and partnerships to move forward. By understanding that intelligent storage and virtualisation were key components of any future service architecture, EMC chose its stack components carefully. Leaving open interfaces at all levels, it's in the process of integrating Documentum and Legato into its storage hardware - making rules-based content and object management part of your infrastructure, not your applications. It doesn't matter what else you add to the stack - EMC has made sure that it's in an excellent position for the next decade.

BEA is another company that understands the stack. Focusing on middleware as the new infrastructure, and providing the tools to integrate with a range of different middleware technologies, as well as orchestrating processes, BEA is facing a challenging couple of years. However, if it sticks to its guns and keeps its focus I suspect that it will become a big winner. Its acquisition of Plumtree makes a lot of sense here, as BEA seems to have realised that its strengths lie in building the next generation of network infrastructure. Other players that seem to be making a bid for Computing 5.0 include Google, SAS, Microsoft and Yahoo!.

There are companies that have failed to understand how the stack will fit together. Oracle has made the mistake of buying its customer base twice over with recent acquisitions. While its Fusion model appears to offer a "plug and play" middleware approach, there appears to be too much of a SAP-style reliance on fixed business processes and specific ways of working. IBM is missing the synergy between its WebSphere applications and its hardware, as well as Lotus' knowledge management tools. Its cosy relationship between its platform business and Global Services is a danger, as it could lead to complacency.

Others are focusing on niches that are too small. Apple may well end up dominating the living room, but you can only do so much with iTunes. Adobe may make some headway with its Acrobat LiveCycle, but it needs to work more closely with companies like EMC. Otherwise it'll become purely a developer of UI design tools.

Of course there are plenty of small companies out there who will bring their expertise to the table. The open source world is rapidly becoming stack-based, and companies like MySQL and JBoss seem ready to work with stack support vendors to produce integrated platforms. The blended model that BEA is attempting with Eclipse and Apache looks to be an interesting alternative.



Pedal to the Metal

2005-10-09T14:32:48.583+01:00

Andrew Ducker made some interesting comments about my last entry, where I suggested that new silicon technologies would significantly change the role of the operating system. One thing I failed to mention was that I was positing some future version of the .NET CLR that would focus purely on computation and on network connectivity. This would significantly simplify the task of writing a "raw metal" CLR.

Yes, Microsoft would have the problem of maintaining two different CLRs - a server/middleware version and a client version - however, the architecture that Microsoft is driving towards through the various Windows Foundations wouldn't preclude such a move. In fact it would make it easier - leaving Microsoft with a componentised CLR that could be tailored for specific tasks. After all, not everything needs a UI, and the .NET CLR is part of a mature application server platform. A componentised CLR would also allow Microsoft to synchronise releases of the Compact Framework with the full .NET system.

The Windows hypervisor would be able to manage multiple "raw" CLRs effectively, allowing system managers to get much more bang for the buck without the OS overhead. And perhaps then Microsoft backends could take advantage of intriguing new technologies like Azul Systems' network-attached processing.



No more operating systems

2005-10-07T17:32:12.996+01:00

Intel's VT and AMD's Pacifica are probably the most revolutionary technologies around. Until the arrival of chips with these technologies, Virtual Machine Monitor products like VMware, Xen and Virtual Server needed to be complex tools that managed and intercepted low-level commands from the client operating systems, and marshalled them through the host OS for execution. It's a process that can be slow (and often is). By providing a silicon basis for virtualisation, the CPU companies have changed the role of the VMM from a complex piece of software that needs to marshal OS functionality at an application level to that of a partition loading, marshalling and management tool: a hypervisor. Hypervisor-managed client operating systems will have access to all the resources of their memory partition.

In fact, with a well-written hypervisor do we need an OS at all? There's work going on to deliver OS-less operations. We've already seen Intel demonstrate task-specific partitions with thin operating layers. But what if the partition was running a version of an existing virtual machine, like Java or .NET? BEA announced at its recent Santa Clara BEA World that it was working on a version of its JRockit JVM that wasn't going to need an operating system. It would be controlled by a hypervisor (possibly Xen) and run in its own partition.

This is an important move for the industry - it completely changes the dynamics of the relationship between operating system vendors and everyone else. If your J2EE containers can run in their own partitions, using network storage, then there's really very little need for today's memory-hungry operating systems on service servers - just load up a JVM with your container and your service application. Using a hypervisor it'll be easy to add processing resource as required - and move the partition from compute resource to compute resource.

It's easy to envision a world where the OS is layered and partitioned across a number of virtual machine spaces. In some there'll be hypervisor-managed JVMs, in some security monitors, in some there'll be task-specific OSes (perhaps a web server, perhaps a file store manager, perhaps a desktop OS), all communicating through shared memory using TCP/IP and XML. I suspect that we'll see Microsoft delivering a hypervisor-controlled version of the .NET CLR in a similar time frame to their post-Vista OS.
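The "task-specific partitions talking XML" picture can be made concrete with a toy sketch. Ordinary Python functions stand in for hypervisor partitions here, and the message schema is entirely invented:

```python
import xml.etree.ElementTree as ET

# Two "partitions" - here just functions - exchanging an XML-encoded
# request and reply, the way task-specific OS images might talk over
# shared memory or TCP/IP.
def file_store_partition(request_xml: str) -> str:
    request = ET.fromstring(request_xml)
    path = request.findtext("path")
    reply = ET.Element("reply", status="ok")
    ET.SubElement(reply, "path").text = path
    return ET.tostring(reply, encoding="unicode")

def web_server_partition() -> str:
    request = ET.Element("request", op="read")
    ET.SubElement(request, "path").text = "/var/www/index.html"
    return file_store_partition(ET.tostring(request, encoding="unicode"))

reply = ET.fromstring(web_server_partition())
print(reply.get("status"))     # ok
print(reply.findtext("path"))  # /var/www/index.html
```

The point of the sketch is the decoupling: because the partitions share only a wire format, each one could carry its own thin OS, JVM or CLR without the others caring.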



SOA Governance

2005-10-04T19:01:48.896+01:00

Managing the development of service oriented architectures will be very different from managing single application developments. For one thing, architects will need to coordinate the development of services across the business, while juggling an alignment of their IT strategy with the overall business strategy. It's important to think about this issue - and there's an interesting paper in Microsoft's Architecture Journal from Richard Veryard and Philip Boxer on "Metropolis and SOA Governance".
Summary: In the service economy, we expect service-oriented systems to emerge that are increasingly large and complex, but that are also capable of behaviors that are increasingly differentiated. As we shall see, this is one of the key challenges of Service Oriented Architecture (SOA), and is discussed in this article.
It fits quite nicely with a piece I wrote for the Guardian back in March: The SimCity Way.
Managing software development in a large organisation can be tricky. IT directors need to juggle scarce resources while delivering applications and services that respond to business needs. It is a complex task, and one that often looks more like managing a portfolio of investments - or playing a particularly complicated game of SimCity.



Ning: a web-based social software UI development tool

2005-10-10T16:42:19.983+01:00

So Marc Andreesen's 24 Hour Laundry has left stealth mode and launched the first web-based development tool for social applications: Ning. It's worth looking at the Ning Pivot to see just what people are building. Ning's definitely a Computing 5.0 application - using a web-based social application framework as a front end to a wide range of web services. Its Content Store is an interesting tool - an object database with strongly typed data (yes, folks, it's the Newton's Soup for the web!). An XML programming language or a custom version of PHP help build apps that can use SOAP to connect to remote services, as well as Ning-hosted services. Interestingly all application source code is visible to all developers, so you can build your app on top of someone else's code - code reuse the old fashioned way. Layout guidelines make sure that all Ning applications look similar, with a standard structure for each page. There's also a set of AJAX tools to make it easier to design complex user interfaces - and instructions for linking to Zend and Dreamweaver as developer . This is going to be interesting to play with. I've signed up for the Developer program, so will report back on how things look from the code side of the fence.