
iTKO Blog Moved to http://blog.itko.com



BLOG MOVED to blog.itko.com. SOA & Enterprise Integration Testing, Validation and Virtualization, Software Quality, and IT Governance discussion missives, with iTKO Founder/Chief Geek John Michelsen and other iTKO executives. Please visit the current blog at http://blog.itko.com.



Last Build Date: Thu, 20 Aug 2015 05:00:26 +0000

 



Move to the New iTKO Blog

Wed, 04 Jun 2008 20:28:00 +0000

After more than 3 years of blogging, we've finally moved our discussion on SOA Testing, Validation and Virtualization to a dedicated new home within the itko.com site. So point your RSS feeds and news pages to: http://blog.itko.com. The new blog will feature much easier searchability, navigation and a simplified set of categories so you can get to the information you're looking for much faster. We've also researched and added links to a lot more blogs we like reading. And of course, expect even more frequent content updates, ideas and best practices from John and the rest of the iTKO team.



Can Virtual Environments take Performance & Load Testing?

Tue, 20 May 2008 04:32:00 +0000

We've talked a lot in previous posts about how the practice and technology of Virtualization really has legs -- it keeps moving forward, from hardware virtualization, to virtual test beds, to virtual endpoints, to actually simulating the behavior of the software itself, which we're calling Service-Oriented Virtualization (or "SOV" if you need a TLA for it). Now we are seeing the Performance Lab getting in on the action. For interconnected apps like SOA and serious enterprise integrations, the guys with the load testing firepower have tools like LoadRunner and SilkTest in their lab, but they get left out of the process until very near the end, when an interface is available. SOV can break that dependency of waiting for "all the moons to align" before they can get a test window. The initial uses of SOV were to allow the development and testing teams to regain agility much earlier in the lifecycle -- so they could do their needed functional and regression testing against Virtual Services instead of constrained live applications -- the essential services, databases and mainframes in the environment. But that same virtualization is perhaps even more valuable in the performance lab, if you can apply serious load testing to it. The constraints of having a realistic environment and test data to develop and test against are holding these teams back from finding performance issues much earlier -- early enough to gauge SLAs (service levels) at the component level. And in SOA -- where you are dealing with services and underlying systems that are distributed and constantly changing -- replicating that whole environment is incredibly costly and time consuming. With a virtual service environment, the performance team tests the component they are working on with their existing load testing tools, and virtualizes the rest of the system dependencies and data away.
Rather than add hardware and bandwidth, just virtualize all of that, then see if it is indeed the hardware or, more likely, something in their component or its response to variable, changing data that is causing the bottleneck. We had a good conversation with analyst Theresa Lanowitz from voke -- no stranger to advising companies on the application lifecycle and ensuring quality -- about this concept. She's going to be co-hosting an upcoming webinar on this topic with iTKO's chief geek John Michelsen and InfoWorld's Test Center editor Doug Dineley on May 28: http://www.itko.com/site/resources/vsewebinar052808.jsp Hope you can join us for this event; if not, we've written a paper on performance testing in a virtual environment, and we'll continue to talk about this practice here.
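To make the idea concrete, here is a minimal sketch of a "virtual service": a stand-in that answers like a recorded dependency, so a component can be load-tested without the live backend. Everything here (names, operations, data) is hypothetical and illustrative -- it is not the LISA API.

```python
# A hypothetical virtual service: respond from recorded traffic so the
# component under test never has to touch the real backend system.

RECORDED = {
    # (operation, argument) -> canned response captured from the real service
    ("getQuote", "ACME"): {"symbol": "ACME", "price": 42.50},
    ("getQuote", "GLOBEX"): {"symbol": "GLOBEX", "price": 17.25},
}

def virtual_service(operation, argument):
    """Respond from recorded traffic; fault on anything unrecorded."""
    response = RECORDED.get((operation, argument))
    if response is None:
        return {"fault": f"no recording for {operation}({argument})"}
    return response

# The performance team can now drive heavy load at the component under
# test while this stub absorbs all the dependency calls:
for _ in range(10_000):
    assert virtual_service("getQuote", "ACME")["price"] == 42.50
```

The point of the sketch is the shape of the trade: the stub costs nothing to scale, so adding load no longer means adding backend hardware.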



WS-FalseSecurity: Are you at risk?

Mon, 24 Mar 2008 15:07:00 +0000

In our previous post, we mentioned how the notion of standards in SOA can be anything but standard in practice. This is largely because the standards are generally generated by whatever tool you are using to create services – it’s not something you construct by hand. “Compliance” is never a 100% proposition. Well, I wanted to take a moment to talk about the idea of Security in an SOA world. We’ve seen some similarities in this space – particularly, that WS-Security is thought of as a means to ensure that SOA applications are actually secure in practice. We've seen from our customers that QA and IT operations teams actually do have access to test mainframes in production - to run test suites and validate that unauthorized parties can't get in. That level of rigor around access got harder to maintain when we moved to a services-based model of software -- but the security technology of some leading vendors and platforms is doing a pretty good job of validating and keeping up with WS-Security, even if it comes in several proprietary flavors in execution. However, in reality, compliance with web services standards like the WS-Security protocols will never even come close to providing the level of security we need from enterprise applications. Most of your business logic and behaviors -- beyond ensuring the “hookup” for the services themselves is using some form of this protocol -- are happening far below the service in the functioning apps and systems of record. These systems are owned and managed by different teams, inside and outside your domain of authority. So what is the answer? More detailed testing of the OASIS definitions of commonly encountered violations and loopholes at the WS-Security layer? Automated unit testing of security protocols is important – we do it in every engagement. However, we need to remember that a set of 100, or 10,000, different protocol checks is still just that – it is only testing for the kinds of attacks we already expect.
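A protocol check of the kind described is easy to sketch, and the sketch shows the limitation too: it passes or fails on structure it already expects, and says nothing about what happens inside the application behind it. The envelope XML below is deliberately simplified; only the WS-Security namespace URI is the real OASIS one.

```python
# A minimal protocol-level check: does a SOAP envelope carry a
# WS-Security header at all? (Simplified XML; illustrative only.)
import xml.etree.ElementTree as ET

WSSE = ("http://docs.oasis-open.org/wss/2004/01/"
        "oasis-200401-wss-wssecurity-secext-1.0.xsd")

def has_security_header(envelope_xml):
    """True if a wsse:Security element appears anywhere in the envelope."""
    root = ET.fromstring(envelope_xml)
    return root.find(f".//{{{WSSE}}}Security") is not None

secured = f'''<Envelope><Header>
  <Security xmlns="{WSSE}"><UsernameToken/></Security>
</Header><Body/></Envelope>'''

unsecured = '<Envelope><Header/><Body/></Envelope>'
```

Ten thousand checks like this one still only catch the attacks they were written to look for -- which is exactly the argument above.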
We can set up wire fences and checkpoints, but what happens once you get past that point of entry into the application? Take for example a Sales department that is leveraging an SOA application to collect buyer interest and respond to customer opportunities, but is not automating its own process as a secure service. Since the management insists on a spreadsheet-based process, the sales guys copy or export the customer data into a spreadsheet, send that spreadsheet all over the company, then enter orders back into the SOA application based on the results of the sales meeting and customer calls. What happens when that spreadsheet inadvertently gets attached and emailed to a supplier? Or the IT guy who creates a handy service that scours the application for all available products and inventory, and then publishes that for a reseller to tie into their own order application. Soon word spreads, and 100 resellers are drawing data out of that service - creating a performance and availability nightmare that was perfectly authorized, but never intended. The biggest loophole in your systems hasn’t changed. It’s still the guy in the cube next to you, or the partner on the other side of the transaction. It is the unexpected behaviors of that next service you hook up with your own. The agility of SOA – its greatest strength as an architectural model – is also its greatest weakness. It just means that there are many unintended ways the Security of your applications can be compromised, besides the misuse or improper use of the WS-Security protocols. Real trust will not come about simply through compliance. Security in SOA will be a matter of coming to grips with the way an unknown quantity of users and other services could exploit your services in unintended ways, and how the services you depend upon can possibly compromise your carefully planned exterior when they are passing transactions through your application. 
That level of prevention simply can't be automated - it becomes a matter of [...]



Wring ROI out of your UDDI Registry/Repository by plugging in SOA Validation…

Fri, 21 Mar 2008 02:19:00 +0000

We’ve been talking about validating SOA Governance approaches for three years now, but surprisingly, we have found that very few enterprise IT shops of any serious scale are actually using them to their potential at this point. I had lunch yesterday with one of our wily gurus on this topic, Ken Ahrens, and he aptly noted that the practice of SOA Governance just hasn’t kept up with the grand expectations we had of it. Why? It may be that these companies haven’t seen any tactical value in that registry, which they picked up along with the rest of their SOA shopping spree. They are more concerned with the integration of their existing and new technology assets. They found that while they can put all of the Service descriptions and locations in one place, it didn’t add a whole lot of incremental ROI if the developers in that department already knew about the Services they were planning to use. You see, a lot of people like the concept of Governance and having Policies, but it’s not necessarily practical all by itself for the way businesses construct and leverage applications. This happens because SOA Governance without Validation doesn’t provide any assurance that it’s actually meeting the business requirements set forth in the Policies. In essence, it is like posting a speed limit, but not having the radar gun to ensure that drivers are obeying the rules of the road. Think about the gap between what the BUSINESS is trying to do, and the actual IMPLEMENTATION of the technology that makes it happen. The further you are from the actual implementation logic, the more difficult true validation becomes: the more layers of abstraction you have, the less likely all of these layers will work together when you hook them up, and the harder it can be to validate behaviors at other layers that may affect the one you are testing. It can be a long road indeed to dig that deep.
You might have a great idea of what you actually want to do, but that may not be grounded in a way that all of the teams are addressing it. This is becoming even harder in today’s economy, where you have a lot of partnerships, acquisitions and outsourcing – and therefore very little control over the day-to-day activity of the development teams you are relying upon. Being able to connect a UDDI registry, where you store everything in one place, with a strong Validation strategy can be a big advantage in this environment. But to make it work, that process has to be continuous. There are several types of Continuous Validation:
1. Checking continuously at Build Time. Every time you check in a new service, you automatically validate that it works before it becomes available as an asset.
2. Continuously checking on a Scheduled Basis. We call this the “belt & suspenders” approach of making sure that your applications are still working as described, as you may not know if the service was changed by another party, or brought down by a performance issue or dependency problem.
3. Leveraging UDDI to make your BPM and Integration tools, and their associated tests, find the appropriate services for the workflow they are validating. With UDDI v3, it's even easier to read your endpoints, and that can be integrated with many different tools.
4. Reporting on Usage and Value. Figure out which services are popular, and are becoming key components of the SOA architecture. For organizations looking to "trim the fat", this gives the SOA team the knowledge of where to focus testing, capacity planning, and future integration and development labor.
All of the above are examples of how the Validation practice can add teeth to that SOA Governance effort. So we’d like to catch issues at Change Time, and validate each service then, but we also need the additional safety of checking structural, behavioral and performance factors at runtime, and reporting on the success [...]
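The first of those types -- validation at build time -- can be sketched in miniature. Everything below is hypothetical (the registry is a dict, the checks are toy predicates); the point is only the gate: a service lands in the registry if and only if its whole validation suite passes.

```python
# A hypothetical build-time validation gate in front of a registry:
# broken services never become discoverable assets.

def validate_service(service):
    """Run every check attached to the service; return the failures."""
    return [name for name, check in service["checks"].items()
            if not check(service)]

def promote_to_registry(service, registry):
    """Register the service only if all of its checks pass."""
    failures = validate_service(service)
    if failures:
        raise RuntimeError(f"{service['name']} failed validation: {failures}")
    registry[service["name"]] = service["endpoint"]

registry = {}
order_service = {
    "name": "OrderService",
    "endpoint": "http://example.internal/orders",  # hypothetical endpoint
    "checks": {
        "has_endpoint": lambda s: bool(s["endpoint"]),
        "responds": lambda s: True,  # a real check would call the service
    },
}
promote_to_registry(order_service, registry)
```

The scheduled checks in type 2 are the same suite run on a timer against what the registry already holds, rather than at check-in.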



So you have a SOA Management dashboard. How do you drive it?

Thu, 07 Feb 2008 18:07:00 +0000

Lately we've done a lot of research and publishing on how Dev & QA teams can ensure better quality throughout the software development and release lifecycle. But what about the IT Operations guys? It seems we are encountering a disconnect between the Enterprise Architecture and Integration disciplines that are moving to SOA, and the people who need to monitor and maintain these systems in deployment. There are a lot of companies starting to focus on better SOA Governance, which includes the management of business processes and workflows, and the services and applications behind that. Great stuff happening on that side of the IT shop. However, often the actual running applications are still in the purview of IT Operations teams. Don't get me wrong, it is a great thing that we have a team that is manning Mission Control -- hopefully providing an impartial report and monitoring function that is crucial, when you have live customers depending upon these apps in deployment. And there are mature tools out there that provide this kind of live "dashboard" of how the implemented apps are performing, like TIBCO Hawk, HP OpenView, IBM Tivoli, CA/Wily, etc. So why would these IT Operations teams be interested in SOA testing and validation? Well, it's really about "causing the cause" of performance and latency issues in these systems. If you are trying to prove that live SOA infrastructures are ready to roll, it's not enough to sit and watch the dashboard for that "Check Engine" warning light when a condition is caused by too many customers using those apps. The point I'm making is to use automated Test Execution as a way to drive expected and unexpected behaviors into your SOA Monitoring dashboards. This can be done both at integration time in staging, and against live applications, if you can provide a test window of time to make this process happen. 
For instance, we recently worked with a large energy industry customer who wanted to prove that their live implementation could handle a realistic profile of varied transactions in their system. So instead of waiting for a rush, they manufactured one -- by executing automated test transactions into the system, and monitoring the performance results within their Operations dashboards. Then they could feed the same data out to their SOA Testing platforms to tell the tests whether they had performed according to service levels. We recently put out a little news on this kind of integration - it's a pretty exciting way to apply Test & Validation to the Monitoring applications that have been around and evolving for some time: http://www.itko.com/site/company/news_65.jsp To me, that is a better machine - and it takes away a lot of that disconnect between Development and Integration teams, and the Operations people who are ultimately answerable for customer service levels.
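The feedback loop described above -- drive synthetic transactions, then grade the observed response times against a service-level target -- can be sketched as follows. The SLA threshold and the timings are made-up numbers; in a live run the timings would come back from the monitoring dashboard rather than being canned.

```python
# A sketch of grading a manufactured "rush" against an SLA target.

SLA_MS = 200  # hypothetical service-level target, in milliseconds

def grade_against_sla(observations):
    """Split observed transactions into an overall verdict and violations."""
    violations = [o for o in observations if o["elapsed_ms"] > SLA_MS]
    return len(violations) == 0, violations

# Canned timings standing in for what the Operations dashboard observed
# while the synthetic load ran:
observed = [
    {"txn": "placeOrder", "elapsed_ms": 120},
    {"txn": "placeOrder", "elapsed_ms": 340},  # breaches the SLA
    {"txn": "getStatus",  "elapsed_ms": 45},
]
ok, violations = grade_against_sla(observed)
```

Feeding the dashboard's numbers back into pass/fail test results is what turns a passive "Check Engine" light into an active verdict.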



The Continuous 'C'

Tue, 22 Jan 2008 15:00:00 +0000

Several weeks back I gave you a real-life experience that typified why the test strategy we call the Three C’s has Complete as one of those C’s. This post will give you a different real-life experience on why Continuous deserves “C” status. I was meeting with a Director of Development at one of the world’s largest financial institutions. Many of their trading applications have absurd levels of complexity and variability along with, of course, radical performance expectations. Really cool stuff. A certain critical service was running on a major SOA platform. The vendor released an update to that platform, and the team responsible for this service ran their unit and service-level tests -- doing more than most teams I’ve seen. Turns out, they all passed, and in fact the service performed its main function 12 milliseconds faster -- COOL! Then again, maybe not. They deployed the patch and watched their entire currency trading system grind to a near halt! Wait, you sped up the service and slowed down an application? Yes. It turns out the performance of the service had been preventing timing issues in the orchestration layer from surfacing when applications used it. Of course it wasn’t obvious to the team that this was in fact the issue -- they were blaming the vendor, then combing their new code changes for how they could have wrecked the performance. They went on for days losing millions until they figured out what was really happening. The Continuous C solves for this type of problem. Take those same tests you would automate at the orchestration and solution levels and run them on a continuous basis at those points of integration, pre-production. You’ll get immediate notice of this type of issue along with much clearer identification of the root cause.



Oracle buys BEA - Nice time to get a check-up

Wed, 16 Jan 2008 21:41:00 +0000

The biggest news in software may have already hit this year, but it's not like we didn't hear it coming for the past few months. Oracle finally makes good on its intent to buy BEA, at $8.5B: http://online.wsj.com/article/SB120048691486294361.html To us, this is good news -- like when SoftwareAG and webMethods merged, it combines a strong integration firm with a global presence and a technology leader in the SOA space. It's also a good move for Larry. Aside from increasing their customer base, it really moves Oracle more into the SOA mainstream - almost every customer we know has some Oracle apps, databases and integration inside their enterprise, but adding the leading AquaLogic ESB and Governance aspects of BEA will make a huge difference in advancing Oracle FUSION as an SOA platform. Oracle getting JRockit from BEA also bodes well for performance, as now they can optimize that for the Oracle application stack. For BEA customers, at least the anticipation is over, and they can continue to move forward with their plans with this reality in mind. I'm quite sure the best of BEA's stack will live on in the FUSION platform. Plus, for quality and reuse, consolidation of SOAP stacks is a good thing. Both BEA and Oracle were operating their own SOAP stacks, and combining those particular efforts will create better interoperability for a whole lot of customers. Is this too much consolidation? There are still several choices for how you provision SOA - you can go TIBCO, SoftwareAG/webMethods, IBM or HP, now Oracle/BEA, maybe Microsoft, or even open source, and there are best-of-breed tools out there that still perform unique functions in the SOA ecosystem. See, SOA is still something you DO, not something you buy. The business requirements should still lead the technology decision in every case -- and that doesn't usually jibe with the "one vendor fits all" strategy. Where's the risk to BEA and Oracle customers?
As a vendor charged with testing and validating platforms from both BEA and Oracle, we see this creating many new opportunities to help companies integrate and ensure business continuity as the story of consolidation continues to evolve. We recommend that, whatever your preferred tools are, you follow the best practice of creating a set of baseline component and system-wide tests of your "as-is" environment, so that as these two platforms are merged and upgraded, you will be able to test and maintain Business Continuity throughout the process. This is equally important if you decide to move to another vendor. While we do believe that the combined new company will certainly make a commitment to customers, if you are in IT there is no shortcut to quality. Your best strategy is to establish a line of defense against unforeseen consequences -- with a thorough, and thoroughly managed, suite of tests.



European SOA: Ahead of the Curve?

Mon, 14 Jan 2008 20:19:00 +0000

I'm in Germany right now, looking forward to a session at ZapThink's Practical SOA event in Frankfurt tomorrow. I'll be joining some peers from companies like T-Mobile, Swisscom, Novartis and SwissLife who are going to be sharing their own SOA success stories. Why does SOA seem to be moving forward a little faster in Europe than in North America? We've posed these kinds of questions in our surveys and forums, and often it seems that stateside, the term "SOA" can polarize some IT teams - it's an "either/or" decision at the architectural level. Talking to our EMEA director, Wilfred, he says part of the reason adoption seems high in Europe is that SOA is often driven by developers working in smaller teams. Service-orientation can be started on a much smaller scale, and tested pragmatically before rollout to the larger organization. So, SOA there doesn't need to be an enterprise-level initiative in all cases; it can be something the company dabbles in before making a full commitment. The upside of this pragmatic approach is that the SOA project can leverage some small wins to pay for a bigger commitment. The downside? Companies still need to beware of publishing services without a higher level of architectural coordination between teams. This isn't to say that there aren't also companies in Europe starting from a strong enterprise architecture perspective and defining a top-down approach to SOA. I look forward to meeting both the pragmatists and the strategists this week! John



Feeling Crushed under the SOA Data requirement?

Thu, 27 Dec 2007 19:38:00 +0000

One of the most difficult obstacles to attaining enterprise-ready SOA is the sheer scale of the systems and data that need to be managed. To test the actual results of an SOA application, we need a very realistic set of data – both positive and negative – to input, and then get out of the environment under test. True, we can map much of our interaction with other Services according to the metadata we set forth during architecture and design processes. But when you get past that ideal model of connecting the endpoints, you still have the nitty-gritty of a CRM mainframe, or an SAP or Oracle Financials enterprise system, and the administrative owners of that system, to contend with. The data and business logic embedded at these layers have been added to and customized over the course of several years. So why can't we just have developers and testers work against the live SOA system? Well, those system administrators might be reluctant to provide access to key business systems in deployment. Beyond that challenge, getting a bed of realistic test data in place can be more than difficult – and hardware virtualization doesn’t scale to replicate the terabytes of data such an implementation requires. Implementing a complete mirror-image copy of the system to test requires another enterprise license and implementation team – far too costly in scope. In addition, managing SOA data in order to do successful service development, integration and testing can truly be a moving target. It's tough to maintain the context of an actual user moving through the system without actually having access to every implemented layer. The best practice for overcoming the data crunch isn't by any means an easy road, but it has to be traveled. It still starts with good architecture - mapping out realistic business workflows, and the metadata relationships that define them. Next, we need to capture as much of the data as we need to provide a realistic test environment for SOA.
There isn't a way to replicate all of it, but we need to obtain enough to encompass most of the workflows we've defined. Virtualization of test beds, and the behaviors of apps as Virtual Services, can help you get to the point of reaching the 80/20 rule for the data you need most often. Finally, we need a strong SOA Governance approach to staging, promoting and deploying the application, which includes continuous testing and validation of the expected behaviors, and underlying data. No amount of simulation in development can account for all of the unforeseen consequences of changes in the deployed system. Part of our approach is to automate that process of data capture and modeling with Virtual Services. By monitoring a given Service and all of the live and test traffic that is going into, and out of, it with LISA, you can get a pretty rich data set. Often people tell me that it must represent only a scenario where the data is not very complex. Not so. Though admittedly, the more complex your data set is, the more elaboration and checking you need to do on that data set. And what if there are data errors within your Virtual Services? Well, in a sense that is what you are looking to uncover, right? However, it is important to note that there is no shortcut for Continuous Testing & Validation. When you move into staging and deployment, you need to move from that virtual data model to actually doing continuous integration testing. The point is to get 80% of the testing you need done at those early stages accomplished by using a Virtual Service data model; then you can have a more dedicated testing effort with less conflict against that live data in deployment. Obviously, there is much more to this process than I can cover in a post, but I hope you will seek out leaders and solution providers that can hel[...]
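The record-and-replay idea behind that data capture can be sketched in a few lines: wrap a live lookup, capture each request/response pair that flows through it, then answer from the captures with the live system detached. The recorder, the backend stand-in, and the customer IDs below are all illustrative, not a description of how LISA is implemented.

```python
# A sketch of building a Virtual Service data model from observed traffic.

def live_lookup(customer_id):
    """Stand-in for a real mainframe/CRM call."""
    return {"customer_id": customer_id, "status": "active"}

class Recorder:
    def __init__(self, backend):
        self.backend = backend
        self.captures = {}

    def call(self, customer_id):
        """Pass through to the live system, learning the data model."""
        response = self.backend(customer_id)
        self.captures[customer_id] = response
        return response

    def replay(self, customer_id):
        """Answer from captures -- no live system needed."""
        return self.captures[customer_id]

recorder = Recorder(live_lookup)
recorder.call("C-1001")            # observed live/test traffic
offline = recorder.replay("C-1001")  # later, with the backend detached
```

The richer the traffic you observe, the closer the captured model gets to the 80% of cases you need most often; the remaining 20% still needs testing against the live system.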



IT Predictions for 2008 -- Are Some Too Late?

Fri, 14 Dec 2007 20:05:00 +0000

Yes, there are many pundits proffering predictions on the coming year, but these are ours. In fielding these over lunch at iTKO, we couldn't help but notice a few of them would have been last year's predictions as well... but, even in IT, not all change happens as fast as you might have hoped or feared.
1. SOA and Virtualization will come together at last. In 2007, Virtualization really came of age for the enterprise, thanks to the immediate hardware and configuration costs it saves. The two have traditionally been considered separate disciplines, as it seems impossible to Virtualize a bunch of distributed Services and mainframe apps that don't live in a centralized data center. However, there will be new ways to start Virtualizing these components and business logic in 2008.
2. BPM and BAM will continue to become more intertwined, and BPM will finally emerge as the linchpin behind enterprise SOA applications, as compliance to standards and integration issues start to be answered for leading enterprises.
3. Believe it or not, the US Government will emerge as a leader in SOA innovation. Many American firms will lag far behind Uncle Sam on the adoption curve come next New Year. However, on the business side, Europe will lead the enterprise charge for SOA in 2008.
4. Offshoring will continue to increase to meet a growing technical labor and talent crunch, but only the most savvy and collaborative SI firms will succeed and grow to meet this demand. The age of sending work to a "body shop" will fade as businesses realize that without strategy and quality, outsourcing doesn't generate much net benefit. (A holdover from 2007 that is still happening.)
5. Applications as a service, such as SalesForce Apex and Force.com, will emerge as a truly viable alternative for enterprise applications.
6. Many enterprises will stall on SOA efforts, until they give up on the strictly "top-down" architectural approach. (How about this year?)
7. Integration platforms will continue to consolidate, but their customers will have even more heterogeneous systems to manage, as they will be merging and consolidating disparate IT infrastructures as well. (Here's another one from last year, but it's still going on.)
8. Services will start to go mobile as "micro-services" residing on thousands of devices, with light status-based updates feeding into, and out of, a centralized architecture. (Not quite sure what Jason meant here...)
9. RIAs and SaaS will drive further innovation in how multiple enterprises collaborate at a semantic level across heterogeneous systems.
10. NetWeaver-based SOA infrastructures will start to become mainstream in a certain segment of the market.
Agree or disagree? We'd love to hear your thoughts and we'll see how many "I told you sos" we can collect out of 10 come next December!



Service-Oriented Virtualization: Encore Webinar Dec. 13

Fri, 07 Dec 2007 20:55:00 +0000

Yesterday's webinar with analyst Theresa Lanowitz and our own John Michelsen on "Service-Oriented Virtualization" was well attended - so much so, in fact, that our little event room reached capacity! So we are going to do one more live webinar on this same topic this coming Thursday, Dec. 13, at the same time (11:00 AM EST) -- and this time there will be plenty of virtual room! There is a signup here. There were two recurring themes of the hour covered by Theresa and John:
  • one, that NOT doing Virtualization at the data center and test bed layer simply doesn't make economic sense; and
  • two, that SOA without Virtualization is a very hard path to take indeed, due largely to the difficulty of providing access to shared resources, which will constrain distributed development and test teams as they try to collaborate.
The questions were also great, ranging from "So is all the hype about Virtualization real?" to "How can I possibly virtualize complex data objects in my environment?" So if you missed yesterday's webinar (or missed part of it), join us and invite a peer Thursday for the session. Also, check out Theresa's blog on this over at voke. - Jason



Webinar on Service Oriented Virtualization

Fri, 30 Nov 2007 15:38:00 +0000

We have been doing so much with Virtualization in this blog that I have wondered if you guys are getting tired of it, yet almost every comment I get in person, and most of the press requests, are around the virtualization concepts for SOA. And you know we've recently produced some new thinking on how to leverage all this with our Service Oriented Virtualization white paper, and released major product enhancements to take advantage of these ideas, called the LISA VSE (Virtual Services Environment). To help put this in context, we are producing a webinar with Theresa Lanowitz. Theresa has been a leading thinker in the ALM space for years, and sees these same intersections between virtualization and SOA as tremendous value propositions. You'll want to hear what Theresa has to say about virtualization, and her thoughts on ALM 2.0. Theresa and I are providing a webinar on SOV next Thursday. You can sign up here.



Service Oriented Virtualization White Paper

Mon, 19 Nov 2007 05:27:00 +0000

Well, you’ve seen quite a bit in this blog already about virtualization concepts applied to SOA. Scroll down if you haven’t :) It’s a rare opportunity when cool technology makes perfect sense as an enabler of an initiative with tremendous business value. Until now, virtualization has largely lived in the data center, where it has indeed saved significant hardware and configuration costs for a given set of servers. But this value has yet to extend to SOA, which is by nature much harder to replicate in this fashion. When you have so many distributed and heterogeneous technologies that are often not available for testing and development purposes, something needs to change to bring back the agility we expected from SOA. We at iTKO are really excited about our contribution to leveraging virtualization in SOA. We call it Service Oriented Virtualization, or SOV. We do so because we are identifying challenges encountered by so many living in the SOA world, and articulating a solution to those challenges by applying virtualization. Please take a look at the SOV white paper. As always, I invite you to comment on the paper here. I would especially like to know how much you identify with the issues we raise, and also how much you think our solution will help.



Search SOA Articles

Thu, 08 Nov 2007 22:57:00 +0000

Rich Seeley, news writer for the Search* TechTarget sites like Search SOA and Search Software Quality, has been threatening his otherwise excellent track record by publishing interviews with me lately ;) So as not to repeat myself here, you might want to check out the following: SOA and REST, the interoperability conundrum (a SOA vs. REST/WOA debate of sorts) http://searchsoa.techtarget.com/qna/0,289202,sid26_gci1281171,00.html and A middle way to SOA governance (a discussion on right-sizing SOA governance) http://searchsoa.techtarget.com/qna/0,289202,sid26_gci1281297,00.html I've done a number of interviews with Rich over the past several months. One of the many take-aways I've had is that he seems to be right on the pulse of what's being discussed regarding all things SOA. I highly recommend keeping up with his work.



Business Process Validation

Sat, 27 Oct 2007 04:15:00 +0000

One of the primary goals of SOA is to model applications closer to the way that business processes actually function. One way to define a business process is through a language called BPEL (Business Process Execution Language, or “Bee-Pill”), which lets us create models in business-process terms that represent the actual system behavior in the IT back end. From a validation standpoint, if we’re not careful, we take a simplistic approach of importing these BPEL models as if they were tests of the processes themselves. The problem with this approach is that it ignores the fact that testing is actually a verification of business or technical outcomes, not a defined procedure. What we have to do is cause the business process to happen – think of that as “Step One” of a business-process-validating test. But the remaining steps of the validation may have nothing to do with the business process steps as understood by the process model. Once a business process is in motion, you have to validate the expected technical outcomes of the process. Let’s use an example: suppose placing an order within a SOA-based application occurs by creating an XML document that represents that order and putting it on a message queue – let’s say that is the cause of a particular business process. The responsibility of that process, as modeled in a BPM tool, is to take the XML doc that represents the order, create the inventory transactions, create the shipping and order management transactions, update the CRM system, and then close out the order or set the order status to some processed state. How would we validate the business and technical outcomes of this process? It would start by causing that process to function. Step 1, then, is to create that XML document and drop it on the queue. Great. Then what? Well, this requires an understanding of the outcomes, not the steps of the process.
A few examples may help shed light on this... We might have to check the system of record to confirm the order was placed. We might have to asynchronously listen on another message queue and interrogate the message payload for proper content to ensure the shipping system is properly instructed. We might have to refresh a web browser to prove that the end-to-end process completed in an acceptable amount of time. Alas, very little of that is stored in BPEL :( How, then, can the business ever hope to validate their business process models when the validation sounds like a bunch of bits and bytes? Ah, we have been working on that problem for years and have an elegant solution, if I do say so myself. In another blog I’ll introduce the concepts and resist the urge to make it an iTKO LISA commercial.
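To make the outcome-driven idea concrete, here is a minimal, hypothetical sketch in Python. The queues and the "system of record" are plain stand-ins for the real middleware and database, and `process_order` plays the role the BPM engine would play in production; every name here is illustrative, not from any real product. Step 1 causes the process; the assertions then validate outcomes that the BPEL model itself never describes.

```python
import queue
import xml.etree.ElementTree as ET

# Hypothetical in-memory stand-ins for the order queue, the shipping queue,
# and the system of record; in a real SOA these would be JMS queues and a database.
order_queue = queue.Queue()
shipping_queue = queue.Queue()
system_of_record = {}

def process_order(xml_doc):
    """Toy stand-in for the BPM engine: consume the order, produce the outcomes."""
    order = ET.fromstring(xml_doc)
    order_id = order.get("id")
    system_of_record[order_id] = "PROCESSED"           # update the system of record
    shipping_queue.put(f'<ship order="{order_id}"/>')  # instruct the shipping system

# Step 1: cause the process to happen -- create the XML order, drop it on the queue.
order_queue.put('<order id="1001"><item sku="ABC" qty="2"/></order>')
process_order(order_queue.get())

# Remaining steps: validate the *outcomes*, not the BPEL steps themselves.
assert system_of_record["1001"] == "PROCESSED"           # order reached its final state
ship_msg = ET.fromstring(shipping_queue.get(timeout=1))  # shipping was instructed
assert ship_msg.get("order") == "1001"                   # ...with the right payload
```

The point of the sketch is that the validation logic on the last three lines touches systems and messages, not process-model steps.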



LISA 4 Released, and here's my speech...

Wed, 24 Oct 2007 06:14:00 +0000

Warning: I'm taking a break from my usually pitch-free blogging to talk about our latest product release.
I tongue-in-cheek tell people that my day job is to run R&D for iTKO. It's true, but given that I am writing this blog on a flight that is day one of six weeks of covering the globe to discuss SOA Governance, Testing, and Validation with customers, prospects, conference attendees, and analysts, I think I need a better analogy than 'day job'... Yet despite my inability to see my team face-to-face, they continue to deliver an amazing amount of product enhancements; the latest release, LISA 4.0, is just another example of that. We are always listening to customer feedback and incorporating changes. If you are an iTKO LISA user and have suggested a way we could improve our technology, I would bet most every one of you will be pleased to see your input reflected in this release. And while we are certainly making a significant number of tactical improvements, we have also delivered major improvements in one area and two whole new product capabilities unique to the market. See why I love these guys? Load Testing: There is a product in the load testing area of our space that is quite dominant, and for good reason; it's a great product. In fact, many customers who adopted LISA for all their functional test and validation needs still kept that existing load testing product. We have been working with them to ensure that this is no longer required. Some will still keep it, and that's just great, but many will prefer to take that Collaborative 'C' to heart and perform all their testing from the same product. LISA VSE: We have introduced the LISA Virtualized Service Environment, or VSE. Wow, this is way more than just a new TLA. If you've been reading this blog long, you know I'm all about leveraging virtualization for SOA. I'm working on a white paper with Jason English, to be published in the next few weeks, as this goes beyond what VSE does when you apply it as a business development practice.
In short, it's a dramatic improvement in how teams can better decouple for agility and eliminate the contention on all those shared resources during design/dev/test time. I highly recommend you ask us what we are up to here. And finally, we have produced a technology, which we call Pathfinder, for getting much better visibility into the transactional behavior of business processes. This is one of those see-it-to-believe-it things, so again, check it out. Maybe the best part is that we are so far from done I can't help but want to turn the plane around and start telling the guys what else to get started on... SOA is still an immature technology pattern. Most teams I find are still struggling to get it right. And so much of what the market is still using is no more than repackaged traditional tools. No wonder it's harder than you think it should be ;) Pardon the LISA commercial. You'll see I do very little of that in this blog. Yet I'm convinced that SOA is scary without LISA on the team. For those who can do the math on an ROI, you owe it to yourselves to check out the full-featured LISA 4 SOA Testing & Validation suite.



Survey Results: What does the WS-Testing Community Think?

Tue, 25 Sep 2007 16:49:00 +0000

Almost a year ago, we released the first edition of our free WS-*-only version of LISA, LISA WS-Testing. Now, as we near 10,000 downloads, we wanted to know how teams are progressing in their adoption of Web Services testing as part of the greater SOA quality assurance process. We had hundreds of responses, split almost evenly between people in the business of making software and users working in other vertical industries, and the group provided lots of great feedback and comments as well. Thanks to everyone who participated in the survey. Some of the industry high points of the results:
- Collaboration on Quality Outside QA - More than 60% of the users were not QA practitioners, showing that architects and developers are increasingly getting involved in testing.
- Test Your Own Services - A full 75% of LISA WS-Testing users say they are testing their own services. That is a significant increase over last year's estimate of just 40%, showing that almost all users of WS-Testing are at least in the early stages of building some services.
- Java vs. .NET Testers Steady - Same as last year, almost half (44%) of the testers are testing .NET services, while almost all are also seeking to test services on one or more Java services/integration platforms (BEA WebLogic, IBM WebSphere and JBoss are still tied for the most users). Since most respondents selected more than one platform, we can see that technology heterogeneity is here to stay.
- Load & Integration Testing is Key - More than 90% of the participants rated load testing and integration testing as important features they would look for beyond functional and unit testing of Web Services. Web UI testing was also rated very high, at 80%.
How are we doing? We also asked the WS-Testing users if they were aware of some of the free educational and support opportunities from iTKO, and of the deeper features of the full LISA edition, and the results are encouraging:
- Forum participation - More than 85% of active users have visited the iTKO Forums for support questions. Thanks for asking (and helping your fellow peers), as the community of all LISA users builds out a knowledge base of tips and tricks for services quality. Visit the forums here (registration required): http://www.itko.com/forums
- How can iTKO help? - A majority of users asked for more training and getting-started resources from iTKO. Stay tuned for more beginner and advanced testing training materials and videos.
- World beyond SOAP testing? - 75% of the WS-Testing edition participants were aware that the full version of LISA SOA Testing supports Web UIs, EJB, databases, ESB messaging, load testing and more.
- Useful enough? - More than 90% of LISA WS-Testing users would recommend the tool to a friend or colleague, and more than 80% said they were satisfied with the tool. We guess the 10% who would still recommend it but aren't satisfied think the price is right... :)
Thanks to everyone for your input and comments - it goes directly to our product team! And the winner is... Congrats to Larry from Southborough and Deepti from Stamford, the respective winners in the prize drawing for the iPod Nano and the Amazon gift cards. [...]



New SOA Testing Survey from Aberdeen Group

Tue, 04 Sep 2007 14:02:00 +0000

We've been looking for research that benchmarks the maturity of SOA testing efforts in isolation from SOA Governance as a whole. So when I heard Perry Donham was doing this over at Aberdeen, we wanted to participate in underwriting the survey and to ask our users to contribute. A free copy of the report is available here: "SOA and Web Services Testing: How Different Can It Be?" Of particular interest was the reduced emphasis on component-level testing: from 63% (average) to 81% (best in class) of firms find complete lifecycle testing to be a key to success - and these firms are overwhelmingly measuring quality by lifecycle metrics (time to release functionality, process quality) instead of counting the number of bugs per KLOC. One area of concern was the level of SOA training or expertise within organizations -- averaging 35% right now. A shortage of good architectural talent is certainly evident, but fortunately a majority of the responding companies plan to address that over the coming year. Anyway, it's fresh research - grab your own copy and join the discussion. Thanks to all of the LISA enterprise and WS-Testing users who took the time to participate in the sample group for Aberdeen. - Jason



Virtualization and SOA – Part 4: Virtual Services

Mon, 20 Aug 2007 17:19:00 +0000

Last but not least of the three types of virtualization you might want to consider for SOA is Virtualized Services. I don’t mean virtualizing access to services, as I covered in a previous post; I mean simulating a service’s existence without actually creating it. Now, how can that be important? Virtualized Services are especially important to achieving the dream of agile SOA testing: shorter, iterative, requirement-driven test cycles, with testing happening every step of the way. Why? Because if you want to test earlier, you will need to test incomplete components or “in-progress” integrations. SOA applications are particularly prone to change, so if you have to wait for a finished app to test, that wait becomes a bottleneck to agility. For instance, our SOA testing product incorporates this approach. LISA has the ability to analyze a WSDL and generate the actual service endpoint by itself. What this means is that before your development group has even shown up to build the actual service, our product virtualizes the service and creates a simulated version of the real thing (which doesn’t exist yet) – such that your consumers can invoke this service as if it already existed. Now, in a simplistic world this can be done in a variety of ways with mock objects, or with other methods that provide a specific hard-coded response whenever they are stimulated. But in reality you are going to need a fairly dynamic service, even in a simulation. You can’t really spit back the same silly response to every single request, because your consumers are going to need a lot more richness and variability in the way that the service responds. So, the ideal approach to SOA testing allows you not only to virtualize the service but to make the actual behavior of the service dynamic, i.e., reading from database tables or spreadsheets for values, or doing look-ups based on input requests.
While it sounds like you are actually building the service (which in some ways you are), in reality you are providing just enough very-high-productivity logic within the virtualized service that the consumers get what they need without having to wait for the actual service to be completely developed and tested. In fact, there’s a great “test first” model here: developers of the actual service will know they’re done when tests written against the virtualized service pass against the actual service. The consumers are confident that they will get what they need from the actual service when the virtualized service has been sufficiently executed against. And with that virtualized service testing out of the way, you can now point the actual service consumer to the actual service producer, and therefore likely have a much more successful integration from the very first touch. I hope this series on virtualization in SOA has been valuable, and feel free to ask me for clarification or more commentary on this or any other topic.
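As a rough illustration of the difference between a hard-coded mock and a dynamic virtual service, here is a small Python sketch. All names and data are hypothetical and not drawn from any real product: responses are looked up from a data table keyed on the request, the way a virtual service might read from database tables or spreadsheets, rather than returning one canned reply.

```python
# Hypothetical response table, standing in for rows loaded from a
# spreadsheet or database of recorded service behavior.
response_table = {
    "ACCT-100": {"status": "active", "balance": "250.00"},
    "ACCT-200": {"status": "closed", "balance": "0.00"},
}

def virtual_account_service(request):
    """Simulates a service that doesn't exist yet: the response varies
    with the input, instead of being the same silly reply every time."""
    account_id = request["accountId"]
    record = response_table.get(account_id)
    if record is None:
        # Simulate the fault a real service would raise for bad input.
        return {"fault": f"Unknown account {account_id}"}
    return {"accountId": account_id, **record}

# Consumers can code and test against the virtual service today, long
# before the real implementation exists.
assert virtual_account_service({"accountId": "ACCT-100"})["status"] == "active"
assert "fault" in virtual_account_service({"accountId": "ACCT-999"})
```

Because the stub is data-driven, enriching its behavior is a matter of adding rows, not rewriting mock code, which is what makes the "test first against the virtual service" workflow practical.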



Virtualization Part 3 of 4: Virtual Endpoints

Wed, 15 Aug 2007 20:45:00 +0000

I’ve been blogging on the different ways virtualization, as a concept, is very helpful in the SOA world. The second such area is virtualized access to services, called a Virtual Endpoint. One of our SOA partners is Infravio, which is now part of webMethods/Software AG. As an example, these guys provide the ability to virtualize access to a service, so that you create a looser coupling between your consumers and the service providers themselves. There are a number of upsides to this approach. The first is that no one wants hard-coded URLs from machine to machine within the client side of their applications. There is a lot of brittleness in such a tightly coupled system. As a service producer, I may need to start scaling up to multiple hardware devices or clusters, or I may need to create geographically distributed servers to support the growing consumption of my service over time. So, if my consumers become hard-wired to the actual location of my services, I create an inability to scale and adapt my service over time. Virtualizing access to a service through an intermediary like Infravio gives me the ability to tell the world to do a UDDI lookup for my services, so that instead of mapping to the actual service endpoints, the consumer maps to the intermediary. The intermediary can then use runtime policy or infrastructure availability rules to determine which of the potential endpoints actually receives the request. In doing this, I’m able to manage and model how I want my consumers to hit the services that I produce. I don’t have to provide unfettered access to all; I can actually put policy around how, how frequently, and when that access happens. I can even start to distribute it, such that consumers may not even realize that they are accessing the service locally to their geography now versus remotely last week.
As an SOA tester, I want this same level of flexibility in how we invoke and verify the behavioral and performance integrity of these services. By tying into UDDI repositories like Infravio, CentraSite, Systinet, etc., our SOA tests are dynamically pointed to the appropriate virtual service location, ensuring that the test workflow remains supported. This makes tests even more valuable and reusable, as they do not need to be re-wired for new locations as the systems are upgraded. In addition, the service provider can specify a “test channel” or test version of their services, so a test can identify itself as such to the registry and be granted access to that specialized channel, in cases where simulated transactions may not cause the same effects as live transactions. So, virtual endpoints are a great idea as you think about the scalability and loose coupling we want in a SOA.
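The registry-lookup idea can be sketched in a few lines of Python. The registry, service names, channels, and URLs below are all hypothetical stand-ins for a real UDDI repository: consumers resolve a logical name at runtime instead of hard-coding a URL, and a test can ask for a dedicated test channel of the same service.

```python
# Hypothetical registry mapping logical service names to current endpoints;
# in practice this would be a UDDI lookup against Infravio, CentraSite, etc.
registry = {
    "OrderService": "http://us-east.example.com/orders/v2",
    "OrderService#test": "http://qa.example.com/orders/v2",
}

def resolve(service_name, channel="live"):
    """Resolve a logical name to its current endpoint. Tests can request a
    'test channel' version; fall back to the live endpoint if none exists."""
    key = service_name if channel == "live" else f"{service_name}#{channel}"
    return registry.get(key) or registry[service_name]

# Consumers see only the logical name; the producer can re-home the
# service (new cluster, new geography) without breaking anyone.
assert resolve("OrderService") == "http://us-east.example.com/orders/v2"
assert resolve("OrderService", channel="test") == "http://qa.example.com/orders/v2"
```

The payoff for testing is on the last line: a test identifies itself via the channel argument and is routed to the specialized endpoint, so tests survive endpoint moves without re-wiring.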



ZapThink whitepaper: "SOA Quality Across the Service Lifecycle"

Wed, 15 Aug 2007 03:41:00 +0000

The SOA analyst firm ZapThink just published a Magna Carta on SOA Testing on their site today -- certainly worth a read. It goes from the nascent aspects of service testing into the concept of developing more agile overall SOA management. Quality drives the ability to rely on the system as it changes and pulls in more heterogeneous complexity. View the paper here: "SOA Quality Across the Service Lifecycle" http://www.zapthink.com/report.html?id=WP-0158 They focus more on how to properly manage SOA Testing from a business process perspective than from a technology or tool perspective - but as we always say, SOA is not something you buy, it is something you do.



Virtualization and SOA: Part 2 – Hardware Virtualization

Thu, 09 Aug 2007 05:04:00 +0000

I mentioned in a previous article that there are three kinds of virtualization a SOA practitioner will want to take a look at. The first of these is Hardware Virtualization. We certainly like this technology, especially in a SOA, as it can help us with deployment efficiencies and versioning of services. The more we can isolate the configurations of all the individual services running, the better. It would be ideal if I could share one server-class machine among, let's say, 10 different services. But if I tried to put them all on one operating system, I would inherently have configuration challenges due to the co-location of the services on one box. This creates challenges when different teams want the optimal environment for their service implementations. In fact, there are times when their requirements are just downright mutually exclusive. That’s where virtualization of hardware comes in. I still get that one server box, but within it I set up 10 different operating system installs on one physical infrastructure, and I am able to independently manage all of those different machines, each having its own configuration. Let’s say that one service provider is running on Windows 2003 and another is running on Linux. I can run both operating systems on one physical device, without having to figure out how to get my teams to give up their war over operating systems. This is a perfect example of how we can get lots of leverage from virtualization of the hardware. There’s another cool capability of hardware virtualization when I deploy a new version of a service. In a typical deployment of service updates:
- I have to change the code
- I sometimes have to change the configuration
- I might have to run database updates
and these activities may take my service down for what could be a long period of time. Clearly, all of the above might be a problem for my consumers. I can’t always schedule my outage times based on when all of my end users are okay with that.
I may have global consumers – or not even know all my consumers. How could I possibly make updates given that there’s no particular safe time for that change? Virtualization allows me to create a new virtual environment for the new version of that service, configure it appropriately, and flip a switch: bringing down, almost instantaneously, the virtual machine that ran “version 1” of the service, and immediately bringing up “version 2”. Furthermore, what if this “version 2” deployment just blew up on me? Rather than trying to figure out how to uninstall the new version, un-configure the system changes, and roll back my database changes, I can immediately revert to the previous version of the service if a deployment challenge arises. Obviously, as an SOA testing provider, we find customers are able to move forward with a lot more confidence by validating their deployments using automated testing in conjunction with virtualized hardware. If the new configuration or build fails the battery of tests in production, they simply roll back to the previous configuration and treat the next configuration as a virtualized test bed for further refinement. Having an “assembly line” of “as-was,” “as-is” and “to-be” versions as virtual configurations goes a long way toward reducing downtime due to rolling out, or rolling back, new functionality based on the success or failure of both conventional acceptance testing and automated testing[...]



An Intro to SOA and Virtualization: Part 1 of 4

Thu, 02 Aug 2007 20:46:00 +0000

I’ve been asked a number of times recently by industry peers and technology journalists about “virtualization” as it relates to SOA. There are in fact at least three distinct ways you can use virtualization concepts in SOA, so I think it would be good for me to define all three, and then over the next few days I will blog on each one of them independently. Rich Seeley recently interviewed me for a SearchWebServices.com article on the first and most often mentioned type of virtualization I’ll introduce – hardware virtualization. This is not a SOA-specific thing. This is when you run many copies of the operating system within one physical hardware device so that those several virtual machines are independent of each other – from a configuration, app server, and operating system point of view – while leveraging one piece of hardware to do it. I’m going to talk about why you would do that in a future post. The second type of virtualization is virtual service endpoints. In a sense, what you’re doing is creating a virtual location for your consumers to access in order to invoke the service, when in fact you’re completely shielded from the actual endpoint of the service itself. So there is more disconnection between the consumer and the producer with that kind of virtual service endpoint. Again, I’ll tell you why you’d do something like that, and what the pros and cons are, in a future post. The third type is virtual services themselves: services that don’t actually exist – you construct them without actually implementing them in a development tool. Again, this type is separate from the other two types I’ve just introduced and has its own reasons to exist. This is a powerful concept for managing the design and change cycles of SOA that I'll discuss soon. So I hope you can get from this at least a basic idea of the three different kinds of virtualization you might be interested in with regard to SOA.
Over the next couple of days we’ll take a look at what these are in a little more detail.



Our CMO on TV -- a SOAWorld Preview

Wed, 13 Jun 2007 14:54:00 +0000

SYS-CON TV producer Roger Strukhoff recently posted this PowerPanel roundtable on SOA Technology. iTKO CMO Jim Mackay was in Times Square to participate in the program - and the talk was a well-informed preview of the discussions we're likely to hear at SOAWorld 2007 in a couple weeks. Watch the video here: http://www.sys-con.tv/read/386861.htm Every vendor represented here emphasizes their own "world view" of SOA. Whether that means better interoperability, a robust platform, tighter security, or our favorite topic, SOA Quality and Validation. (Jim, I won't say "SOA Testing" here...) When it comes down to it, there was really a lot of common ground. Everyone on this panel seemed to agree that a driving goal of SOA Governance is achieving Trust. You can pick your technology approach, but in the end, SOA is a heterogeneous environment, so you need to establish that Trust - both across Federated business divisions and partnerships, and among the SOA technology vendors that establish the environment. If you'd like us to expand on the topics Jim and the panel brought up here in future posts, we'd love to hear your thoughts. - Jason



Practice Makes Less Perfect?

Wed, 09 May 2007 17:15:00 +0000

I’m a professional; likely you are too. We’d like to think we get better at our discipline over time. Gartner’s Allie Young is predicting that this will not be the case. Our practice, like many other professions, must adjust to the changing dynamics of our field of endeavor. For us, the increasing interconnectedness of SOA and SOA-like applications creates challenges that are not new to readers of this blog, but are new to the market as a whole. Due to these new dynamics, Ms. Young’s predictions include the following:
- Overall application downtime is expected to increase significantly, and that increase is expected to come from application errors (as opposed to, for example, infrastructure issues).
- Through 2011, 75% of all IT organizations will have quality problems, predominantly due to silo-based suboptimization.
Here’s one time when I think it would be a good idea to let some professional pride get in the way of the status quo. My hope is that we can help so many customers achieve what Ms. Young called “holistic, integrated quality management” that the numbers actually improve over time. Call me a nut, but that’s why I spend so much of my time on the road with practitioners and in conduits like this blog. I am convinced, and we have more data to show every day, that adopting a set of best practices around our SOA Quality Three C’s, and taking a practical look at SOA Governance, will allow you to become one of those who Ms. Young says will actually distinguish themselves as top-tier with their process, quality, and service improvement capabilities. See, she left us an out. We can be the leaders and still keep her honest ;)