Sound advice - blog
http://soundadvice.id.au/blog/index.rss




Tales from the homeworld



 



WS-REST 2012 Call for Papers

Tue, 06 Dec 2011 01:50:00 +1000

The Third International Workshop on RESTful Design (WS-REST 2012) aims to provide a forum for discussion and dissemination of research on the emerging resource-oriented style of Web service design.

Background

Over the past years, several discussions between advocates of the two major architectural styles for designing and implementing Web services (the RPC/ESB-oriented approach and the resource-oriented approach) have been held mainly outside of the traditional research and academic community. Mailing lists, forums, and developer communities have seen long and fascinating debates around the assumptions, strengths, and weaknesses of these two approaches. The RESTful approach to Web services has also received a significant amount of attention from industry, as indicated by the numerous technical books being published on the topic.

This third edition of WS-REST, co-located with the WWW2012 conference, aims to provide an academic forum for discussing current emerging research topics centered around the application of REST, as well as advanced application scenarios for building large-scale distributed systems. In addition to presentations on novel applications of RESTful Web services technologies, the workshop program will also include discussions on the limits of the applicability of the REST architectural style, as well as recent advances in research that aim to tackle new problems that may require extending the basic REST architectural style.
The organizers are seeking novel and original, high quality paper submissions on research contributions focusing on the following topics:

  • Applications of the REST architectural style to novel domains
  • Design Patterns and Anti-Patterns for RESTful services
  • RESTful service composition
  • Testing RESTful services (methods and frameworks)
  • Inverted REST (REST for push events)
  • Integration of Pub/Sub with REST
  • Performance and QoS Evaluations of RESTful services
  • REST compliant transaction models
  • Mashups
  • Frameworks and toolkits for RESTful service implementation
  • Frameworks and toolkits for RESTful service consumption
  • Modeling RESTful services
  • Resource Design and Granularity
  • Evolution of RESTful services
  • Versioning and Extension of REST APIs
  • HTTP extensions and replacements
  • REST compliant protocols beyond HTTP
  • Multi-Protocol REST (REST architectures across protocols)

All workshop papers are peer-reviewed, and accepted papers will be published as part of the ACM Digital Library. Two kinds of contributions are sought: short position papers (not to exceed 4 pages in ACM style format) describing particular challenges or experiences relevant to the scope of the workshop, and full research papers (not to exceed 8 pages in ACM style format) describing novel solutions to relevant problems. Technology demonstrations are particularly welcome, and we encourage authors to focus on lessons learned rather than describing an implementation. Original papers, not undergoing review elsewhere, must be submitted electronically in PDF format. Templates are available here.

Easychair page: http://www.easychair.org/conferences/?conf=wsrest2012

Important Dates

  • Abstract Submission: 3 February 2012
  • Paper Submission: 10 February 2012
  • Notification of Acceptance: 8 March 2012
  • WS-REST 2012 Workshop: 16 April 2012

Program Committee Chairs

Cesare Pautasso, Faculty of Informatics, USI Lugano, Switzerland; Erik Wilde, EMC, USA; Rosa Alarcon, Computer Science Department, Pontificia Universidad de Chile, Chile

Program Committee

Jan Algermissen, Nord Software Consulting, Germany; Subbu Allamaraju, Yahoo Inc., USA; Mike Amundsen, USA; Bill Burke, Red Hat, USA; Benjamin Carlyle, Australia; Stuart Charlton, Elastra, USA; Duncan Cragg, Thoughtworks, UK; Cornelia Davis, EMC, USA; Joe Gregorio, Google, USA; Michael Hausenblas, DERI, Ireland; Rohit Khare, 4K Associates, USA; Yves Lafon, W3C, USA; Frank Leymann, University of Stuttgart, Germany; Alexandros Marinos, Rulemotion, UK; Ian Robinson,[...]



Best Practices for HTTP API evolvability

Tue, 06 Dec 2011 01:42:00 +1000

REST is the architectural style of the Web, and closely related to REST is the concept of a HTTP API. A HTTP API is a programmer-oriented interface to a specific service, and is known by other names such as a RESTful service contract, resource-oriented architecture, or a URI Space. I say closely related because most HTTP APIs do not comply with the uniform interface constraint in its strictest sense, which would demand that the interface be "standard" - or in practice: consistent enough between different services that clients and services can obtain significant network effects. I won't dwell on this! One thing we know is that these APIs will change, so what can we do at a technical level to deal with these changes as they occur?

The Moving Parts

The main moving parts of a HTTP API are:

  • The generic semantics of methods used in the API, including exceptional conditions and other metadata
  • The generic semantics of media types used in the API, including any and all schema information
  • The set of URIs that make up the API, including the specific semantics of each generic method and generic media type used in the API

These parts move at different rates. The set of methods in use tends to change the least. The standard HTTP GET, PUT, DELETE, and POST are sufficient to perform most patterns of interaction that may be required between clients and servers. The set of media types and associated schema changes at a faster rate. These are less likely to be completely standard, so will often include local jargon that changes at a relatively high rate. The fastest-changing component of the API is the detailed definition of what each method and media type combination will do when invoked on the various URLs that make up the service contract itself.
Types of mismatches

For any particular interaction between client and server, the following combinations are possible:

  • The server and client are both built against a matching version of the API
  • The server is built against a newer version of the API than the client is
  • The client is built against a newer version of the API than the server is

In the first case of a match between the client and server versions, there is no compatibility issue to deal with. The second case is a backwards-compatibility issue, where the new server must continue to work with old clients, at least until all of the old clients that matter are upgraded or retired. Although the first two cases are the most common, the standard nature of methods and media types across multiple services means that the third combination is also possible. The client may be built against the latest version of the API, while an old service or an old server may end up processing the request. This is a forwards-compatibility issue, where the old server has to deal with a message that complies with a future version of the API.

Method Evolution

Adding Methods and Status

The addition of a new method may be needed under the uniform interface constraint to support new types of client/server interactions within the architecture. For HTTP these will likely be any type of interaction that inherently breaks one or more other REST constraints, such as the stateless constraint. However, new methods may be introduced for other reasons, such as to improve the efficiency of an interaction. Adding new methods does not impact backwards-compatibility, because old clients will not invoke the new method. It does impact forwards-compatibility, because new clients will wish to invoke the new method on old servers. Additionally, changes to existing methods such as adding a new HTTP status code for a new exceptional condition can break backwards-compatibility by returning a message an old client does not understand.
Best Practice 1: Services should return 501 Not Implemented if they do not recognise the method name in a request

Best Practice 2: Clients that use a method that may not be understood by all services yet should handle 501 Not Implemented by choosing an alternative way of invoking the operation, or raising an except[...]
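A hedged sketch of what Best Practice 2 might look like in client code. The `invoke` helper, its `send` transport interface, and the fallback scheme are all invented for illustration here; they are not from the post or any particular library:

```python
# Illustrative sketch only: a client invokes a newer HTTP method and
# falls back to an older alternative when the server answers
# 501 Not Implemented. `send` is a stand-in transport returning
# (status, body); real code would use an HTTP client library here.

def invoke(send, method, url, body=None, fallback_method=None):
    """Try `method` on `url`; on 501 Not Implemented, retry with `fallback_method`."""
    status, response = send(method, url, body)
    if status == 501:
        if fallback_method is not None:
            # Old server: choose an alternative way of invoking the operation
            return send(fallback_method, url, body)
        raise NotImplementedError(f"server does not support {method}")
    return status, response
```

With a new server the first attempt succeeds and the fallback path is never taken; with an old server the client degrades gracefully instead of failing outright.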



Systems Design - Requirements as Design

Sat, 24 Sep 2011 23:09:00 +1000

When designing a system as a systems engineer, we are not drawing diagrams for parts or identifying software classes to implement. We are typically operating at a level that includes at least a composition of software and hardware components. Requirements are the problem space for many engineers. From these requirements we synthesise a design that is appropriate for the engineering discipline we work within. For systems engineers involved in producing a design from requirements, the solution space is often remarkably similar to the problem space: It is formalised as a set of requirements specifications.

When beginning to develop a system we go through the following processes: Requirements Analysis, Logical Design, and Physical Design.

Requirements Analysis is the process of establishing a system requirements baseline. This process is worthy of its own discussion and can generally be separated from the system design processes. I include it for completeness because it is something that the systems engineering team for a particular system or subsystem will include in their list of responsibilities. After the team has established the requirements baseline (sometimes known as the Functional Baseline) the real design processes begin. Different systems engineering standards and manuals will draw this differently. I'll lay out what I was taught, what I practice, and what I believe to be the most effective approach:

  • Come up with a physical design of the system (this is referred to in the diagram above as synthesis)
  • Come up with a logical design of the system (this is referred to in the diagram above as functional analysis)
  • Perform trade studies, analyses, and other activities

The nature of the "physical" design of the system depends on what system is being designed and at what level. The objective is to build up a model of "things", and the connections between these things. Generally we talk about the "things" of a system as being subsystems.
Some examples:

  • The subsystems of an aeroplane might include airframe, engines, fuel delivery, navigation, and so on and so forth
  • The subsystems of an enterprise IT system might include various business applications, services, a virtualised networking and virtual machine hosting infrastructure, and again so on and so forth

The identification of subsystems is important, and is not arbitrary. The normal kinds of engineering principles apply, such as reuse, cohesion, coupling, information hiding, etc. The physical design usually cannot be derived purely from systems engineering practice or training. It is necessary to understand the type of system being built, and preferably to have some experience as to how the teams that will be responsible for delivering each subsystem are likely to relate to each other, and how well the physical design facilitates those interactions.

The connections between subsystems are also critical to identify at this juncture. Firstly, they can be an effective measure of how effective the physical design is at minimising coupling and maximising cohesion. Secondly, like the subsystems themselves, these interfaces are not going to come out fully formed from the design work of the systems team. They will need to be refined and designed by the teams responsible for each side of the interface. Identifying them now and determining what they need to achieve will ensure that the design of interfaces (like the design of the subsystems themselves) is linked to actual customer need, avoiding both gold-plating and under-engineering outcomes.

As well as connections between the subsystems, the system will have some interfaces that need to be allocated down to subsystems. These external system interfaces will each either need to be allocated down to a single subsystem to design and implement, or will need to be split into multiple interfaces and allocated down to multiple subsystems.
Once the physical design of subsystems and subsystem interfaces has reached at least a first sketch stage, the logical design wi[...]



An Overview of Systems Engineering

Sat, 24 Sep 2011 21:18:00 +1000

Over the last few years I have made the transition from focusing on software architecture to systems engineering. It's a field that incorporates a number of different roles, processes, and technologies. No, it's not systems administrator with "engineer" patched on the end. This field does not have much in the way of overlap with qualifications such as Microsoft Certified Systems Engineer. To avoid confusion I often talk about being an INCOSE-style systems engineer. The International Council on Systems Engineering is the peak body for this kind of work.

There are two basic ways to look at what a systems engineer does. One is top down while the other is bottom up. The bottom up perspective is roughly

When we build complex systems we quickly reach a level where one small, well-disciplined team is not sufficient to deliver it. A systems engineering team is one that sits above several nuts and bolts delivery teams. Their job is to coordinate between the teams by:

  • Instructing the teams as to what they each individually will need to build
  • Taking input from the teams as to what is feasible, and adjusting the overarching design as needed to deliver the system as a whole
  • Taking product from the individual teams and assembling it into a cohesive, verified whole in line with the design and end user requirements.

The top down perspective is a little more like this

Customers need complex systems built that no one team can deliver. Someone needs to sit as the customer representative ensuring that a customer delivery focus exists at every level of the design. That means,

  • Having someone who can connect low level design decisions to real customer requirements and need
  • Being able to eliminate gold plating in excess of the user need
  • Ensuring that the product at the end really does meet the user need

Systems engineering works across all engineering disciplines to coordinate their activities and to align them to customer needs. It adds a technical chain of command to a large project alongside the project management chain of command that maximises efficiency and minimises risk. While the core focus of project management is on controlling scope and budget, the core focus of technical management is on controlling quality, value, and delivery efficiency. Together project and systems disciplines work to control project risk.

INCOSE defines Systems Engineering as:

an interdisciplinary approach and means to enable the realization of successful systems. It focuses on defining customer needs and required functionality early in the development cycle, documenting requirements, then proceeding with design synthesis and system validation while considering the complete problem:

  • Operations
  • Performance
  • Test
  • Manufacturing
  • Cost & Schedule
  • Training & Support
  • Disposal

Systems Engineering integrates all the disciplines and specialty groups into a team effort forming a structured development process that proceeds from concept to production to operation. Systems Engineering considers both the business and the technical needs of all customers with the goal of providing a quality product that meets the user needs.

Systems engineering is a recursive approach to deliver large projects that meet stakeholder needs.

Benjamin




Scanning data with HTTP

Sun, 22 May 2011 18:28:00 +1000

As part of my series on SCADA and REST I'll talk in this article a little about doing telemetry with REST. There are a couple of approaches, and most SCADA protocols accidentally incorporate at least some elements of REST theory. I'll take a very web-focused approach and talk about how HTTP can be used directly for telemetry purposes.

HTTP is by no means designed for telemetry. Compared to Modbus, DNP, or a variety of other contenders it is bandwidth-hungry and bloated. However, as we move towards higher available bandwidth with Ethernet communications and wider networks that already incorporate HTTP for various other purposes, it becomes something of a contender. It exists. It works. It has seen off every other contender that has come its way. So, why reinvent the wheel?

HTTP actually has some fairly specific benefits when it comes to SCADA and DCS. As I have already mentioned, it works well with commodity network components due to its popularity on the Web and within more confined network environments. In addition to that, it brings with it the benefits that made it the world's favourite protocol:

  • It is a mature standard with little interoperability risk
  • It is text-based, easy to intercept, and easy for a human to debug. I can't tell you how useful that is on the Web, but in SCADA environments that might be deployed for upwards of 15 to 20 years it's a potential godsend. How many protocols around today would you bet money on still being supported by tooling that far into the future?
  • It is able to be passed through multiple layers of middleware such as routers, security devices, inspectors and loggers, policy enforcers, and pretty much anything else you can think of. Importantly, as with many SCADA protocols, it follows a regular enough structure for these devices to make sense of the messages and do the right thing with them. The operation each request is trying to perform is always in the same place in the message. The subject of the operation is always available as the URL of the request. The data being transferred can also be understood and modified as needed by the device.

Not only that, but unlike most SCADA protocols, every firewall on earth knows how to deal with it already. It's a doddle to restrict the type of operation allowed through a particular access point, or the specific addresses that should be allowed. HTTP is straightforward to secure in ways that are compatible with modern network architecture.

So how do we bridge this gap between grabbing web pages from a server to grabbing analogue and digital values from a field device? Well, I'll walk down the naive path first.

Naive HTTP-based telemetry

The simplest way to use HTTP for telemetry is to:

  • Identify each data item you want to scan from the device, and give each one a URL
  • Have the master (aka the client) periodically request each data item
  • Have the slave (aka the server) respond using a media type that both client and server understand

So for example, if I want to scan the state of a circuit breaker from a field device I might issue the following HTTP request:

GET http://rtu20.prc/CB15 HTTP/1.1
Accept: text/plain

The response could be:

HTTP/1.1 200 OK
Content-Type: text/plain

CLOSED

... which in this case we would take to mean circuit breaker 15 is closed. Now this is a solution that has required us to do a fair bit of configuration and modelling within the device itself, but that is often reasonable. An interaction that moves some of that configuration back into the master might be:

GET http://rtu20.prc/0x13,2 HTTP/1.1
Accept: application/xsd-int

The response could be:

HTTP/1.1 200 OK
Content-Type: application/xsd-int

2

This could mean "read 2 bits from protocol address 13 hex", with a response of "bit 0 is low and bit 1 is high", resulting in the same closed status for the breaker. HTTP is not fussy about the exact format of URLs.
Whatever appears in the path component is up to the server, and ends up ac[...]
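The naive telemetry scheme above (one URL per data item, periodic GET of a plain-text value) can be sketched as a small polling loop. This is an assumption-laden illustration: the `get` transport and the item names are invented, and real code would issue actual HTTP requests on a timer:

```python
# Naive HTTP telemetry sketch: poll each configured data item URL and
# collect the returned text/plain values. `get` is a pluggable stand-in
# transport returning (status, body) so the loop can be shown without a
# live RTU; swap in a real HTTP client in practice.

def scan(get, items):
    """One scan cycle: poll every item URL, return {name: value}."""
    readings = {}
    for name, url in items.items():
        status, body = get(url, accept="text/plain")
        # A failed read is recorded as None rather than aborting the scan
        readings[name] = body if status == 200 else None
    return readings
```

A master would call `scan` on each scan interval and compare the result against the previous cycle to detect changes of state.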



The REST Constraints (A SCADA perspective)

Tue, 03 May 2011 00:53:00 +1000

REST is an architectural style that lays down a predefined set of design decisions to achieve desirable properties. Its most substantial application is on the Web, and it is commonly confused with the architecture of the Web. The Web consists of browsers and other clients, web servers, proxies, caches, the Hypertext Transfer Protocol (HTTP), the Hypertext Markup Language (HTML), and a variety of other elements. REST is a foundational set of design decisions that co-evolved with Web architecture, both to explain the Web's success and to guide its ongoing development. Many of the constraints of REST find parallels in the SCADA world. The formal constraints are:

Client-Server

The architecture consists of clients and servers that interact with each other via defined protocol mechanisms. Clients are generally anonymous and drive the communication, while servers have well-known addresses and process each request in an agreed fashion. This constraint is pretty ubiquitous in modern computing and is in no way specific to REST. In service-orientation the terms client and server are usually replaced with "service consumer" and "service provider". In SCADA we often use terms such as "master" and "slave". The client-server constraint allows clients and servers to be upgraded independently over time so long as the contract remains the same, and limits coupling between client and server to the information present in the agreed message exchanges.

Stateless

Servers are stateless between requests. This means that when a client makes a request, the server side is allowed to keep track of that client until it is ready to return a response. Once the response has been returned, the server must be allowed to forget about the client. The point of this constraint is to scale well up to the size of the world wide web, and to improve overall reliability.
Scalability is improved because servers only need to keep track of clients they are currently handling requests for; once they have returned the most recent response they are clean and ready to take on another request from any client. Reliability is improved because the server side only has to be available when requests are being made, and does not need to ensure continuity of client state information from one request to another across restart or failover of the server.

Stateless is a key REST constraint, but it is one that needs to be considered carefully before applying it to any given architecture. In terms of data acquisition it means that every interaction has to be a polling request/response message exchange, as we would see in conventional Modbus telemetry. There would be no means to provide unsolicited notifications of change between periodic scans. The benefits of stateless on the Web are also more limited within an industrial control system environment, where we are more likely to see one concurrent client for a PLC or RTU's information rather than the millions we might expect on the Web. In these settings stateless is often applied in practice for reasons of reliability and scalability. It is much easier to implement stateless communications within a remote terminal unit than it is to support complex stateful interactions.

Cache

The cache constraint is designed to counter some of the negative impact that comes about through the stateless constraint. It requires that the protocol between client and server contain explicit cacheability information, either in the protocol definition or within the request/response messages themselves. It means that multiple clients, or the same polling client, can reuse a previous response generated by the server under some circumstances. The importance of the cache constraint depends on the adherence of an architecture to the stateless constraint.
If clients are being explicitly notified about changes to the status of field equipment then there is little need for caching. The clients will simply accept the upda[...]
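One way the cache constraint can pay off for a polling client is standard HTTP revalidation: the client presents the validator (ETag) from its last response, and the server answers 304 Not Modified when nothing has changed. The sketch below is illustrative only; the `get` transport is a stand-in, though the `If-None-Match` header and 304 status are standard HTTP:

```python
# Sketch of cache revalidation for a polling client. On each poll the
# client sends If-None-Match with the last ETag it saw; a 304 response
# means the cached value can be reused without transferring the body.

class CachingPoller:
    def __init__(self, get):
        self.get = get      # stand-in transport: (url, headers) -> (status, etag, body)
        self.etag = None
        self.cached = None

    def poll(self, url):
        headers = {"If-None-Match": self.etag} if self.etag else {}
        status, etag, body = self.get(url, headers)
        if status == 304:   # value unchanged since the last scan: reuse cache
            return self.cached
        self.etag, self.cached = etag, body
        return body
```

For slowly-changing field values this trades a full response body for a short 304 on most scan cycles.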



The REST Uniform Contract

Sat, 23 Apr 2011 16:10:00 +1000

One of the key design decisions of REST is the use of a uniform contract. Coming from a SCADA background it is hard to imagine a world without a uniform contract. A uniform contract is a common protocol for accessing a variety of devices, software services, or other places where I/O, logic, or data storage happens. The whole point of SCADA is acquiring data from diverse sources and sending commands and information to the same without having to build custom protocol converters for each individual one. Surprisingly, this is a blind spot in most software engineering. It's a maturity hole that normally requires every service consumer to implement specific code to talk to each service in the architecture.

SOAP and WSDL are built on this style of software architecture, where every service in the system has a unique protocol to access the capabilities of the service. There is no common protocol mechanism to go and fetch information. There is no common mechanism to store information. What commonality exists between the protocols of different services exists at a lower level. Services define a variety of read and write operations. SOAP ensures these custom operation names are encoded into XML in a consistent way that can be encapsulated for transport across a variety of network architectures, and WSDL ensures there is a common way for the service to embed this protocol information into the integrated development environments for service consumers, as well as into the service consumers themselves. The contract mechanism simplifies the task of processing messages sent between service and consumer, but still couples service and consumer together at the network level and at the software code level, so that each consumer can only work with the one service that implements the contract. OPC-UA and OPC are built on SOAP and COM, respectively.
SOAP and COM both share this low level of protocol abstraction, and both OPC and OPC-UA compensate for it by defining a service contract that not only one service implements, but that every OPC DA Server or related server needs to implement in order for consumers to be able to communicate with them without custom per-service message processing logic and without a custom per-service protocol. For this reason they are a good case study to contrast the features of SOAP and HTTP for industrial control purposes.

HTTP is the current standard protocol for one aspect of the REST uniform contract. In fact, there are two other key aspects. A REST uniform contract is a triangle of:

  • A standard syntax for resource identifiers
  • A standard protocol ("methods") for accessing resources
  • A standard set of types ("media types") that can be transferred

As all SCADA systems use some form of uniform contract, it is useful to understand the key design feature of a REST uniform contract compared to a conventional SCADA contract. In a conventional bandwidth-conservative SCADA protocol it is common to define fetch, store, and control operations that are each able to handle a defined set of types. These types might include a range of integer and floating point values, bit-fields, and other types. As I look back over the protocols I have used over my career, I consider that some of the protocol churn we have seen over time has been because of the range of types available. Each time we need a new type we either have to change the protocol, start using a different protocol, or start to tunnel binary data through our existing protocol in ways that are custom or special to the particular interaction needed.

REST takes a different approach, where the protocol "methods" are decoupled from the set of media types. This adds a little protocol overhead: we need to insert an identifier for the media type along with every message we send, and a long one at that.
Examples of media type identifiers on the Web include text/plain, text/html, image/jpeg, image/svg+xml[...]
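The decoupling of methods from media types can be illustrated with a small decoder: the client picks a parser by the media type identifier carried with each message, not by which service or URL the data came from. The parser table below is invented for illustration (application/xsd-int echoes the telemetry example elsewhere on this blog):

```python
# Illustrative media type dispatch: new types can be added to the table
# over time without touching the uniform "methods" side of the contract.
import json

PARSERS = {
    "text/plain": lambda body: body.strip(),
    "application/xsd-int": lambda body: int(body),
    "application/json": json.loads,
}

def decode(content_type, body):
    """Decode a message body according to its media type identifier."""
    parser = PARSERS.get(content_type)
    if parser is None:
        raise ValueError(f"unsupported media type: {content_type}")
    return parser(body)
```

The same `decode` function serves every service in the architecture, which is exactly the kind of generic consumer-side logic a uniform contract makes possible.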



Industrial REST

Wed, 13 Apr 2011 22:03:00 +1000

REST is the foundation of the Web, and is increasingly relevant to enterprise settings. I hail from a somewhat different context of industrial control systems. I have been thinking of putting together a series of articles about REST within this kind of setting to share a bit of experience and to contrast various approaches.

REST is an architectural style that lays down a set of design constraints that are intended to create various desirable properties in an architecture. It is geared towards the architecture of the Web, but has many other applications. REST makes an excellent starting point for the development of .

SCADA systems are usually built around SCADA protocols such as , , or . Exactly what protocol is used will depend on a variety of factors such as the particular control industry we happen to be working in, preferences of particular customers, and the existing installed base.

The SCADA protocol plays the same role in a SCADA system as HTTP plays on the Web. It is pitched at about the same level, and has many similar properties. If we are to reimagine the SCADA system as a REST-compliant architecture then the SCADA protocol would be the application protocol we would have in use.

SCADA protocols have been developed over a long period of time to be typically very bandwidth-efficient and to solve specific problems well. However, we have been seeing for a long time now across our industries the transition from slow serial connections to faster ethernet. We have been seeing the transition from modem communication to broadband between distant sites. Many of the benefits of existing protocols are being eaten away as they are shoehorned into internet-based environments and are needing to respond to new security challenges and the existence of more complex intermediary components such as firewalls and proxies. We see protocols such as OPC responding by adopting SOAP over HTTP as a foundation layer and then implementing a new SCADA protocol on top of this more complex stack.

I would like to make the case for a greater understanding of REST in the industrial communications world, a new vision of how industrial communications interacts with intranet environments, and to identify some of the areas where HTTP as the main REST protocol of today is not quite up to snuff for the needs of a modern control systems world.

Benjamin




Jargon in REST

Wed, 09 Feb 2011 11:43:00 +1000

Merriam-Webster defines jargon as the technical terminology or characteristic idiom of a special activity or group. In REST or REST-style SOA there are really two different levels where jargon appears:

Jargon (within a service inventory): Methods, patterns of client/server interaction, media types, and elements thereof that are only used by a small number of services and consumers.

Jargon (between service inventories): Methods, patterns of client/server interaction, media types, and elements thereof that are only used by a small number of service inventories.

Jargon has both positive and negative connotations. By speaking jargon between a service and its consumers, the service is able to offer specific information semantics that may be needed in particular contexts. The service may be able to offer more efficient or otherwise more effective interactions between itself and its consumers. These are positive features. In contrast there is the downside of jargon: It is no longer possible to reuse or dynamically recompose service consumers with other services over time. More development effort is required to deal with custom interactions and media types. Return on investment is reduced, and the cost of keeping the lights on is increased.

Agility is one property that can be both increased and reduced through the use of jargon. An agile project can quickly come along and build the features it needs without propagating these new features to the whole service inventory. In the short term this increases agility. However, the failure to reuse a more general vocabulary between services and consumers means that generic logic that would normally be available to support communication between services and consumers is necessarily missing. Over the long term this reduces the agility of the business in delivering new functionality. The REST uniform interface constraint is a specific guard against jargon.
It sets the benchmark high: all services must express their unique capabilities in terms of a uniform contract composed of methods, media types, and resource identifier syntax. Service contracts in REST are transformed into tuples of (resource identifier template, method, media types, and supporting documentation). Service consumers take specific steps to decouple themselves from knowledge of individual service contracts and instead increase their coupling to the uniform contract. However, a uniform contract that contains significant amounts of jargon defeats the uniform interface constraint.

At one level we could suggest that the world should look just like the HTML web, where everyone uses the same media types with the same low-level semantics of "something that can be rendered for a human to understand". I would suggest that a business IT environment demands a somewhat more subtle interpretation than that.

That the set of methods and interactions used in a service inventory should be standard and widely used across that service inventory is relatively easy to argue. Each such interaction describes a way of moving information around in the inventory, and there are really not that many ways that information needs to be able to move from one place to another. Once you have covered fetch, store, and destroy, you can combine these interactions with the business context embodied in a given URL to communicate most information that you would want to communicate.

The set of media types adds more of a challenge, especially in a highly automated environment. It is important for all services to exchange information in a format that preserves sufficiently precise machine-readable semantics for its recipients to use without guessing. There are far more necessary kinds of information in the world than there are necessary ways of moving information around, so we are always going to see a need for more media types t[...]
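The fetch/store/destroy observation can be made concrete with a small sketch. An in-memory dictionary stands in for an HTTP server here, and the resource identifiers are invented examples; the point is only that three uniform interactions, combined with the business context carried by each URL, cover a wide range of communication needs.

```python
# Sketch: fetch/store/destroy as uniform interactions, with the business
# context carried entirely by the resource identifier. The dictionary is an
# in-memory stand-in for HTTP GET/PUT/DELETE against a service inventory.

inventory = {}  # resource identifier -> current representation

def get(uri):          # fetch a representation
    return inventory.get(uri)

def put(uri, body):    # store a representation
    inventory[uri] = body

def delete(uri):       # destroy the resource's state
    inventory.pop(uri, None)

# The same three interactions serve any business context (URIs are invented):
put("/invoices/42/status", "paid")
put("/sensors/room-a/temperature", "21.5 Cel")
assert get("/invoices/42/status") == "paid"
delete("/invoices/42/status")
assert get("/invoices/42/status") is None
```

Note that nothing in `get`, `put`, or `delete` knows whether it is handling an invoice or a sensor reading; the "why" lives entirely in the identifier.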



B2B Applications for REST's Uniform Contract constraint

Wed, 12 Jan 2011 01:21:00 +1000

REST's uniform interface constraint (or uniform contract constraint) requires that service capabilities be expressed in a way that is "standard" or consistent across a given context, such as a service inventory. Instead of defining a service contract in terms of special-purpose methods and parameter lists only understood by that particular service, we want to build up a service contract that leverages methods and media types that are abstracted away from any specific business context. REST-compliant service contracts are defined as collections of lightweight, unique "resource" endpoints that express the service's unique capabilities through these uniform methods and media types.

To take a very simple example, consider how many places in your service inventory demand that a service consumer fetch or store a simple type such as an integer. Of course the business context of that interaction is critical to understanding what the request is about, but there is a portion of this interaction that can be abstracted away from a specific business context in order to loosen coupling and increase reuse.

Let's say that we had a technical contract that didn't specifically say "read the value of the temperature sensor in server room A", or "getServerRoomATemperature: Temperature", but instead was specific only to the type of interaction being performed and the kind of data being exchanged. Say: "read a temperature sensor value" or "GET: Temperature". What this would allow us to do is to have a collection of lightweight sensor services that we could read temperature from using the same uniform contract. The specific service we decided to send our requests to would provide the business context to determine exactly which sensor we intended to read from. Moreover, new sensors could be added over time and old ones retired without changing the uniform interface. After all, that particular business context has been abstracted out of the uniform contract.
This is very much how the REST uniform contract constraint works both in theory and in practice. We end up with a uniform contract composed of three individual elements: the syntax for "resource" or lightweight service endpoint identifiers, the set of methods or types of common interactions between services and their consumers, and the set of media types or schemas that are common types or information sets exchanged between services and their consumers.

By building up a uniform contract that focuses on the "what" of the interaction, free from the business context "why", we are free to reuse the interface in multiple different business contexts. This in turn allows us to reuse service consumers and middleware just as effectively as we reuse services, and to compose and recompose service compositions at runtime without modification to message processing logic and without the need for adaptor logic.

On the web we see the uniform contract constraint working clearly with various debugging and mashup tools, as well as in the browser itself. A browser is able to navigate from service to service during the course of a single user session, is able to discover and exploit these services at runtime, and is able to dynamically build and rebuild different service compositions as its user sees fit. The browser does not have to be rebuilt or redeployed when new services come along. The uniform interface's focus on what interaction needs to occur and on what kind of information needs to be transferred ensures that the services the browser visits along the way are able to be interacted with correctly, with the individual URLs providing all of the business context required by the browser and service alike.

When we talk about service-orientation and service-oriented architecture, we move into a world with a different set of optimisations than that of the [...]
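The browser-like reuse of consumers and middleware can be sketched as dispatch on media type alone. This is an assumed, simplified model: the handlers and the two media types are stand-ins, and a real consumer would also negotiate content over HTTP.

```python
# Sketch: a generic consumer (browser-like) that processes responses purely
# by their uniform media type, never by which service produced them. The
# handler set here is an invented, minimal illustration.

import json

def handle_json(body: str):
    return json.loads(body)

def handle_text(body: str):
    return body.strip()

HANDLERS = {
    "application/json": handle_json,
    "text/plain": handle_text,
}

def consume(media_type: str, body: str):
    """Process a response using only its uniform media type."""
    return HANDLERS[media_type](body)

# Responses from entirely different services flow through the same logic,
# so new services can appear without the consumer being rebuilt:
assert consume("application/json", '{"temperature": 21.5}') == {"temperature": 21.5}
assert consume("text/plain", "21.5 Cel\n") == "21.5 Cel"
```

This is the consumer-side payoff of the three-element contract: as long as a new service speaks an identifier syntax, method, and media type the consumer already knows, no adaptor logic is needed.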