Health Perspectives

notes on health informatics



Archie Cochrane on epidemiologists and NHS merit awards


From One Man's Medicine:

Beyond all this, I think that I also failed epidemiology in another way because I did not make enough fuss about a problem I saw looming ahead. It dated back to Nye Bevan’s bribe to get consultant physicians and surgeons into the National Health Service. He promised the senior physicians and surgeons that they would get merit awards which would increase their salaries well above that of the average consultant in private practice. The top merit award could double the salary. There were, however, at that time no epidemiologists in Harley Street, so when the scheme was inaugurated, epidemiologists were not included.

The Pneumoconiosis Research Unit had a ward with patients and some of the clinical physiologists were in charge of it. I knew these people would in time get merit awards (the whole system is very secret), while there was no chance of an epidemiologist getting one. I never complained on my own account as I have a private income and I thought the MRC paid me adequately, but I became increasingly worried about recruitment of epidemiologists in the future. There was an increasing interest in epidemiology amongst bright young men and women in the late 1950s and several came to see me. I felt it my duty to point out to them that though epidemiology would give them a fascinating and satisfactory life their top salary level would always be much less than that of their clinical colleagues. This problem became acute in the 1960s and almost killed the promising resurgence of British epidemiology which had started in the 1950s. I certainly raised the problem, but I did not shout loud enough. I do not blame Nye Bevan; his bribe was well worth while to help get the National Health Service started.

A day in hospital


We spent 24 hours in Royal United Hospital Bath over the weekend, dealing with a broken arm. Here's what we learnt:

  1. The NHS is an amazing idea. To be treasured. I explained to my son that if we'd been in the US we might have had to pay thousands of dollars for the investigations, overnight stay and surgical care he'd received, even if we'd been able to claim it back. And that if we weren't insured or well-off, who knows what sort of care we'd have received. A pox on the Tory modernisers who don't understand the law of unintended consequences, and a double pox on those who want to privatise the NHS. And their cheerleaders.
  2. Royal United Hospital Bath is a very good hospital. On a Sunday. At Easter. Checked in to A&E, dosed up on painkillers, x-rayed, first plaster cast within an hour. Nurse-led, doctors at hand. Lots of checklists. Thank you Nurse Conway. I challenge anyone, anywhere, under any payment system, to get a better service.
  3. Information systems in an NHS hospital are curious. The front desk found our home address in another part of the country. But that's all - no access to any existing patient record. Apart from the x-ray record, everything was paper-based.
  4. It was interesting to see the nurse fish out her iPhone (thank you Kirsty!) when she needed to do a calculation. 
  5. Most remarkable was the way of moving the patient record created in Bath to Hereford. This is what happened. Most of the notes that had accumulated over the last 24 hours were photocopied (10 pages) and put in an envelope. The first set of x-rays were burnt onto a CD and put in the envelope. But the second set of images, taken in theatre, weren't available, so we presume they won't be going to Hereford. A handwritten discharge note was added. The package was given to us to convey to the fracture clinic at Hereford, where his care will continue. Does this constitute a Patient Held Record? What about PACS? Couldn't we do with something like an NHS-wide IT programme? :-)

Parliament Hill Fields


Mike Taylor recently questioned the practice of paywalling and profiteering from selling the results of publicly funded research back to taxpayers in a Guardian comment piece. Central to the surge of criticism of Elsevier is that they take a free good and charge for it. What has figured less prominently is that Elsevier sometimes charges twice or even three times for the same free good.

The practice is known as double dipping. Publishers at least recognise the phrase and are at pains to deny that they do it. Unfortunately the paywall business model makes it almost certain that they do.

Let's take the NHS. Elsevier et al double dip the NHS by selling subscriptions to the same journals at all levels. Where they can, they sell subscriptions to electronic versions of journals to the NHS as a whole. They also sell electronic and paper subscriptions to departments and individual libraries across the NHS. They also sell subscriptions to universities, with which many NHS staff are associated. As a result it is quite feasible for an academic clinician to have had a subscription to a journal paid three or four times over.

What makes Elsevier a stand-out, though, is triple-dipping. In addition to charging the NHS multiple times for subscriptions, they require the NHS to pay $3,000-$4,000 per article to enable the NIHR to implement its policy of open access to the findings of NIHR-funded research - research that has usually been conducted on volunteer NHS patients, in taxpayer-funded facilities, with grants and other funding from NHS budgets.

Elsevier argue that the NHS is paying for different things - and they deny that paying for subscriptions and paying to archive is double-dipping. Yet Elsevier's costs only occur once, and ultimately all the different payment streams are off-setting that one parcel of costs. So double, triple or even higher orders of dipping are the almost inevitable consequence of their approach. [...]



No inherent objections to the use of health records for medical research. There are ways to anonymise, protect AND link records, as the SAIL project has shown. But a few observations come to mind at this point:

  1. Research on large sets of anonymised NHS records has been taking place in the UK at least since the 1960s (Lester Gill PDF) 
  2. The critical question in relation to medical records is who owns them. This development answers the question in a roundabout way - the government has primary ownership, not clinicians or patients. 
  3. What research will be done? Research using large datasets has been talked about for many years, and has been conducted, especially in the US. Datasets are much easier to assemble than prospective sets of consented patients. But the nature of the dataset methodology makes it unclear whether it produces genuinely new knowledge.
  4. Pharmaceutical companies have worked with the NHS on anonymised datasets under the heading of practice audit. The general aim of these studies has been to increase prescribing rates of existing drugs.
  5. The key area of development is personalised medicine. But this requires the collection of genomic information - a complex process requiring explicit patient consent, which is being explored in the Stratified Medicine Programme.
  6. The political aspects of the decision being made here have some similarities to the previous Government's National Programme for IT.

Types of cloud - app internet versus drive-in-the-sky internet


In February 1996 Wired magazine published an interview with Steve Jobs in which he predicted an Internet of digital agents rather than websites. The ideas in that interview pre-dated Tim Berners-Lee and colleagues' better-known semantic web paper by five years.

An internet of digital agents didn't look likely at the time. But 15 years on it is beginning to take shape as one vision of the computing cloud. Apple's iCloud shifts the centre of personal computing from the device (PC, Mac, smartphone, tablet, whatever) to the cloud. As Steve Jobs said "We're demoting the PC to being just another device".

There is another vision of the cloud, as a hard drive in the sky (dropbox for example) which lets you store files remotely. It's a great model. But it might not be around for too long. The iCloud model does away with file-syncing altogether. If anything it is data-syncing. And it is transparent - you don't have to do anything to have a photo on one device appear on another.

The first implementation of the cloud was file-oriented. The second implementation, which will benefit Apple at Microsoft's expense, is data oriented. And a cloud of data may be the beginning of a new app-oriented, intelligent-agent Web.

And what the Wired article shows most is the benefit of having a good clear vision and sticking to it. Apple has done that, but Microsoft hasn't. The huge cash mountain at Microsoft has disguised an underlying lack of vision for a long time; even (maybe especially) the purchase of Skype is a sign of playing catch-up.

'Good' organizational reasons for 'bad' clinical records


As the inverted commas imply, good and bad are the words analysed in this foundational text for health informatics by Harold Garfinkel. Unfortunately it isn't available online. At the risk of censure from fair-dealing zealots, here is a central part of the paper:
When any case folder [medical record] was read as an actuarial record its contents fell so far short of accuracy as to leave us puzzled as to why "poor records" as poor as these should nevertheless be so assiduously kept. On the other hand, when folder documents were regarded as unformulated terms of a potential therapeutic contract, ie as documents assembled in the folder in open anticipation of some occasion when the terms of a therapeutic contract might have to be formulated from them, the assiduousness with which folders were kept, even though their contents were extremely uneven in quantity and quality, began to "make sense".



Leopold Bloom: "What is weight really when you say the weight? Thirtytwo feet per second per second. Law of falling bodies: per second per second. They all fall to the ground. The earth. It's the force of gravity of the earth is the weight."

Frank Wilczek: "Matter is not what it appears to be. Its most obvious property - variously called resistance to motion, inertia, or mass - can be understood more deeply in completely different terms. The mass of ordinary matter is the embodied energy of more basic building blocks, themselves lacking mass."

Bloom, and his creator, would have liked that, especially the phrase "themselves lacking mass".

Is ownership important?


There was a debate on LinkedIn recently about personal health records, and about what 'patient control' meant. A fairly prevalent view was that the question of who 'owned' the record was irrelevant. Yet it seems to me that ownership is easily the most important aspect of a health record.

I realise I'm in a minority on this issue. I was therefore interested to read the following from Sir Ian Kennedy yesterday:
I begin with the culture of professionals. The professional owns knowledge. From this, as I suggested in the Reith Lectures, flows power. And, power here is used in a neutral, descriptive sense. Speaking in general terms, and conscious that, being general, we all know of exceptions, the patient is someone in the professional's power whom the professional seeks to care for. The professional grants the patient the benefit of his/her knowledge, but does so on terms. The fundamental term is that the professional remains in charge of the exchange. Yes, the patient may be granted the autonomy to make a choice, but it is an autonomy granted by the professional, rather than owned by the patient.
The striking feature of this analysis is that the relationship between patient and professional is not perceived as one of service. The professional is simply not at the patient's service. Thus, analogies with relationships more obviously characterised by the notion of service are out of place. (And this is equally true of the two other ancient professions, the law and the church.) Introducing the rhetoric of the market, in the form of there being a commercial relationship between the patient and the professional, does not change this dynamic. The fact that the patient is paying the bill has to be managed by the professional. Manners may change a little, but little else. Power is not relocated. Indeed, the very word "service" is too close to "servile" to many a professional's ear. To repeat, the professional in healthcare is not like, does not see her/himself as on a par with, a restaurateur, an electrician, a hotelier, or a shopkeeper. To all of these "the customer is always right". To the professional in healthcare, the word customer sticks in the throat, given its symbolic quality of being the more important player in the exchange, and is not right. The patient is there to be guided, advised, led, but not in charge. And, none of this analysis is intended to deny for a moment that the vast majority of professionals are immensely caring and supportive people, committed to doing their best for their patients. What I am seeking to understand is some kind of cultural characteristics which constitute the DNA of professionals.
This consideration does not suit existing suppliers, who are struggling to get clinicians interested in electronic records. But in the longer term a more patient-owned approach to information will be part of new forms of professionalism.

VistA going Open Source


Quite an important moment, though it should have happened years ago. From the RFC/RFP:

Specifically, VA believes that a structured, deliberate, and predictable migration from our custom and proprietary EHR software to an openly architected, modular, and standards-based platform will achieve five crucial objectives:

  • It will unleash EHR innovation inside and outside VA. 
  • It will release VA’s captive dependency on any particular component or service and give our clinicians access to the best available tools and solutions. 
  • It will reduce the costs and risks of reliable implementation (and integration) of new functional modules that improve VistA’s capabilities. 
  • It will measurably improve health outcomes for our nation’s Veterans. 
  • It will enable other providers in the public and private health care system to benefit from, contribute to, and interoperate with this national asset.

Semantics and semantic interoperability of health records


I share Robert Dolin and Liora Alschuler's view that evaluation and measurement in health informatics are vital, and that frameworks for evaluation should be informed by examining the underlying concepts to be measured, in this case semantic interoperability (JAMIA 2011;18:99-103).

Their analysis raises a few questions, however.

Firstly, the most basic type of semantic interoperability is that which enables one computer to correctly interpret a character string sent from another computer: "John Smith", for example, interpreted as the name of a relative rather than the patient himself, or 02-19-1959 as a date of birth rather than a date of death. This is a modest but valuable type of semantic interoperability. It has been available for many decades in computer science, for example through semantic modelling in relational database management systems. As Dolin and Alschuler note, health informatics has yet to provide a truly reliable method for the semantic interoperability of structured data. They cite the example of the differential diagnosis of pneumonia. In fact, as William Hogan has shown, HL7 can struggle to reliably model even basic information such as marital status and gender. Ambiguity and complexity are hard-wired into the construction of HL7; these are not desirable features for a system intended to exchange meaningful information transparently.
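
To make that point concrete, here is a minimal sketch (not drawn from the Dolin and Alschuler paper; the field names are hypothetical) of how a typed, structured representation carries the interpretation that a bare string lacks:

    # Minimal sketch: hypothetical field names, illustrating that structure,
    # not the string itself, tells the receiver what a value means.
    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    # A bare payload: the receiving system cannot tell whose name this is,
    # or whether the date is a birth or a death.
    ambiguous_payload = ("John Smith", "02-19-1959")

    @dataclass
    class PatientRecord:
        patient_name: str                 # the patient himself
        next_of_kin_name: Optional[str]   # a relative, clearly distinguished
        date_of_birth: Optional[date]
        date_of_death: Optional[date]

    # The same values, now carrying their intended interpretation.
    structured = PatientRecord(
        patient_name="John Smith",
        next_of_kin_name=None,
        date_of_birth=date(1959, 2, 19),
        date_of_death=None,
    )
    print(structured.patient_name, structured.date_of_birth)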

Secondly, the meaning of a piece of data might not be self-contained but may depend on other data items. For example, a data point ("prescribed penicillin") could be interpreted as vital, or unnecessary, or harmful, depending on other data about the patient. Measures of semantic interoperability need to take into account the potential interdependency of data items in a record. Dolin and Alschuler discuss context sensitivity, but as far as I am aware no current HIT standard recognises that the meaning of a piece of information may depend on the meaning of other pieces of information.
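
As a rough illustration of that interdependency (hypothetical field names again, not drawn from any real standard), the same data point is classified quite differently depending on what else the record contains:

    # Rough sketch: the clinical significance of one data item ("prescribed
    # penicillin") depends on other items in the same record.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Record:
        prescriptions: List[str] = field(default_factory=list)
        allergies: List[str] = field(default_factory=list)
        active_infection: bool = False

    def interpret_penicillin_order(record: Record) -> str:
        if "penicillin" not in record.prescriptions:
            return "not applicable"
        if "penicillin" in record.allergies:
            return "harmful"       # contraindicated by another data item
        if record.active_infection:
            return "vital"         # supported by another data item
        return "unnecessary"

    print(interpret_penicillin_order(
        Record(prescriptions=["penicillin"], allergies=["penicillin"])))  # -> harmful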

Thirdly, it is not clear how much of the meaning in a record is locked in unstructured data. This matters because even if all the structured data in a record could be exchanged accurately, a portion of the meaning of a record cannot be exchanged, at least not in machine-oriented form. It could be argued that it is becoming possible to parse and auto-code free text. But this is technically problematic. Doing so also highlights the largest challenge of all, which is that codification is always a translation, and translation risks losing, changing or adding to the semantics in the original text of a medical record.

In conclusion, Dolin and Alschuler's paper is very welcome, but it does not sufficiently challenge the assumptions about meaning made by messaging and coding standards for EMRs. These need to be examined, and a more sophisticated model of meaning within the EMR developed, before a framework for evaluating semantic interoperability between EMRs can be established.

Copyright and innovation


A couple of things struck me when reading the submission to the Hargreaves review from the Copyright Licensing Agency, prepared for them by PricewaterhouseCoopers (PWC).

For a report whose commissioners could only have wanted support for their position, the message is surprisingly lukewarm. Take Figure 9 for example, which compares global competitiveness with strength of IP protection. Both measures are problematic, but presumably PWC were charged with finding the most supportive evidence, so this must be it.

The authors optimistically say that "there is a positive correlation between perception of the IP framework and overall competitiveness". Actually, there isn't. Had PWC undertaken an elementary statistical test (Spearman's rank correlation), they would have found the degree of correlation to be 0.395, which is statistically non-significant for 18 data pairs. A more accurate summary would be a statement that there is no correlation between strength of IP protection and a country's competitiveness.
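
PWC's underlying data pairs are not reproduced here, so the following is only a check of the significance claim: a short sketch applying the standard t-approximation for Spearman's rho to a coefficient of 0.395 with 18 pairs.

    # Sketch: is a Spearman rank correlation of 0.395 significant with n = 18?
    # Uses the standard t-approximation; the actual data pairs from PWC's
    # Figure 9 are not reproduced here.
    from math import sqrt
    from scipy.stats import t

    rho, n = 0.395, 18
    t_stat = rho * sqrt((n - 2) / (1 - rho**2))    # t-approximation for rho
    p_value = 2 * t.sf(t_stat, df=n - 2)           # two-tailed p-value
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # roughly t = 1.72, p = 0.10
    # p > 0.05: the reported correlation is not statistically significant.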

A second oddity is the transaction chain for higher education licensing with CMOs. Not only does it focus on only part of the transaction chain, leaving out the part in which higher education subsidises the chain, it also assumes that academics are busy copying sections of books by commercial authors, rather than copying papers in academic journals. And, not at all surprisingly, the analysis ignores the impact of Open Access material on transaction costs.

Your text is out there somewhere (but probably not in an IR)


Needing a copy of this, and not wanting to pay $40 for something I've already paid for as a taxpayer, I found this Institutional Repository record, which needless to say didn't link to the full text. Eventually I found a copy on the University of Plymouth extranet.

University Institutional Repositories continue to underperform, often seeming like glorified publication lists.






Published in December 2010 by a White House scientific advisory group (PCAST) which includes Craig Mundie and Eric Schmidt. Main recommendation:
It is crucial that the Federal Government facilitate the nationwide adoption of a universal exchange language for healthcare information and a digital infrastructure for locating patient records while strictly ensuring patient privacy. More specifically, PCAST recommends that the Office of the National Coordinator for Health Information Technology and the Centers for Medicare and Medicaid Services develop guidelines to spur adoption of such a language and to facilitate a transition from traditional electronic health records to the use of healthcare data tagged with privacy and security specifications.
Summary of report:

  1. Great potential for health IT but held back because 80% of US docs have no EMR system.
  2. Even where EMRs do exist they are rudimentary and lack interoperability.
  3. Areas where IT has had a transformational effect have employed simple standards and developed new products to knit together fragmented systems into a unified infrastructure. [Examples? Google, which knits the web into a unified search engine. Not iTunes though, which is anything but disparate]
  4. Health IT is a long way off this goal. Current EMRs are highly proprietary; inward looking; generate legitimate privacy concerns; and are oriented towards administration.
  5. Meaningful use rules have been helpful but establishing a framework for information exchange is vital to accelerate progress. 
  6. Standardized records and SOA aren't the solution
  7. Health IT should use the disruptive effects of the Internet to move forward, and not wait for EMRs to evolve.
  8. A universal exchange language and the associated infrastructure are priorities. Both need to be funded centrally.
  9. Tagged data elements, accompanied by metadata, are key.
  10. The indexing and retrieval of metadata-tagged data, across large numbers of geographically diverse locations, is an established, highly developed, technology—the basis of web search engines, for example.
  11. The approach that we describe requires that there be a common infrastructure for locating and assembling individual elements of a patient’s records, via secure “data element access services” (DEAS). Importantly, this approach does not require any national database of healthcare records; the records themselves can remain in their original locations. Distinct DEAS could be operated by care delivery networks, by states or voluntary groupings of states, with possibly a national DEAS for use by Medicare providers. All DEAS will be interoperable and intercommunicating, so that a single authorized query can locate a patient’s records across multiple DEAS. (A rough sketch of what a tagged data element and a DEAS-style lookup might look like follows this list.)
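
The report does not prescribe a concrete format for tagged data elements or the DEAS, so the following is only a rough sketch under assumed names (DataElement, deas_index, locate): each element carries privacy tags and its home location, and a DEAS-style index answers "where are this patient's elements, and which may this requester see?" without holding the records themselves.

    # Rough sketch only: PCAST does not specify a format. The names below
    # (DataElement, deas_index, locate) are hypothetical.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class DataElement:
        patient_id: str          # pseudonymous identifier
        kind: str                # e.g. "lab_result", "medication"
        value: str
        privacy_tags: List[str]  # privacy/security specifications travel with the data
        location: str            # the record itself stays where it was created

    # A DEAS-like index holds pointers and metadata, not a national database.
    deas_index: Dict[str, List[DataElement]] = {
        "patient-123": [
            DataElement("patient-123", "medication", "penicillin",
                        ["treating-clinician-only"], "hospital-a"),
            DataElement("patient-123", "lab_result", "HbA1c 48 mmol/mol",
                        ["research-ok-if-deidentified"], "gp-practice-b"),
        ],
    }

    def locate(patient_id: str, requester_role: str) -> List[DataElement]:
        """Return only the elements whose privacy tags permit this requester."""
        permitted = {
            "treating-clinician": {"treating-clinician-only",
                                   "research-ok-if-deidentified"},
            "researcher": {"research-ok-if-deidentified"},
        }[requester_role]
        return [e for e in deas_index.get(patient_id, [])
                if set(e.privacy_tags) & permitted]

    print([e.kind for e in locate("patient-123", "researcher")])  # -> ['lab_result']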
Some other points in the review:
  • More than 3 million Kaiser patients are registered to access their health record online. Over 100,000 access the system on a given day, reducing office visits by 26%
  • Google and Microsoft, the two largest vendors of web-based PHRs, recently agreed on mechanisms to enable the free exchange of information between their respective PHR systems, and others may follow.
  • US physicians who adopt electronic records by 2014 can qualify for Medicare bonus payments of up to $44,000

The case for terminologies in healthcare


One of the foundational axioms in health informatics is the need for codes, more formally clinical terminologies, to represent conditions, diseases and symptoms. For example, this paper, or this statement from IHTSDO:

The IHTSDO seeks to improve the health of humankind by fostering the development and use of suitable clinical terminologies, notably SNOMED CT, in order to support safe, accurate and effective exchange of clinical and related health information. The focus is on enabling the implementation of semantically accurate health records that are interoperable.

I want to show that coding creates the illusion of semantic interoperability, and that far from improving it, coding degrades the quality of semantic exchange.

What do codes look like? Heart failure, for example, can be coded as D006333 (MeSH). It can also be coded as I50 (ICD-10) or G58 (Read Code v2, widely used in primary care in the NHS). It can be coded 10019280 in the MedDRA ontology.

You might ask "Why so many codes for the same thing?", and a plausible answer might be "it doesn't really matter, because the codes all point to the same concept, and this way everyone can use the coding system they are familiar with, and share information when they need to".

And this would be true if heart failure was a single entity. But heart failure is a syndrome. It embodies a collection of concepts - several causes, a range of underlying pathology and a variety of symptoms. It is not a single thing, and the coding schemes reflect this.

Even a relatively simple scheme such as ICD-10 includes a range of possible codes to cover different types and causes of heart failure, including I11.0 for heart failure caused by hypertension, I50.0 for congestive heart failure, and I50.1 for left ventricular failure. Do both I50.0 and I50.1 map onto D006333? Or do they map to sub-codes of D006333? Read Codes contain separate codes for acute and chronic congestive heart failure. How do these map to I50.0? Several coding schemes have separate codes for the grades of severity defined by the New York Heart Association (NYHA) scale for heart failure. Read Codes 662f - 662i cover these, but they are not represented in all coding schemes (MedDRA lacks codes for NYHA grades, for example). It is recognised that some of the distinctions encapsulated in codes - for example between systolic and diastolic heart failure - are "somewhat arbitrary" (see this ref).

In principle it would be possible to map between each of the codes for heart failure in the major coding schemes. But even if it were done, the mapping across schemes would not be 1:1, resulting in a loss of information and the inclusion of code-generated uncertainty. As a result, machine-based translation would only be reliable if the terms in each coding scheme were kept high level. But that would mean that the high-level code for heart failure in one scheme would be translated to the high-level code for heart failure in another scheme. It would be simpler to use the term 'heart failure' in this case.

There are two more significant issues with codes. First issue - was the original diagnosis correct? Setting a diagnosis in a code removes the ambiguity and uncertainty that often surrounds diagnoses, which then travels unencumbered across systems. Second issue - has the diagnosis been coded correctly? Research shows a good deal of variation within and between clinics in the way diagnoses are made and coding is applied. In part this is because of operator error. But it also arises from the often multiple ways of coding a diagnosis. Choice of codes is always a local decision, based on rules which do not travel with the code.

Clinicians are aware of the problems of[...]
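
To make the cross-scheme mapping problem concrete, here is a toy sketch with made-up codes in two hypothetical schemes (not a real ICD-10/Read cross-map), showing how a non-1:1 mapping loses information on a round trip:

    # Toy illustration only: hypothetical codes in two made-up schemes, showing
    # how a non-1:1 cross-map loses information on a round trip.
    scheme_a_to_b = {
        "A-HF-ACUTE":   "B-HF-CONGESTIVE",  # acute congestive heart failure
        "A-HF-CHRONIC": "B-HF-CONGESTIVE",  # chronic congestive heart failure
        "A-HF-LVF":     "B-HF-LVF",         # left ventricular failure
    }
    # Reverse map: each scheme-B code can point back to only one scheme-A code.
    scheme_b_to_a = {"B-HF-CONGESTIVE": "A-HF-ACUTE", "B-HF-LVF": "A-HF-LVF"}

    original = "A-HF-CHRONIC"
    round_trip = scheme_b_to_a[scheme_a_to_b[original]]
    print(original, "->", scheme_a_to_b[original], "->", round_trip)
    # A-HF-CHRONIC -> B-HF-CONGESTIVE -> A-HF-ACUTE: chronicity is lost and,
    # worse, replaced by a different assertion - code-generated uncertainty.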

What is value in healthcare?


Professor Michael Porter has written passionately about the need to reconsider what constitutes value in healthcare for many years. His latest article in the New England Journal of Medicine summarises his views and highlights the health informatics issues associated with not focusing on value.

What is surprising is that his perspective should still be seen as radical and challenging in an open market such as the US. In other areas of life the idea that "value should always be defined around the customer" would hardly deserve mention, yet in healthcare it is radical and challenging.

The idea works well in the UK. But there is one thing to add: in the UK system some of the value of health expenditure is manifest at the population level, and there is still (just) some value in the notion of social solidarity which the NHS represents - some of the value I gain from NHS expenditure is knowing that people I don't know may gain access to healthcare.

Cloud watching


There is still a chance for Microsoft to get ahead in the cloud. But with each passing day that they don't, it gets less likely that they will. Just this week, more innovation, all driven by cloud computing - Chrome Web Store, Salesforce database, Google Cloud Print, new Amazon Web Services. This level of innovation is exceptional, and it makes it very hard for Microsoft to get ahead.

Sobering realities about Health Information Technology


Just noting this important paper in JAMIA (J Am Med Inform Assoc 2010;17:617-623. doi:10.1136/jamia.2010.005637). The paper identifies critical issues for Health Information Technology (HIT) in all healthcare systems:

HIT has been slow to embrace the concepts and practices of safety-critical computing. There doesn't need to be FDA-level scrutiny, but there has to be recognition that HIT is analogous in some ways to a new drug or device. I would add that the current approach to certifying the fitness for use of HIT in the NHS appears to be User Acceptance Testing, which is wholly inadequate. The current consultation document Information Revolution says nothing about safety-critical computing. The final version could of course, but it would require a more centrally directive approach than seems to be envisaged at the moment, a consequence of NPfIT's failure.

The logic that since 'to err is human', computerisation will reduce error is too simplistic. Errors are generally produced by the interactions between multiple systems/people. So it's wrong to assume that computerising manual processes necessarily reduces error. Computerisation may introduce error.

Measurements of usage are not in themselves an adequate measure of the success of an implementation. I think a lot of people would settle for usage as a measure of success. The point made here is that in a complex system any single measure doesn't tell the whole story.

The messy desk fallacy - it is tempting to think of clinical processes as somewhat chaotic and therefore ripe for rationalisation. Some truth in this of course, but only some - healthcare is complex and non-linear. I would add that some of the key techniques of computerisation, such as Business Process Modelling, look pretty inadequate in relation to complex systems.

HIT is too focused on delivering benefits of an administrative nature, and not enough on delivering direct benefits to clinicians and patients. With a few exceptions, this appears to be true. Think for example about the huge efforts that go into standardisation and terminologies, which are largely irrelevant to clinicians and patients, compared to functionality and user experience. The basis for this error is the belief that a central aim of healthcare is the efficient production of good records. That it is not was demonstrated by Garfinkel in the 1950s. But major areas of HIT activity, for example HL7, appear to have overlooked this.

The field of dreams fallacy - if we provide new IT systems they will attract users.

'Sit-stay'. The fallacy that computers are smarter than humans - a very deep-seated error, almost one of the founding stones of HIT: medicine is a cognitive/information-processing profession (make a diagnosis then apply the right treatment), so let's see if computers can do better. The NHS PRODIGY programme was an example of this erroneous way of thinking.

HIT is often designed as if a single clinician were dealing with a case, or at most a team of clinicians sharing a common perspective on a case. Again, this may be too simplistic.

"Paper forms are not simple data repositories that, once computerized, could be eliminated. Rather such ‘scraps’ of paper are sophisticated cognitive artifacts that support memory, forecasting and planning, communication, coordination, and education"

"Teams of well-intentioned clinicians and software engineers may believe that understanding of clinical processes coupled with clever programming can solve the challenges facing healthcare. But such teams typically will not have the requisite breadth and dep[...]

Does the NHS have an IT infrastructure?


The NHS spent something like £1bn plus per year on IT even before the advent of NPfIT. But for all that expenditure it is questionable whether it has an IT infrastructure in the sense described by Susan Leigh Star and Karen Ruhleder (reference here, criteria set out below). Why is this important? Because until there is an infrastructure the NHS won't be in a position to benefit from IT, and expenditure will be fragmented and of uncertain purpose.

Infrastructure includes the following, and I've scored the NHS on each:

  • Embedded (3/10 - computers and computing seem to be anathema to many NHS staff)
  • Transparent, ie does not need to be reinvented for each new task (2/10 - much of the NHS computing estate consists of point solutions which need to be significantly modified as each new need arises, including BI systems)
  • Reach or scope beyond local practice (2/10 - this is an area where NPfIT has signally failed to make a case or implement change)
  • Learned as part of membership (1/10 - some steps are being taken to embed IT in professional life but with little impact yet)
  • Links with conventions of practice (2/10 - a few honourable exceptions in primary care. Maybe PACS is on the way to becoming infrastructure)
  • Embodiment of standards (2/10 - the keyword here being embodiment. And no, the use of TCP/IP doesn't count)
  • Built on an installed base (4/10 - once upon a time there was an installed base, for example regional computing centres. Nowadays there are a few elements of an installed base, such as N3, NHSmail, NHS Choices and NHS Evidence)
  • Becomes visible upon breakdown (Comments about the breakdown of IT systems having no impact on patient care suggest 1/10 for this criterion, but in some situations a failure of an IT system would be significant, so 6/10)

A reasonable conclusion is that, with a few exceptions, despite huge expenditure the NHS lacks an IT infrastructure.

ux


bad user experience after logging in to

NHS 2.0


Derek Meyer has produced a timely critique of the NHS Summary Care Record and HealthSpace. The report argues that SCR and HealthSpace in their present format are doomed, but also that alternatives such as Google Health and HealthVault are unlikely to succeed either. The success of Web 2.0 points the way towards an NHS 2.0 approach, as outlined below. I think it's worth discussing.

Flowdock looks interesting


Potentially useful...

A spin-off of the Finnish software development company Nodeta, Flowdock aspires to help developers and others sift out actionable bits of knowledge from ongoing conversations and make them retrievable. Their team messenger service allows separation and tagging of conversational elements.

'In Flowdock, the epiphany comes when you tag a chat message for the first time,' Nodeta and Flowdock's CTO Otto Hilska wrote us. 'You realize how you just took a piece of conversation and turned it into a nugget of knowledge. Somebody talked about a bug, and you turned it into a bug report. Or pasted a snippet of code, and you categorized and organized it. The real validation for the concept comes when you are looking for some other snippet of code, a link to a partner, an eBook or something else and come to think "I wonder if it's tagged in Flowdock". Sure enough it will be.'


Designed for groups, Flowdock attempts to address a new kind of information overload, the one that intensified when social media tools began to be adopted by exponentially more people. The theory is that by tagging bits of the conversation, they are made discrete and retrievable based on a folksonomy.
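
Flowdock's internals are not described in detail here, so this is only a rough sketch of the underlying idea (the names post, find and tag_index are hypothetical, not Flowdock's API): tagging a chat message files it under each tag, making it retrievable later as a discrete nugget of knowledge.

    # Rough sketch of the idea only, not Flowdock's actual API or data model:
    # tagging a chat message makes it retrievable from a tag index.
    from collections import defaultdict
    from typing import Dict, List, Sequence

    messages: List[dict] = []
    tag_index: Dict[str, List[int]] = defaultdict(list)  # tag -> message ids

    def post(text: str, tags: Sequence[str] = ()) -> int:
        """Store a chat message; any tags make it retrievable later."""
        msg_id = len(messages)
        messages.append({"id": msg_id, "text": text, "tags": list(tags)})
        for tag in tags:
            tag_index[tag].append(msg_id)
        return msg_id

    def find(tag: str) -> List[str]:
        """Retrieve the text of every message filed under a tag."""
        return [messages[i]["text"] for i in tag_index.get(tag, [])]

    post("login breaks when the password contains a '+'", tags=["bug"])
    post("lunch?")                                    # untagged chatter
    print(find("bug"))  # -> ["login breaks when the password contains a '+'"]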
