Having been 'Down Under' in Sydney addressing the Health Informatics Society of Australia on the need to slow down their national health IT program - and on the need to think critically about HIT seller public relations exaggerations and hubris - and being very busy, I missed this quite stunning story of a major health IT outage.
Just a typical "glitch":
Some lessons from a major outage
Posted on July 31, 2012
By Tony Collins
Last week Cerner had a major outage across the US. Its international customers might also have been affected.
InformationWeek Healthcare reported that Cerner’s remote hosting service went down for about six hours on Monday, 23 July. It hit “hospital and physician practice clients all over the country”. InformationWeek said the unusual outage “reportedly took down the vendor’s entire network” and raised “new questions about the reliability of cloud-based hosting services”.
Cerner spokesperson Kelli Christman told InformationWeek,
“Cerner’s remote-hosted clients experienced unscheduled downtime this week. Our clients all have downtime procedures in place to ensure patient safety. [Meaning, for the most part, blank paper - ed.] The issue has been resolved and clients are back up and running. A human error caused the outage. [I don't think they mean human error as in poor disaster recovery and business continuity engineering - ed.] As a result, we are reviewing our training protocol and documented work instructions for any improvements that can be made.”
Christman did not respond to a question about how many Cerner clients were affected. HIStalk, a popular health IT blog, reported that hospital staff resorted to paper [if it is true that paper was adequate during an unplanned workflow disruption of major proportions, then why do we need to spend billions on health IT, one might ask? - ed.], but it is unclear whether they would have had access to the most recent information on patients.
One Tweet by @UhVeeNesh said “Thank you Cerner for being down all day. Just how I like to start my week…with the computer system crashing for all of NorCal [Northern California].”
Tony Collins is a commentator for ComputerWorldUK.com. He has quoted me before, as I noted in my May 2011 post "Key lesson from the NPfIT - The Tony Collins Blog."
This incident brings to life longstanding concerns about hospitals outsourcing their crucial functions to IT companies.
Quite simply, I think such outsourcing is insane, at least for the foreseeable future, as this example shows.
It also brings to mind concerns that health IT, as an unregulated technology, creates dangers in hospitals whose internal disaster-recovery and business-continuity provisions amount to little more than fresh sheets of blank paper. Such capabilities would likely be mandatory if health IT were meaningfully regulated.
The Joint Commission, for example, likely issued its stamp of approval for the affected hospitals - hospitals that had outsourced their crucial medical records functions to an outside party that sometimes goes mute. If someone was injured or died as a result of this outage, the supposed advantages would be of little comfort to them or their families.
There's this in the article:
... “Issue appears to have something to do with DNS entries being deleted across RHO network and possible Active Directory corruption. Outage was across all North America clients as well as some international clients.”
Of course, patient safety was not compromised.
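As a technical aside: a failure of that sort - critical hostnames vanishing from DNS across a remote-hosting network - is at least detectable from the client hospital's side with a trivial probe. The sketch below (in Python, using hypothetical hostnames as stand-ins for whatever hosted endpoints a site actually depends on; this is illustrative only, not Cerner's tooling or anything the affected hospitals ran) simply checks that each critical name still resolves and still accepts connections, so downtime procedures can be triggered promptly rather than discovered at the bedside.

```python
import socket
import sys

# Hypothetical hosted endpoints - placeholders, not real Cerner/RHO hostnames.
CRITICAL_HOSTS = [
    ("ehr.example-remote-host.net", 443),   # hosted EHR front end (hypothetical)
    ("lab.example-remote-host.net", 443),   # hosted ancillary system (hypothetical)
]

def check_host(hostname, port, timeout=5.0):
    """Return a status string: DNS failure, connect failure, or OK."""
    try:
        addrs = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    except socket.gaierror as exc:
        # From the client side, "DNS entries being deleted" shows up as exactly this.
        return "DNS FAILURE for %s: %s" % (hostname, exc)
    ip = addrs[0][4][0]
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return "OK: %s resolves to %s and accepts connections" % (hostname, ip)
    except OSError as exc:
        return "CONNECT FAILURE for %s (%s): %s" % (hostname, ip, exc)

if __name__ == "__main__":
    results = [check_host(h, p) for h, p in CRITICAL_HOSTS]
    for line in results:
        print(line)
    # Non-zero exit lets cron or a monitoring system trigger downtime procedures.
    sys.exit(0 if all(r.startswith("OK") for r in results) else 1)
```

Run from a scheduler every few minutes, something this simple would have told an affected hospital within minutes that the hosted system was unreachable, independent of any notice from the vendor.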
Finally:
Imagine being a patient, perhaps with a complex history, in extremis at the time of this outage.
I, for one, do not want my own medical care, or that of my relatives and friends, subject to cybernetic recordkeeping unreliability and incompetence like this, and the risk it creates.
-- SS
Aug. 8, 2012 addendum:
The Los Angeles Times covered this outage in a story aptly entitled "Patient data outage exposes risks of electronic medical records."
They write:
Dozens of hospitals across the country lost access to crucial electronic medical records for about five hours during a major computer outage last week, raising fresh concerns about whether poorly designed technology can compromise patient care.
My only comment is that the answer to this question is rather axiomatic.
They also quote Jacob Reider, acting chief medical officer at the federal Office of the National Coordinator for Health Information Technology, who said:
"These types of outages are quite rare and there's no way to completely eliminate human error"
This is precisely the type of political spin and misdirection I cautioned the Australian health authorities to evaluate critically.
Paper (barring a mass outbreak of disappearing ink) and locally hosted clinical IT do not go blank en masse across multiple states and countries for any length of time, raising risk across multiple hospitals greatly, acutely and simultaneously. (Locally hosted IT outages cause only "local" mayhem; see my further thoughts on this issue here.)
-- SS
1 comment:
I have been doing research on electronic medical records software, and I have often wondered what would happen if the system went down. I thought it would be a mess, and from your article I gather it was, to an extent. I guess this is a lesson learned about what you can do if it happens again. Thanks for sharing!