Tag Archives: finger flub

8-year-old orphaned by a fat-finger keystroke error

Daragh O Brien has written and presented in the past for the IAIDQ on the topic of how the legal system and information quality management often look at the same issues from a different perspective, ultimately to identify how to address the issues of the cost and risk of poor quality.

This was brought home very starkly this morning in a case from the UK High Court which has opened the possibility of six figure damages being awarded to an 8 year old boy who was orphaned by a data quality error.

A single key stroke error on a computer cost a mother her life from breast cancer and left her eight-year-old son an orphan, the High Court has heard.

Two urgent letters informing the single mother of hospital appointments were sent to the wrong address – because the number of her home was typed as ’16’, instead of ‘1b’.

Read more: http://www.dailymail.co.uk/news/article-1366056/Mistyped-address-leaves-mother-dead-cancer-son-8-orphan.html#ixzz1GfRPOOHJ

In a tragic series of events, a young mother discovered a lump on her breast. She was treated in hospital and given the all clear, but remained concerned. Her GP arranged further tests for her, but a simple mis-keying of her address meant she never received her appointment letters. Her cancer went untreated for a further 12 months, and by the time she was diagnosed her only treatment option was palliative care. Had she been treated in time, the Court heard, she would have had a 92% chance of surviving another 10 years.

Her doctor admitted liability arising from the surgery’s failure to follow up with the woman on her tests, which might have uncovered that she hadn’t received the letters.

The Court dismissed the defence’s argument that the woman should have followed up herself, on the grounds that, while no one would ever know what had been in her mind, she had already been given an “all clear” and was likely either getting on with her life or afraid to return to the doctor.

A key lesson here is that capturing accurate information at the beginning of a process is critical. Equally critical, where the data is potentially of life-and-death importance, is for organisations to follow up when a process appears to have stalled (for example, when expected test results are not received back from a hospital).
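To illustrate the second point, a process-monitoring check could flag any referral where results have not come back within an expected window. This is a minimal sketch, assuming a hypothetical record structure and an arbitrary 28-day waiting threshold, not any real practice-management system:

```python
from datetime import date, timedelta

# Hypothetical referral records: when tests were requested and
# when (if ever) the results came back from the hospital.
referrals = [
    {"patient": "P001", "requested": date(2011, 1, 10), "result_received": None},
    {"patient": "P002", "requested": date(2011, 2, 1), "result_received": date(2011, 2, 20)},
]

def stalled_referrals(records, today, max_wait=timedelta(days=28)):
    """Return referrals with no result after the expected waiting period."""
    return [
        r for r in records
        if r["result_received"] is None and today - r["requested"] > max_wait
    ]

for r in stalled_referrals(referrals, today=date(2011, 3, 15)):
    print(f"Chase up patient {r['patient']}: no test result received")
```

The point is not the code itself but the control it represents: the process owner, not the patient, is responsible for noticing that an expected step never happened.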

A simple error in data input, combined with the failure or absence of error-detection processes, has been found by the UK High Court to be the root cause of the death of a young mother and the orphaning of an 8-year-old boy. This is a SIGNIFICANT legal precedent.

The case also raises Data Protection Act compliance issues for the GP practice, as sensitive personal data about a (now deceased) patient was sent to the wrong address.

RELATED POST: Daragh O Brien has a related post on his personal blog from 2009 about how Information Quality is getting some interesting legal support in the English legal system.

An air travel trainwreck near-miss

From today’s Irish Independent comes a story which clearly shows the impact that poor quality information can have on a process or an outcome. The tale serves to highlight the fact that information entered as part of a process can feed into other processes and result in a less than desirable outcome.

On 20th March 2009, poor quality information nearly resulted in the worst air traffic disaster in Australian history when an Airbus A340-500 narrowly avoided crashing into a residential area of Melbourne on take-off. The aircraft sustained damage to its tail and also damaged various lights and other systems on the runway at Melbourne airport.

The provisional report of the Australian air crash investigation found that the root cause of the incident was the entry of an incorrect take-off weight of 262 tonnes, whereas the aircraft actually weighed 362 tonnes. This affected the calculations of the airspeed required for take-off and the thrust needed to reach that speed.

The end result was that the plane failed to take off correctly and gain height as required; the tail of the plane struck the runway and the aircraft then ploughed through a lighting array and airport instruments at the end of the runway.

It is interesting, from an Information Quality perspective, to read the areas that the Accident Investigation team are looking at for further investigation (I’ve put the ones of most interest in Bold text, and the full report is available here):

  • human performance and organisational risk controls, including:
    • data entry
    • a review of similar accidents and incidents
    • organisational risk controls
    • systems and processes relating to performance calculations
  • computer-based flight performance planning, including:
    • the effectiveness of the human interface of computer based planning tools.
  • reduced power takeoffs, including:
    • the risks associated with reduced power takeoffs and how they are managed
    • crew ability to reconcile aircraft performance with required takeoff performance, and the associated decision making of the flight crew
    • preventative methods, especially technological advancements.

The report by the Australian authorities also refers to some of the mitigations that the aircraft operator was considering to help prevent a recurrence of this risk:

  • human factors – including review of current pre-departure, runway performance calculation and cross-check procedures, to determine if additional enhancement is feasible and desirable, with particular regard to error tolerance and human factors issues.
  • training – including review of the initial and recurrent training in relation to mixed fleet flying and human factors.
  • fleet technical and procedures – including introduction of a performance calculation and verification system which will protect against single data source entry error by allowing at least two independent calculations.
  • hardware and software technology – including liaising with technology providers regarding systems for detecting abnormal take-off performance.
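The third mitigation, two independent calculations protecting against a single data-entry error, can be sketched as a simple cross-check: the take-off weight is keyed in twice through independent paths, and the performance calculation only proceeds if the two entries agree. The function name, tolerance, and figures below are illustrative assumptions (the 262/362 tonne values come from the incident report), not real airline performance software:

```python
def confirmed_takeoff_weight(entry_a_tonnes, entry_b_tonnes, tolerance_tonnes=1.0):
    """Accept a take-off weight only when two independently entered values
    agree within a tolerance; otherwise refuse to proceed with the
    performance calculation."""
    if abs(entry_a_tonnes - entry_b_tonnes) > tolerance_tonnes:
        raise ValueError(
            f"Weight mismatch: {entry_a_tonnes}t vs {entry_b_tonnes}t - re-enter both values"
        )
    return (entry_a_tonnes + entry_b_tonnes) / 2

# A single mis-keyed digit (262 instead of 362) is caught before any
# airspeed or thrust figures can be derived from it:
try:
    confirmed_takeoff_weight(262, 362)
except ValueError as err:
    print(err)

# Two matching independent entries pass the cross-check:
weight = confirmed_takeoff_weight(362, 362)
```

The design point is that the check does not try to decide which entry is "right" - it simply refuses to let a lone, unverified value flow downstream into dependent calculations.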

For those of us familiar with Information Quality practices, this is an impressive haul of information quality management improvement actions focussed on ensuring that this type of near-miss never happens again. It is doubly interesting that causes of poor quality information (e.g. “human factors”, risk controls) feature in the items subject to further investigation, and that common approaches to resolving or preventing information quality problems (process enhancement, improved checking of accuracy and validity, assuring consistency with other facts or measures) make up 75% of the action plan put forward by the operator.