
Organ Donor Records Mix-up

The Sunday Times reported in April 2010 that NHS Blood and Transplant, which runs the UK organ donor register, had written to new donors the previous year confirming their consent details. After respondents complained that the information was incorrect, it was discovered that 800,000 individuals’ details had been recorded incorrectly. 45 of those affected have since died and their incorrectly recorded wishes carried out!

“The mistake occurred in 1999 when a coding error on driving licences wrongly specifying donors’ wishes was transferred to the organ registry.”

400,000 of the affected records have already been corrected; the remaining 400,000 people will be contacted soon and asked to update their consent.
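The quoted explanation is terse, but the mechanism is a familiar one: option codes captured on one form were interpreted against a different code table downstream. Below is a minimal sketch of that failure mode in Python; both code tables are invented for illustration, since the actual licence and register codings were not published in the article.

```python
# Hypothetical sketch of how a code-mapping mismatch corrupts consent data.
# If the sending system's option codes don't mean the same thing as the
# receiving register's codes, every transferred record is silently wrong.
# Both code tables below are invented for illustration.

LICENCE_OPTIONS = {1: "kidneys only", 2: "all organs", 3: "corneas only"}
REGISTER_OPTIONS = {1: "all organs", 2: "kidneys only", 3: "corneas only"}

def transfer_consent(licence_code: int) -> str:
    """The flawed load: the raw code is copied across unmapped and
    interpreted against the register's (different) code table."""
    return REGISTER_OPTIONS[licence_code]

for code, intended in LICENCE_OPTIONS.items():
    recorded = transfer_consent(code)
    status = "OK" if recorded == intended else "WRONG"
    print(f"Donor chose '{intended}'; register records '{recorded}' [{status}]")
```

A simple round-trip check at load time (decode, re-encode, compare against the source) would have flagged every corrupted record before a single confirmation letter went out.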

Police Untelligence

From The Register comes this wonderful example of the problems that can arise when data is used for unintended purposes, resulting in poor quality outcomes for all involved.

The NYPD have been regularly raiding the home of an elderly Brooklyn couple. They’ve been hit 50 times over the past four years, which might mark them out as leading crime kingpins but for the fact that their address wound up in police data used to test notification systems. The Reg tags this as “a glitch in one of the department’s computers”, but Information Quality trainwreck observers will immediately recognise that the problem isn’t with the technology but with the information.

The trainwreck is compounded by two facts which emerge in the article:

  1. NYPD believed that they had removed the couple’s address from the system back in 2007, but this appears not to have been the case (or perhaps it was restored from a backup)
  2. The solution the NYPD have now implemented is to put a flag on the couple’s address advising officers NOT to respond to calls to that address.

The latter “solution” echoes many of the pitfalls information quality professionals encounter on a daily basis where a “quick fix” is put in to address a specific symptom which then triggers (as el Reg puts it) “the law of unintended consequences”.  To cut through implication and suggestion, let’s pose the question – what happens if there is an actual incident at this couple’s home which requires a police response?

What might the alternative approaches or solutions be to this?

(And are the NYPD in discussions with the Slovak Border police about the perils of using live data or live subjects for testing?)

No smoke without ire – Life Insurance Overcharging in Ireland

RTE News in Ireland ran a story last night on overcharging by Irish life assurance companies arising from the misclassification of customers as smokers. (A link to the item is here, but you may not be able to access it if you are not in Ireland.)

On foot of two complaints, the Irish Financial Services Ombudsman investigated two companies and identified up to 500 affected customers. However, more customers may be affected in other companies.

The two companies affected blamed “computer and administrative errors” for the misclassification and the resulting overcharging. In other words, an Information Quality problem.

The financial impact for the two customers who complained was between €1,100 and €2,500 on policies of different lengths. Taking a crude average of €1,800 per case, the 500 cases the Ombudsman suspects in the two companies he examined would put the total cost of refunds in the order of €900,000 (500 × €1,800).

The cost of the investigation of possible errors and the correction of records would, of course, be on top of this amount.

The Financial Services Ombudsman has asked the Irish Financial Services Regulator to conduct an industry-wide audit of all life assurance companies to identify further instances of this kind of overcharging based on misclassification of customers. As a result, the total amount of refunds will inevitably rise, as will the cost to the industry of inaccurate information.

The news report makes no mention of the potential Data Protection issues arising here under Irish Data Protection law, which does require information to be kept accurate and up to date. But the Irish Financial Ombudsman used to be the Data Protection Commissioner, so I am sure he has flagged that to the affected institutions himself.

An air travel trainwreck near-miss

From today’s Irish Independent comes a story which clearly shows the impact that poor quality information can have on a process or an outcome. The tale serves to highlight the fact that information entered as part of a process can feed into other processes and result in a less than desirable outcome.

On 20th March 2009, poor quality information nearly resulted in the worst air traffic disaster in Australian history when an Airbus A340-500 narrowly avoided crashing on take-off into a residential area of Melbourne. The aircraft sustained damage to its tail and also damaged various lights and other systems on the runway at Melbourne Airport.

The provisional report of the Australian air crash investigation found that the root cause of the incident was the entry of an incorrect aircraft weight of 262 tonnes, whereas the plane actually weighed 362 tonnes. This affected the calculations of the airspeed required for take-off and of the thrust needed to reach that speed.

The end result was that the plane failed to lift off and gain height as required; the tail of the plane struck the runway and then ploughed through a lighting array and airport instruments at the end of the runway.

It is interesting, from an Information Quality perspective, to read the areas that the Accident Investigation team are looking at for further investigation (I’ve put the ones of most interest in Bold text, and the full report is available here):

  • human performance and organisational risk controls, including:
    • data entry
    • a review of similar accidents and incidents
    • organisational risk controls
    • systems and processes relating to performance calculations
  • computer-based flight performance planning, including:
    • the effectiveness of the human interface of computer-based planning tools.
  • reduced power takeoffs, including:
    • the risks associated with reduced power takeoffs and how they are managed
    • crew ability to reconcile aircraft performance with required takeoff performance, and the associated decision making of the flight crew
    • preventative methods, especially technological advancements.

The report by the Australian authorities also refers to some of the mitigations that the aircraft operator was considering to help prevent a recurrence of this risk:

  • human factors – including review of current pre-departure, runway performance calculation and cross-check procedures, to determine if additional enhancement is feasible and desirable, with particular regard to error tolerance and human factors issues.
  • training – including review of the initial and recurrent training in relation to mixed fleet flying and human factors.
  • fleet technical and procedures – including introduction of a performance calculation and verification system which will protect against single data source entry error by allowing at least two independent calculations.
  • hardware and software technology – including liaising with technology providers regarding systems for detecting abnormal take-off performance.

For those of us familiar with Information Quality practices, this is an impressive haul of information quality management improvement actions focussed on ensuring that this type of near-miss never happens again. It is doubly interesting that causes of poor quality information (e.g. human factors, risk controls) feature in the items subject to further investigation, and that common approaches to the resolution or prevention of information quality problems (process enhancement, improved checking of accuracy/validity, assuring consistency with other facts or measures) make up 75% of the action plan put forward by the operator.
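The operator’s third mitigation – requiring at least two independent performance calculations – is a classic information quality control: never let a safety-critical figure rest on a single keying event. Here is a minimal sketch of what such a cross-check might look like, in Python; the function name, tolerance and weight bounds are illustrative assumptions, not taken from the report or any real flight-planning system.

```python
# Hypothetical sketch: cross-checking independently entered take-off
# weights before they feed a performance calculation. The bounds and
# tolerance below are invented for illustration.

OEW_TONNES = 170.0    # assumed plausible empty-weight floor
MTOW_TONNES = 372.0   # assumed maximum take-off weight ceiling

def validate_takeoff_weight(entry_a: float, entry_b: float,
                            tolerance_tonnes: float = 1.0) -> float:
    """Accept a take-off weight only if two independent entries agree
    and the value falls inside plausible bounds."""
    if abs(entry_a - entry_b) > tolerance_tonnes:
        raise ValueError(
            f"Independent entries disagree: {entry_a} t vs {entry_b} t; re-enter both")
    weight = (entry_a + entry_b) / 2
    if not OEW_TONNES <= weight <= MTOW_TONNES:
        raise ValueError(f"{weight} t is outside the plausible range; check source data")
    return weight

# The single-source error in the incident: 262 t keyed where 362 t was intended.
try:
    validate_takeoff_weight(262.0, 362.0)
except ValueError as err:
    print(f"Cross-check caught the error: {err}")
```

A second, independent entry of the correct figure would have tripped the disagreement check before the wrong weight ever reached the airspeed and thrust calculations.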

No child left behind (except for those that are)

Steve Sarsfield shares with us this classic tale of IQ Trainwreck-ry from Atlanta, Georgia.

An analysis of student enrollment and transfer data carried out by the Atlanta Journal-Constitution reveals a shocking number of students who appear to be dropping out of school and off the radar in Georgia.  This suggests that the dropout rate may be higher and the graduation rate lower than previously reported.

Last year, school staff marked more than 25,000 students as transferring to other Georgia public schools, but no school reported them as transferring in, the AJC’s analysis of enrollment data shows.

Analysis carried out by the State agency responsible was able to track down some of the missing students. But poor quality information makes any further tracking problematic if not impossible.

That search located 7,100 of the missing transfers in Georgia schools, state education spokesman Dana Tofig wrote in an e-mailed statement. The state does not know where an additional 19,500 went, but believes other coding errors occurred, he wrote. Some are dropouts but others are not, he said.
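Mechanically, the AJC’s analysis is a reconciliation exercise: match every student coded as transferring out against a corresponding transfer-in record somewhere else in the state. Below is a minimal sketch of that check in Python, with invented record layouts and student identifiers.

```python
# Hypothetical sketch of the reconciliation described above: find students
# coded as transferring OUT of one school with no matching transfer-IN
# record anywhere else. Record layout and identifiers are invented.

transfers_out = [
    {"student_id": "GA-001", "from_school": "Northside High"},
    {"student_id": "GA-002", "from_school": "Northside High"},
    {"student_id": "GA-003", "from_school": "Eastlake Middle"},
]
transfers_in = [
    {"student_id": "GA-001", "to_school": "Westbrook High"},
]

received = {rec["student_id"] for rec in transfers_in}
unaccounted = [rec for rec in transfers_out if rec["student_id"] not in received]

for rec in unaccounted:
    print(f"{rec['student_id']} left {rec['from_school']} but never arrived anywhere")
```

Note that the match stands or falls on the student identifier being genuinely unique and stable; as the article goes on to point out, students are often issued a new identifier at their new school, which is precisely what breaks this kind of matching.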

In a comment which should warm the hearts of Information Quality professionals everywhere, Cathy Henson, a Georgia State education law professor and former state board of education chairwoman, says:

“Garbage in, garbage out.  We’re never going to solve our problems unless we have good data to drive our decisions.”

She might be interested in reading more on just that topic in Tom Redman’s book “Data Driven”.

Dropout rates constitute a significant IQ Trainwreck because:

  • Children who should be helped to better education aren’t. (They get left behind)
  • Schools are measured against Federal Standards, including drop out rates, which can affect funding
  • Political and business leaders often rely on these statistics for decision making, publicity, and campaigning.
  • Companies consider the drop out rate when planning to locate in Georgia or elsewhere as it is an indicator of future skills pools in the area.

The article quotes Bob Wise with a comment that sums up the impact of masking dropouts by miscoding (by accident or design):

“Entering rosy data won’t get you a bed of roses,” Wise said. “In a state like Georgia that is increasingly technologically oriented, it will get you a group of people that won’t be able to function meaningfully in the workforce.”

The article goes on to highlight yet more knock-on impacts from the crummy data and poor quality information that the study showed:

  • Federal standard formulae for the calculation of dropouts won’t give an accurate figure if students are miscoded as “transfers” from one school to another.
  • A much-touted unique student identifier has been found to be less than unique, with students often being given a new identifier in their new school
  • Inconsistencies exist in other data, for example students who were reported “removed for non-attendance” but had zero absent days recorded against them.

Given the impact on students, the implications for school rankings and funding, the costs of correcting errors, and the scale and extent of problems uncovered, this counts as a classic IQTrainwreck.

The terror of the Terrorist Watch list

A source who wishes to remain anonymous sent us this link to a story on Wired.com about the state of the US Government’s terrorist watch list.

The many and varied problems with the watch list have been covered on this blog before.

However, the reason this most recent story constitutes an IQTrainwreck is that, despite undertakings to improve quality, the exact opposite seems to have happened, given:

  • The growth in the number of entries on the list
  • The failures on the part of the FBI to properly maintain and update information in a timely manner.

According to the report, 15% of active terrorism suspects under investigation were not added to the watch list, and 72% of people cleared in closed investigations were not removed.

The report from the US Inspector General said that they “believe that the FBI’s failure to consistently nominate subjects of international and domestic terrorism investigations to the terrorist watchlist could pose a risk to national security.”

That quote sums up why this is an IQTrainwreck.


Double Debits – directly. (Another banking IQTrainwreck)

Courtesy of our Irish colleagues over on Tuppenceworth.ie comes yet another tale of poor quality information in financial services. Although this time it is at the lower end of the scale, at least on a per-customer basis, the impacts on a customer are still irksome and problematic. And the solution the bank has put in place is a classic example of why inspecting defects out of a process is never an exact or value-adding science.

It seems that Bank of Ireland has recently introduced some new software. Unfortunately, a bug in the software has resulted in certain transactions (deductions) being posted multiple times to accounts, leaving cash-strapped Irish people more strapped for cash than they’d expected.

Simon McGarr, (one of the authors over at Tuppenceworth) sums up the story and the reason why this is an IQTrainwreck:

I spotted a double charge on my account, for a pretty significant sum of money (is there any other kind?).

When I rang up to query it, I was told Bank of Ireland have changed their computer systems recently (Two weeks or so).

As a result, some transactions are being applied to accounts twice if they were processed through Laser [a debit card system in Ireland — ed.], or if they were a Pass machine [what the Irish call ATMs –ed.] withdrawal.

They say that if you spot the double charge, and ring them up to complain, they’ll send an email to their programmers to reverse the second charge.

I suggested to the polite customer services person that the bank might want to warn their clients to be alert for these double charges, as they could suffer additional charges (from appearing to breach their overdraft limits, for example) unless they spotted the bank’s mistake.

(Emphasis is added by this author)

Simon goes on to add (in a comment) that he has been without the benefit of his hard earned cash for 10 days (and counting).
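The bank’s “ring us if you spot it” remedy puts the burden of defect detection on the customer. A duplicate-posting check on the bank’s side is not hard to sketch: group debits by account, amount and reference and flag any that recur within a short window. The field names, sample data and one-day window below are invented for illustration.

```python
# Hypothetical sketch of a duplicate-posting check: flag postings on the
# same account with identical amount and reference close together in time.

from datetime import datetime, timedelta

postings = [
    {"account": "1234", "amount": -84.50, "ref": "LASER 9911",
     "posted": datetime(2009, 3, 2, 10, 15)},
    {"account": "1234", "amount": -84.50, "ref": "LASER 9911",
     "posted": datetime(2009, 3, 2, 10, 15)},
    {"account": "1234", "amount": -20.00, "ref": "ATM WITHDRAWAL",
     "posted": datetime(2009, 3, 3, 9, 0)},
]

def find_suspect_duplicates(rows, window=timedelta(days=1)):
    """Group postings by (account, amount, ref); flag any group where two
    members land inside the time window."""
    groups = {}
    for row in rows:
        key = (row["account"], row["amount"], row["ref"])
        groups.setdefault(key, []).append(row)
    suspects = []
    for key, members in groups.items():
        members.sort(key=lambda r: r["posted"])
        for first, second in zip(members, members[1:]):
            if second["posted"] - first["posted"] <= window:
                suspects.append(key)
                break
    return suspects

print(find_suspect_duplicates(postings))   # [('1234', -84.5, 'LASER 9911')]
```

Running a check like this over each day’s postings, paired with the customer warning Simon suggested, would catch the defect at source instead of waiting for customers to audit their own statements.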


Apple App Store IQ Trainwreck

It appears that Apple iPhone App developers are having difficulty getting paid at the moment, according to this story from The Register. (Gizmodo.com carries the story here; Techcrunch.com has it here.)

According to The Register:

A backlog in Apple’s payment processing system has left some iPhone developers still waiting for February’s payments, leaving some at risk of bankruptcy and considering legal action against the lads in Cupertino.

Desperate developers have been told to stop e-mailing the iTunes finance system and to wait patiently for their money – in some cases tens of thousands of dollars – while Apple sorts things out.

It would appear from comments and coverage elsewhere that this problem has been occurring for some developers for longer – since late 2008, according to the TechCrunch article and this article from eequalsmcsquare.com (an iPhone community site).

The article goes on to explain that:

According to postings on the iPhone developer community Apple has been blaming bank errors and processing problems for the delays. Complainants are being told that payments have been made, that bank errors have caused rejections[.]

One anonymous commenter on The Register story attempts to shed some light on this with an explanation that, from an Information Quality point of view, sounds plausible:

  • Two American banks merged (was it Washington Mutual and Chase?) and the SWIFT code for the customers of one had to change. The bank didn’t tell the customers and Apple had the payments refused. Apple seem to be manually changing the codes in the payment system, but that’s separate from the web interface where devs enter their bank details.
  • A lot of American banks don’t have SWIFT codes at all. Royalties from e.g. EU sales are sent from Apple (Luxembourg) S.A.. The chances of this money arriving at Bank Of Smalltown seem slim at best.

What we have here, it seems, is a failure to manage master data correctly, and also a glaring case of potentially incomplete data, which would impact the ability of funds to flow freely from the App Store to the developers.

The anonymous commenter’s explanation would seem to hold water, because Apple are claiming that “bank errors have caused rejections”. In my experience of electronic funds transfer processes, one of the reasons a funds transfer fails is that the data used is incorrect, inconsistent or inaccurate – exactly what would happen if the SWIFT codes of Bank A had to change (or if Bank A and Bank B had to have new codes issued).
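This is also the kind of failure that cheap, early validation can surface. Below is a minimal sketch of a pre-submission check on SWIFT/BIC codes, based on the ISO 9362 layout (4-letter bank code, 2-letter country code, 2-character location code, optional 3-character branch code); the sample values are illustrative.

```python
# Hypothetical sketch: checking the shape of a SWIFT/BIC code before a
# payment run. A format check cannot prove a code is still current (a
# merged bank's old BIC may be well-formed yet retired), but it catches
# missing or malformed values before any payment is submitted.

import re
from typing import Optional

BIC_PATTERN = re.compile(r"^[A-Z]{4}[A-Z]{2}[A-Z0-9]{2}(?:[A-Z0-9]{3})?$")

def check_bic(bic: Optional[str]) -> str:
    if not bic:
        return "missing: payment will be rejected downstream"
    if not BIC_PATTERN.match(bic.strip().upper()):
        return "malformed: reject before submission"
    return "well-formed (currency still needs checking against reference data)"

for candidate in ["DNBANOKK", "DNBANOKKXXX", "", "12345678"]:
    print(f"{candidate!r}: {check_bic(candidate)}")
```

As the comment about merged banks shows, a well-formed code can still be out of date; catching that requires validation against current bank reference data, not just a regular expression.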

However, some commenters based in the EU have reported that they have given Apple updated bank details and are still awaiting payment, which suggests that another root cause may yet come to light.

“Apple still owes me more than $7,500 since September 2008 for US and World regions. I supplied them with a new SWIFT code and a intermediary bank they could use last month, but still nothing. Sent them tons of emails but I never got to know what is really wrong/faulty so I just tried to give them another SWIFT code that DNB (Biggest bank in Norway) uses. All other region payments have been OK.” (quote from comment featured on this article)

So, for the potential impact on iPhone app developers’ cash flow, the PR impact on one of Apple’s flagship services, and the failure to manage the accuracy, completeness and consistency of key master data for a process, this counts as an IQ Trainwreck.

These are the IQ trainwrecks in your neighbourhood

Stumbled upon this lovely pictorial IQTrainwreck today on Twitter. Thanks to Angela Hall (@sasbi) for taking the time to snap the shot and tweet it and for giving us permission to use it here. As Angela says on her Twitpic tweet:

Data quality issue in the neighborhood? How many street signs (with diff names) are needed? Hmmmm

In the words of Bob Dylan: “How many roads must a man walk down?”

Google Health – Dead on Arrival due to duff data quality?

It would seem that poor quality information has caused some decidedly embarrassing and potentially risky outcomes in Google’s new online Patient Health Record service. The story has featured (amongst other places):

  • Here (Boston.com, the website of the Boston Globe)
  • Here (InformationWeek.com’s Global CIO Blog)

‘Patient Zero’ for this story was this blog post by “e-Patient Dave” over at e-patient.net, in which he shared his experiences of migrating his personal health records over to Google Health. To say that the quality of the information that was transferred was poor is an understatement. Amongst other things:

Yes, ladies and germs, it transmitted everything I’ve ever had. With almost no dates attached.

So, to someone looking at e-Patient Dave’s medical records in Google Health, it would appear that his middle name might be Lucky, as he apparently has every ailment he’s ever had… all at the same time.

Not only that, but for the items where dates did come across in the migration, there were factual errors in the data. For example, the date given for e-Patient Dave’s cancer diagnosis was out by four months. To cap things off, e-Patient Dave tells us that:

The really fun stuff, though, is that some of the conditions transmitted are things I’ve never had: aortic aneurysm and mets to the brain or spine.

The root cause that e-Patient Dave uncovered by talking to some doctors was that the migration process transferred billing code data rather than actual diagnostic data to Google Health. As readers of Larry English’s Improving Data Warehouse and Business Information Quality will know, the quality of that data isn’t always *ahem* good enough. As English tells us:

An insurance company discovered from its data warehouse, newly loaded with claims data, that 80% of the claims from one region were paid for a claim with a medical diagnosis code of  “broken leg”. Was that region a rough neighborhood? No, claims processors were measured on how fast they paid claims, rather than for accurate claim information. Only needing a “valid diagnosis code” to pay a claim, they frequently allowed the system to default to a value of “broken leg”.

(Historical note: while this example features in Larry’s book, it originally featured in an article he wrote for DM-Review (now Information-Management.com) back in 1996.)
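English’s anecdote also points at the detective control that catches this class of problem: profile the value distribution of a field and treat an implausibly dominant value as a probable keyed-in default. A minimal sketch in Python, with invented data and an arbitrary 50% threshold:

```python
# Hypothetical sketch of simple value-frequency profiling: when one code
# dominates a column far beyond any plausible real-world distribution, it
# is usually a default rather than reality. Data and threshold are invented.

from collections import Counter

claim_diagnosis_codes = (
    ["broken leg"] * 80 + ["asthma"] * 7 + ["hypertension"] * 8 + ["diabetes"] * 5
)

counts = Counter(claim_diagnosis_codes)
total = sum(counts.values())

for code, n in counts.most_common():
    share = n / total
    flag = "  <-- suspicious default?" if share > 0.5 else ""
    print(f"{code:15s} {share:6.1%}{flag}")
```

Against a distribution like the insurer’s, the 80% “broken leg” share leaps straight out of the profile.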

“e-Patient Dave” adds another wrinkle to this story:

[i]f a doc needs to bill insurance for something and the list of billing codes doesn’t happen to include exactly what your condition is, they cram it into something else so the stupid system will accept it.) (And, btw, everyone in the business is apparently accustomed to the system being stupid, so it’s no surprise that nobody can tell whether things are making any sense: nobody counts on the data to be meaningful in the first place.)

To cap it all off, a lot of the key data that e-Patient Dave expected to see transferred wasn’t there, and what was transferred was either inaccurate or horridly incomplete:

  • what they transmitted for diagnoses was actually billing codes
  • the one item of medication data they sent was correct, but it was only my current BP med. (Which, btw, Google Health said had an urgent conflict with my two-years-ago potassium condition, which had been sent without a date). It sent no medication history, not even the fact that I’d had four weeks of high dosage Interleukin-2, which just MIGHT be useful to have in my personal health record, eh?
  • the allergies data did NOT include the one thing I must not ever, ever violate: no steroids ever again (e.g. cortisone) (they suppress the immune system), because it’ll interfere with the immune treatment that saved my life and is still active within me. (I am well, but my type of cancer normally recurs.)
So, it would seem that information quality problems that have been documented in the information quality literature for over a decade are at the root of an embarrassing information quality trainwreck that could (potentially) have an effect on how a patient might be treated at a new hospital – considering they appear to have all these ailments at once while remaining asymptomatic. To cap it all off, failures in the mapping of critical data resulted in an electronic patient record that was dangerously inaccurate and incomplete.
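Much of what went wrong here would have been caught by a quality gate between the source system and Google Health. Below is a minimal sketch of such a gate in Python: hold back any condition record that lacks required fields or is sourced from billing rather than diagnostic data. The field names and rules are invented for illustration.

```python
# Hypothetical sketch of a migration quality gate: refuse to publish any
# condition record with missing required fields, or one derived from
# billing codes rather than a diagnosis. Fields and rules are invented.

REQUIRED_FIELDS = ("condition", "onset_date", "source")

records = [
    {"condition": "aortic aneurysm", "onset_date": None, "source": "billing"},
    {"condition": "hypertension", "onset_date": "2007-01", "source": "diagnosis"},
]

def migration_gate(record: dict) -> str:
    missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
    if missing:
        return f"HOLD for review: missing {', '.join(missing)}"
    if record["source"] != "diagnosis":
        return "HOLD for review: sourced from billing data, not a diagnosis"
    return "OK to publish"

for rec in records:
    print(f"{rec['condition']}: {migration_gate(rec)}")
```

A gate like this would have held back both the undated conditions and the “ailments” that were really billing artefacts, instead of publishing them into a live patient record.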

[Image: Hugh Laurie as Dr. Gregory House]

What would Dr. Gregory House make of e-Patient Dave’s notes?

e-Patient Dave’s blog post makes interesting reading (and, at 2,800+ words, covers a lot of ground). He details a number of other reasons why quality problems exist in electronic patient records:

  • Nobody’s in the habit of actually fixing errors (he cites an x-ray record that shows him to be female).
  • Processes for data integrity in healthcare are largely absent, by ordinary business standards; I suspect there are few, if any, processes in place to prevent wrong data from entering the system, or to track down the cause when things do go awry.
  • Data doesn’t seem to get transferred consistently from paper forms to electronic records (specifically, e-Patient Dave’s requirement not to have steroids).
  • There is a lack of sufficient edit controls and governance over data and patient records, including audit trails.

e-Patient Dave is at pains to make it clear that the problem isn’t with Google Health. The problem is with the data that was migrated across to Google Health from his existing electronic patient record.

Google Health – DOA after an IQ Trainwreck?