Dublin bank bungled foreign exchange transaction (in 2001)

Following on from this morning’s story about the New Zealand overdraft fiasco, a few further cases of Information Quality trainwrecks in Financial services have come to our attention.

This first one is from 2001 and was found on the BBC.co.uk website, with further reporting from The Telegraph.

Bank bungles pesetas/euros

Back in 2001, David Hickey was emigrating from Ireland to Spain. He asked his bank to change IR£1500 into pesetas, but an error at the bank meant the amount was transferred in euros, not pesetas. IR£1500 was worth approximately 300,000 pesetas; Mr Hickey received EUR300,000 into his account.
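As an aside, the failure mode here is a classic unit error: the number survived the journey, but the currency didn't travel with it. A minimal Python sketch (entirely hypothetical, and nothing like any real banking system) of a value type that refuses to let an amount silently change currency:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    """An amount that carries its ISO 4217 currency code with it."""
    amount: float
    currency: str  # e.g. "ESP" (pesetas) or "EUR"

def transfer(source: Money, expected_currency: str) -> Money:
    """Refuse to process a payment whose currency doesn't match what was requested."""
    if source.currency != expected_currency:
        raise ValueError(
            f"currency mismatch: got {source.currency}, expected {expected_currency}"
        )
    return source
```

With a check like this in the pipeline, sending 300,000 tagged EUR into a process expecting ESP raises an error instead of producing a EUR300,000 windfall.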

The bank eventually had to take legal action in Spain to freeze Mr Hickey’s accounts with a view to getting the money back.

At the time, the bank declined to comment further to the media on the matter and the Irish police were of the view that no criminal offence had taken place because of the ‘technical error’ (i.e. IQTrainwreck) involved.

Other Trainwrecks

We’re researching the other IQTrainwrecks that came to light this morning on this theme, not least to make sure we haven’t covered them here already. Expect further updates in the coming days.

Antipodean Bankers Sheepish over Overdraft Bungle (Again)

New Zealand couple do a bunk with bungled overdraft funds

Courtesy of @Firstlink and @DataQualityPro on Twitter, and hot off the presses at BBC News, comes this case of an IQTrainwreck in New Zealand.

Police are hunting for a pair of nefarious desperadoes who took advantage of a poor unsuspecting bank, which deposited NZ$10 million in their account instead of the NZ$10,000 they had requested. That’s a difference of three zeroes – a factor of 1,000.

The couple, it seems, have withdrawn an undisclosed amount of the money and appear to have left the country. They are being pursued by the authorities through Interpol, and it seems the Australian bank that gave them the money by accident are eager to have it back:

Westpac media relations manager Craig Dowling said the bank was “pursuing vigorous criminal and civil action to recover a sum of money stolen”.

Now, this is not the first time that something like this has happened in the Antipodes, and the last time it was Westpac who were over-generous with their credit.

Oops we did it again before

Back in September 2007 we carried the story of New South Wales businessman Victor Ollis who, as a result of an error undetected by Westpac, had benefited to the tune of AUD$11 million. The quote below is from the original story on Australia’s news.com.au:

Mr Ollis had an automatic transfer facility with the bank, which topped up his business account using funds from his personal account.

The transfers should have been stopped after his personal account was overdrawn in February 2004, the court heard yesterday.

But due to an error at Westpac, his account continued to be replenished – only with money “from the bank’s own pocket”.

Between June and December 2005, Westpac honoured cheques totalling about $11 million written by Mr Ollis.

Westpac sued Mr Ollis and were awarded their money back plus interest. However, as Mr Ollis was apparently terminally ill at the time, there is a chance that they never got their money back.

A proud tradition in banking

However, Westpac are not alone in the pantheon of banking information quality trainwrecks. (We won’t talk about the current Global Financial Crisis and how some of its roots can be traced back to poor quality information… not yet anyway.)

My personal favourite from our archive of Banking Information Quality Trainwrecks has to be this one though…

  • From Australia in December 2007 – “Cat Gets Credit Card”. In this case it wasn’t Westpac but the Bank of Queensland who goofed.

But we’re only scratching the surface

Here at IQTrainwrecks.com we know we are only scratching the surface of these issues. Please contact us with your examples of Information Quality Trainwrecks (particularly in banking) so we can add them to the Roll of Honour.

Kid from The Sixth Sense works on Economic Stimulus – he stimulates dead people

The bad joke in the headline aside, this story (which comes to us via Initiate Systems on Twitter, who linked to it from WBALTV in Baltimore USA) reveals a common type of IQ Trainwreck – the “sending things to dead people” problem.

As we know, the US Government has been sending out Stimulus Cheques (or Checks, if you are in the US) to people to help stimulate consumer spending in the US economy. Kind of like a defibrillator for consumer confidence.

Initiate Systems picked up on the story of a cheque that was sent to Mrs Rose Hagner. Her son found it in the mail and was a bit surprised when he saw it. After all, he’s 83 years old and his mother has been dead for over 40 years. Social Security officials gave the following explanation:

Of the about 52 million checks that have been mailed out, about 10,000 of those have been sent to people who are deceased.

The agency blames the error on the strict mid-June deadline of mailing out all of the checks, which didn’t leave officials much time to clean up all of their records.

Of course, one might ask why this was such a challenge when the issue raised its head in 2008 as well, when a cheque mailed to a man in Georgia was made out to a Mr George A. Coker DECD (an abbreviation for “deceased”). The story, which was picked up by SmartPros.com at the time (and for the life of us we can’t see how it slipped under our radar), describes the situation as follows:

Richard Hicks, a Fulton County magistrate, says the $600 check arrived in Roswell this week and was made out to George A. Coker DECD, which, of course, stands for “deceased.”

Coker obviously won’t be able to do his bit to spur the consumer economy, which has Hicks puzzled and somewhat miffed.

“There’s a $9 trillion national debt and our government’s giving away money to dead people,” he told The Atlanta Journal-Constitution. “As a taxpayer, it offends the hell out of me.”

The Internal Revenue Service in Atlanta told the newspaper it didn’t know how many other DECD checks have been written nationwide since the 2007 returns are still being processed.

So, the issue has existed since at least 2008 and relates to data being used for a new purpose (sending cheques on a blanket basis). It would seem the solution that is being attempted is to inspect the errors out of the cheque batches before they are sent by the June dead-line. A better solution might be to:

  1. Apply some business rules to the process – for example, “If recipient is older than 120 then verify” (the oldest person in the world is currently 115) – or parse the name string to find social security records ending in “DECD” or any other standard abbreviation for “deceased”.
  2. Embed these checks (not cheques) into the process for managing the master data set rather than applying them at ‘point of use’.
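By way of illustration, the first rule might look something like this rough Python sketch (the record layout, field names and thresholds here are invented for the example):

```python
import re
from datetime import date

# Hypothetical thresholds: the oldest verified living person at the time was 115.
MAX_PLAUSIBLE_AGE = 120

# Match "DECD" and common variants at the end of the name string.
DECEASED_SUFFIX = re.compile(r"\b(DECD|DEC'D|DECEASED)\s*$", re.IGNORECASE)

def flag_for_review(name: str, birth_date: date, today: date) -> bool:
    """Return True if a record should be verified before a cheque is cut."""
    age = (today - birth_date).days // 365
    if age > MAX_PLAUSIBLE_AGE:
        return True  # implausibly old: likely a deceased or erroneous record
    if DECEASED_SUFFIX.search(name):
        return True  # the name string itself carries a 'deceased' marker
    return False
```

Run against the master data set before the mailing, a rule like this would have caught both the centenarian-plus records and the “George A. Coker DECD” case.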

Building quality into a process, and into the information produced by and consumed by a process, reduces the risk of embarrassing information quality errors. Cleaning and correcting errors or exceptions as a bulk batch process is not as value-adding as actually improving your processes to prevent poor quality information being created or being acted on.

Why is this an IQ Trainwreck?

Well, the volumes of records affected and the actual cost are quite low, so one could argue that the information is “close enough for government work”. However, government work tends to get political, and a Google search on this topic has thrown up a lot of negative political comment from opponents of the stimulus plan.

The volume and actual cost may be low, but the likely PR impact, and the time that might be required to explain the issue in the media, highlight an often overlooked cost of poor quality information – reputation and credibility.

Tax disc mailings… on the Double

From our ever vigilant sources over at The Register comes this story of duplicated information resulting in confusion and costs to the UK Taxpayer.

It seems that the UK DVLA has issued duplicate tax discs to conscientious motorists who renewed their motor tax on-line.

A DVLA spokesman told The Register: “As a result of an error, a number of customers, who recently purchased tax discs on line or by phone, were issued with duplicate tax discs.

“Once the problem was identified, swift action was taken to rectify it. All customers affected are being sent a letter of apology and the erroneous discs have been cancelled.”

So. Let’s sum this one up…

  1. Poor quality information in a process resulted in the normal cost of the motor tax process being higher than it should have been (because of duplicate postage and printing costs for the certificates sent in error).
  2. A further printing and postage expense will be incurred to apologise to motorists for the confusion.
  3. Analysis will need to be done to identify all the affected motorists, which will require staff to be diverted from other duties, or increased costs due to overtime or external IT contractor spend.
  4. People might bin the wrong tax disc and find themselves technically in breach of the law.

This is a simple example of the costs to organizations of poor quality information. A classic IQTrainwreck scenario.

Apple App Store IQ Trainwreck

It appears that Apple iPhone App developers are having difficulty getting paid at the moment, according to this story from The Register. (Gizmodo.com carries the story here, and Techcrunch.com has it here.)

According to The Register:

A backlog in Apple’s payment processing system has left some iPhone developers still waiting for February’s payments, leaving some at risk of bankruptcy and considering legal action against the lads in Cupertino.

Desperate developers have been told to stop e-mailing the iTunes finance system and to wait patiently for their money – in some cases tens of thousands of dollars – while Apple sorts things out.

It would appear from comments and coverage elsewhere that this problem has been affecting some developers for longer – since late 2008, according to the TechCrunch article and this article from eequalsmcsquare.com (an iPhone community site).

The article goes on to explain that:

According to postings on the iPhone developer community Apple has been blaming bank errors and processing problems for the delays. Complainants are being told that payments have been made, that bank errors have caused rejections[.]

One commenter on the story on The Register, commenting anonymously, attempts to shed some light on this with an explanation that, from an Information Quality point of view, sounds plausible.

  • Two American banks merged (was it Washington Mutual and Chase?) and the SWIFT code for the customers of one had to change. The bank didn’t tell the customers and Apple had the payments refused. Apple seem to be manually changing the codes in the payment system, but that’s separate from the web interface where devs enter their bank details.
  • A lot of American banks don’t have SWIFT codes at all. Royalties from e.g. EU sales are sent from Apple (Luxembourg) S.A. The chances of this money arriving at Bank Of Smalltown seem slim at best.

What we have here, it seems, is a failure to manage master data correctly, and also a glaring case of potentially incomplete data, which would impair the ability of funds to flow freely from the App Store to the developers.

The anonymous commenter’s explanation would seem to hold water, because Apple are claiming that “bank errors have caused rejections”. Having had some experience with electronic funds transfer processes, we know that a funds transfer will fail if the data used is incorrect, inconsistent or inaccurate – which is exactly what happens if Bank A’s SWIFT codes have to change (or if Bank A and Bank B have new codes issued after a merger).
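For what it’s worth, even a basic structural check on SWIFT/BIC codes would catch malformed entries at the point a developer types them in. ISO 9362 defines the shape: a 4-letter bank code, a 2-letter country code, a 2-character location code and an optional 3-character branch code. A rough Python sketch:

```python
import re

# ISO 9362 BIC structure: 4-letter bank code, 2-letter country code,
# 2-character location code, optional 3-character branch code.
BIC_PATTERN = re.compile(r"^[A-Z]{4}[A-Z]{2}[A-Z0-9]{2}([A-Z0-9]{3})?$")

def is_plausible_bic(code: str) -> bool:
    """Cheap structural check before a payment run. Note this does not
    prove the code is live: a merged bank's retired code passes this
    check and still bounces."""
    return bool(BIC_PATTERN.match(code.strip().upper()))
```

As the parenthetical comment notes, structure alone would not have caught the merger scenario the commenter describes; a well-formed but stale code sails straight through, which is why codes also need revalidating against a current BIC directory before each payment run.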

However, some commenters based in the EU have reported that they have given Apple updated bank details and are still awaiting payment, which suggests there may be yet another potential root cause at play here that may yet come to light.

“Apple still owes me more than $7,500 since September 2008 for US and World regions. I supplied them with a new SWIFT code and a intermediary bank they could use last month, but still nothing. Sent them tons of emails but I never got to know what is really wrong/faulty so I just tried to give them another SWIFT code that DNB (Biggest bank in Norway) uses. All other region payments have been OK.” (quote from a comment featured on this article)

So, for the potential impact on iPhone App developers’ cash flow, the PR impact on one of Apple’s flagship services, and the failure to manage the accuracy, completeness and consistency of key master data for a process, this counts as an IQ Trainwreck.

These are the IQ trainwrecks in your neighbourhood

Stumbled upon this lovely pictorial IQTrainwreck today on Twitter. Thanks to Angela Hall (@sasbi) for taking the time to snap the shot and tweet it and for giving us permission to use it here. As Angela says on her Twitpic tweet:

Data quality issue in the neighborhood? How many street signs (with diff names) are needed? Hmmmm

In the words of Bob Dylan: “How many roads must a man walk down?”

I am who I am, except when I’m not.

Steve Tuck, writing over at SmartDataCollective, shares a tale of an embarrassing IQTrainwreck involving his brother. The root cause of Steve’s tale is ‘false positives’ in matching, but it goes to show how simple assumptions or errors in the management of the quality of information can lead to unforeseen and undesired consequences.

Steve’s brother had checked into a hotel for a trade conference. He went and had a shower, and was quite surprised to come out of the bathroom to find another man standing in his room. It turned out that they both had the same name and were both attending the same event, and the hotel (despite all the other evidence to the contrary, like different companies and different credit card details) had decided to merge the two reservations, so two people wound up being booked into the same room.

The second Mr Tuck had to take his bags and go to a different hotel in the end, causing unnecessary aggravation for him (it is always nice to stay in the hotel where a conference is being held… the more relaxed pace over breakfast can help ease you into the day). Steve’s brother had the embarrassment of being caught in little more than a towel.
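The guard against this sort of false-positive merge is to require that a name match be corroborated by other attributes before two records are combined. A hypothetical Python sketch (the field names are invented, and real matching engines use much fuzzier comparisons):

```python
# Fields that can corroborate a name match; hypothetical schema.
CORROBORATING_FIELDS = ("company", "card_last4", "email")

def should_merge(a: dict, b: dict, threshold: int = 2) -> bool:
    """Only merge two guest records when the name match is backed up by
    at least `threshold` other matching, non-empty attributes."""
    if a["name"].lower() != b["name"].lower():
        return False
    corroborating = sum(
        a.get(field) is not None and a.get(field) == b.get(field)
        for field in CORROBORATING_FIELDS
    )
    return corroborating >= threshold
```

Under a rule like this, the two Mr Tucks share a name but differ on company and card details, so the reservations stay separate.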

For the embarassment factor and customer service impacts, this meets the criteria for an IQ Trainwreck. 

Thanks Steve.

This isn’t the first time we’ve covered this type of false positive IQ Trainwreck though. A scan of our archives brings up this story from 2007.

The Retail Data Nightmare

Over at SmartDataCollective, Daniel Gent has shared an excellent example of a very common scenario in organizations across the globe… the troubling matter of the duplicated, fragmented and inconsistent customer details.

He shares a story with his readers about a recent trip to a retail store which used his telephone number as the search string to find his customer profile. The search returned no fewer than 7 distinct customer records, all of them variations on a theme. Daniel enumerates the records thusly:

1) One past owner of the phone from over 15 years ago;
2) Three versions of my name;
3) Two versions of my wife’s name; and,
4) One record that was a joint name account.

The implications that Daniel identifies for this are direct and immediate costs to the business:

  • Multiple duplicate direct mailings per year offering services
  • Multiple call centre contacts offering yet more services
  • Potential problems with calling on his warranty for the goods he bought, because the store can’t find which of his customer records the warranty details are associated with.

Of course, this is just the tip of the iceberg.

Daniel’s experience is all too common. And it is a Retail Data Nightmare that can quickly turn into an Information Quality trainwreck if left unchecked.
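The first step out of the nightmare is usually simple profiling: normalise the phone numbers and see how many customer records share one. A hypothetical Python sketch (the record layout is invented for the example):

```python
import re
from collections import defaultdict

def normalize_phone(raw: str) -> str:
    """Strip punctuation so '555-0101' and '(555) 0101' key together."""
    return re.sub(r"\D", "", raw)

def duplicates_by_phone(records):
    """Group customer records sharing a phone number and return only the
    groups with more than one record, i.e. the duplicate clusters."""
    groups = defaultdict(list)
    for rec in records:
        groups[normalize_phone(rec["phone"])].append(rec)
    return {phone: recs for phone, recs in groups.items() if len(recs) > 1}
```

A report like this would have surfaced Daniel’s seven-record cluster (and the 15-year-old previous owner of the phone number) long before a store clerk stumbled over it at the till.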

#AmazonFail – a classic Information Quality impact

So, Amazon recently delisted thousands (over 57,000, to be precise) of books from their search and sales rankings. The Wall Street Journal carries the story, as does the Irish Independent (here), The China Post (here), ComputerWorld (here), the BBC (here)… and there are many more. In the Twittersphere and blogosphere, the issue was tagged as #AmazonFail (some blog posts on this can be found here and… oh heck, here’s a link to a Google search with over 400,000 results). There are over 13,000 separate Twitter posts about it, including a few highlighting alternatives to Amazon.

We have previously covered Amazon IQTrainwrecks here and here. Both involved inappropriate pushing of Adult material to customers and searchers on the Amazon site. Perhaps they are just over-compensating now?

It appears that these books were mis-categorised as “Adult” material, which Amazon excludes from searches and sales rankings. Because the books predominantly, but not exclusively, related to homosexual lifestyles, this provoked a storm of comment that Amazon was censoring homosexual material. However, books about health and reproduction were also affected.

Amazon describe the #AmazonFail incident as a “ham fisted” cataloguing error which they attribute to one employee entering data in one field in one system. One commentator ascribes the blame to an algorithm in Amazon’s ranking and cataloguing tools. And a hacker has claimed that it was he who did it, exploiting a vulnerability in Amazon’s website.

Google Health – Dead on Arrival due to duff data quality?

It would seem that poor quality information has caused some decidedly embarrassing and potentially risky outcomes in Google’s new on-line patient health record service. The story has featured (amongst other places):

  • Here (Boston.com, the website of the Boston Globe)
  • Here (InformationWeek.com’s Global CIO Blog)

‘Patient Zero’ for this story was this blog post by “e-Patient Dave” over at e-patient.net, in which he shared his experiences migrating his personal health records over to Google Health. To say that the quality of the information that was transferred was poor is an understatement. Amongst other things:

Yes, ladies and germs, it transmitted everything I’ve ever had. With almost no dates attached.

So, to someone looking at e-Patient Dave’s medical records in Google Health, it would appear that his middle name might be Lucky, since he seemed to have every ailment he’s ever had… all at the same time.

Not only that: for the items where dates did come across in the migration, there were factual errors in the data. For example, the date given for e-Patient Dave’s cancer diagnosis was out by four months. To cap things off, e-Patient Dave tells us that:

The really fun stuff, though, is that some of the conditions transmitted are things I’ve never had: aortic aneurysm and mets to the brain or spine.

The root cause that e-Patient Dave uncovered by talking to some doctors was that the migration process transferred billing code data rather than actual diagnostic data to Google Health. As readers of Larry English’s Improving Data Warehouse and Business Information Quality will know, the quality of that data isn’t always *ahem* good enough. As English tells us:

An insurance company discovered from its data warehouse, newly loaded with claims data, that 80% of the claims from one region were paid for a claim with a medical diagnosis code of  “broken leg”. Was that region a rough neighborhood? No, claims processors were measured on how fast they paid claims, rather than for accurate claim information. Only needing a “valid diagnosis code” to pay a claim, they frequently allowed the system to default to a value of “broken leg”.

(Historical note: while this example features in Larry’s book, it originally featured in an article he wrote for DM-Review (now Information-Management.com) back in 1996.)
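Spotting this kind of default-value abuse is straightforward data profiling: if one code value accounts for a suspiciously large share of all records, flag it. A minimal Python sketch (the threshold is an illustrative assumption, not a standard):

```python
from collections import Counter

def suspicious_defaults(codes, threshold=0.5):
    """Return code values whose share of all records exceeds `threshold`,
    ordered most frequent first -- candidates for a lazy default, like
    the region where 80% of claims were coded 'broken leg'."""
    counts = Counter(codes)
    total = len(codes)
    return [code for code, n in counts.most_common() if n / total > threshold]
```

Profiling the claims data this way would have exposed the “broken leg” default long before it was discovered, by accident, in a data warehouse load.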

“e-Patient Dave” adds another wrinkle to this story…

[i]f a doc needs to bill insurance for something and the list of billing codes doesn’t happen to include exactly what your condition is, they cram it into something else so the stupid system will accept it.) (And, btw, everyone in the business is apparently accustomed to the system being stupid, so it’s no surprise that nobody can tell whether things are making any sense: nobody counts on the data to be meaningful in the first place.)

To cap it all off, a lot of the key data that e-Patient Dave expected to see transferred wasn’t there, and of what was transferred the information was either inaccurate or horridly incomplete:

  • what they transmitted for diagnoses was actually billing codes
  • the one item of medication data they sent was correct, but it was only my current BP med. (Which, btw, Google Health said had an urgent conflict with my two-years-ago potassium condition, which had been sent without a date). It sent no medication history, not even the fact that I’d had four weeks of high dosage Interleukin-2, which just MIGHT be useful to have in my personal health record, eh?
  • the allergies data did NOT include the one thing I must not ever, ever violate: no steroids ever again (e.g. cortisone) (they suppress the immune system), because it’ll interfere with the immune treatment that saved my life and is still active within me. (I am well, but my type of cancer normally recurs.)

So, it would seem that information quality problems documented in the information quality literature for over a decade are at the root of an embarrassing trainwreck – one that could potentially affect how a patient is treated at a new hospital, given that he appears to have all these ailments at once yet is asymptomatic. To cap it all off, failures in the mapping of critical data resulted in an electronic patient record that was dangerously inaccurate and incomplete.

Hugh Laurie as Dr. Gregory House

What would Dr. Gregory House make of e-Patient Dave’s notes?

e-Patient Dave’s blog post makes interesting reading (and, at 2,800+ words, covers a lot of ground). He details a number of other reasons why quality problems exist in electronic patient records, including:

  • Nobody’s in the habit of actually fixing errors (he cites an x-ray record that shows him to be female).
  • Processes for data integrity in healthcare are largely absent by ordinary business standards; there appear to be few, if any, processes in place to prevent wrong data entering the system, or to track down the cause when things do go awry.
  • Data doesn’t seem to get transferred consistently from paper forms to electronic records (specifically, e-Patient Dave’s requirement never to be given steroids).
  • There is a lack of sufficient edit controls and governance over data and patient records, including audit trails.

e-Patient Dave is at pains to make it clear that the problem isn’t with Google Health. The problem is with the data that was migrated across to Google Health from his existing electronic patient record.

Google Health – DOA after an IQ Trainwreck?