Author Archives: Daragh O Brien

About Daragh O Brien

Daragh O Brien is the Managing Director of Castlebridge Associates. This site has been one of his side projects for a decade. It needs some love and attention...

Bank overcharging in Ireland (again)

In a taste of the change of emphasis seeping through the global financial services industry, the Irish Financial Services Regulatory Authority is pursuing 24 cases of overcharging by banks and insurance companies, according to this morning’s Irish Independent.

Of course, stories of financial services overcharging and other information quality disasters in that industry are not new to the IQTrainwrecks reader. Over the years we’ve covered them here, here, here, here, here, here, here, and here (to select just a few).

We’ve also covered the growing “hard touch” trend in Financial Services that is bringing a clear “cost of non-quality” to bear on banking/financial services processes (see this post from August of last year).

Why is this now an IQTrainwreck again?

  • Regulators are adopting a tougher line with banks about overcharging/undercharging (a bit like the regulators did in my former industry – telecommunications).

The new chief of the Irish Financial Services regulator is concerned about the number of overcharging cases and recently said that:

It is clear from recent cases that change is needed in how firms handle charging and pricing issues.

  • Financial services companies, facing severe cutbacks in budgets and manpower, are increasingly exposed to the risk of manual workarounds in processes simply stopping, end-user computing controls not being run, and ultimately inaccuracies and errors creeping into the information they hold about the money they hold for, or have loaned to, customers.

As the regulatory focus shifts from ‘light touch’ to ‘velvet fist’, those financial services companies that invest in appropriate strategies for managing the quality of information within a culture of quality will be best placed to avoid regulatory penalties.

You can’t make an omelette without breaking a few Eggs

A correspondent in the field, Nic Jefferis, has sent in this story about how a “database glitch” has affected customers of the Egg on-line bank who have been trying to pay their bills using their NatWest debit cards.

The BBC describes the problem very succinctly:

“The problem is that the Egg website does not recognise Natwest Visa Debit cards as being legitimate cards.”

The root cause seems to be that key base data used by Egg’s on-line bank, the valid set of Bank Identification Numbers (BINs), does not include NatWest Visa debit cards: they are only now being rolled out to replace the existing Maestro debit card facility currently in use at NatWest.
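
By way of illustration only (the BINs and product names below are invented; this is not Egg’s or Experian’s actual data or system), a card validation step typically checks the first six digits of the card number against a reference table of Bank Identification Numbers. If that table lags behind an issuer’s rollout, genuine cards fail:

```python
# Illustrative sketch of BIN-based card validation. The BIN table is
# licensed reference data; if it lags the issuer's rollout, valid cards fail.

BIN_TABLE = {
    "492181": ("NatWest", "Maestro debit"),   # established product: present
    # "475129": ("NatWest", "Visa debit"),    # newly issued product: missing
}

def validate_card(pan: str) -> bool:
    """Accept a card only if its six-digit BIN appears in the reference table."""
    return pan[:6] in BIN_TABLE

print(validate_card("4921810000000000"))  # True  - old Maestro card accepted
print(validate_card("4751290000000000"))  # False - genuine new Visa debit
                                          # rejected: the base data is stale
```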

And at this point the second common component of IQTrainwrecks raises its head: who is responsible for the data?

Egg get their data from Experian. As soon as the problem arose, Egg contacted Experian to get a solution. NatWest state that they were “aware of this problem and raised it with Egg at the outset” and were waiting for Egg to sort out the problem in their systems.

Somewhere in the process for maintaining BIN master data, something has gone awry, affecting the ability of NatWest customers to pay bills using their new Visa debit cards. As the problem appears to be in the underlying base data, there may be impacts wider afield than just Egg’s payment systems.

As a source quoted in the BBC report says, this should be a straightforward process and an error like this would be highly unusual. But as we know here at IQTrainwrecks, it is often the simple errors that have the biggest knock-on impacts in downstream systems and processes, resulting in loss, damage, injury, or frustration.

Why 2k?

IT media sources are reporting that rumours of the demise of the Y2K bug may have been premature (see also here).

Systems affected included spam-control software and other security software from a leading vendor, network equipment from leading vendors, credit card payment systems in Germany and Australia, and (it seems) Windows Mobile. The bug was tweeted heavily on Twitter.

The effect of this bug seems to have been to catapult messages forward in time by a few years, resulting in credit card terminals rejecting cards that failed date validation checks (to the terminal’s skewed clock, the card expiry date appeared to be in the past), valid emails being flagged as spam (because the message was date-stamped in the future), and SMS messages appearing to come from the future.
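
A minimal sketch (the dates are invented for illustration) shows how a clock that has leapt forward produces exactly the “card expired” symptom:

```python
from datetime import date

def card_is_valid(expiry_year: int, expiry_month: int, today: date) -> bool:
    """A card is usable through the last day of its expiry month."""
    return (expiry_year, expiry_month) >= (today.year, today.month)

# A card genuinely valid until mid-2012...
print(card_is_valid(2012, 6, today=date(2010, 1, 4)))  # True  - correct clock
# ...looks expired to a terminal whose buggy clock has jumped to 2016:
print(card_is_valid(2012, 6, today=date(2016, 1, 4)))  # False - card rejected
```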

The potential knock-on impacts of this error don’t bear thinking about. In the immediate term we have:

  • Embarrassment for credit card wielding shoppers who found themselves unable to pay for purchases or meals.
  • Missed emails due to their being incorrectly flagged as spam (although this has been fixed).
  • SMS confusion.

But, in this automated world where processes are triggered by business rules based on facts and information there are potentially other impacts:

  • Discovery of emails or SMS messages in criminal or civil litigation (will the lawyers think of looking in the future? Can the evidence be verified if it appears to be from the future?)
  • Electronic transfer of data or funds based on rules
  • Calculation of interest payments or penalties based on date rules

The root cause of this problem appears to have been assumptions about dates: in 1999, one must assume, 2010 seemed sufficiently far in the future that a better fix for the rules being applied would surely have been developed by then.
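
One mechanism widely reported for some of the affected systems (offered here as an illustration, not as the confirmed cause in every case) was two-digit years stored in binary-coded decimal being read back as plain binary: the BCD byte for 10 is 0x10, which is 16 in binary, so 2010 silently becomes 2016.

```python
def to_bcd(two_digit_year: int) -> int:
    """Encode a two-digit year in BCD: one decimal digit per four-bit nibble."""
    return (two_digit_year // 10) * 16 + (two_digit_year % 10)

stored = to_bcd(10)  # the year 2010, stored as the byte 0x10

# Correct decode: unpack the two nibbles back into decimal digits.
correct = (stored >> 4) * 10 + (stored & 0x0F)  # 10 -> 2010

# Buggy decode: treat the BCD byte as an ordinary binary number.
buggy = stored                                  # 16 -> 2016

print(f"correct: 20{correct:02d}, buggy: 20{buggy:02d}")  # 2010 vs 2016
```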

IAIDQ Information Quality Blog Carnival (updated)

A little later than we had planned, IQTrainwrecks.com is proud to publish the December edition of the IAIDQ’s Blog Carnival for Information Quality, a retrospective on blog posts that appeared in November.

[Edit: We’d actually missed one submission when we posted this. A horrendous oversight given the importance of the discussion. Apologies to Dylan Jones and the team at DataQualityPro.com for the boo boo]

Dylan Jones of DataQualityPro.com opens proceedings with an excellent and thought-provoking debate about the nature of information quality and the role of Data Cleansing in a data-driven business. The comments on this post are as interesting as the questions posed.

Then we had a short and sweet post from Dalton Cervo, where he extolled the need for your information quality and data governance initiatives to be more than just a grab-bag of buzzwords: they need to be planned and executed with the understanding that each is a part in a machine that makes your business great and is capable of reacting and adapting to change. My experience echoes Dalton’s very wise example: if the problem is in one process, the fix might need to be in that process and in a number of other supporting processes.

I suppose, just like any other ‘manufacturing’ process, if the components of your machinery aren’t working in unison as they should, then the product will be defective and your machinery will eventually break.

(For those of you who don’t know Dalton, he’s the Customer Data Quality Lead at Sun Microsystems, a Customer Data Steward, and a member of the Customer Data Governance team responsible for defining policies and procedures governing the oversight of master customer data.)

Next up is Charles Blyth. Charles is a veteran of the BI and MDM world from the business perspective. His blog post addressed the need to get Data Governance (and by extension, responsibility for information quality) back to the front-line:

Front line Data Governance is about driving data ownership back into the business, getting every resource at every data touch-point to ‘own’ the data. Get the people involved!

At this time of year, when every magazine, TV station and pundit is producing lists of things that happened in 2009 or will happen in 2010, Henrik Liliendahl Sørensen shares with us his 55 reasons to improve Data Quality. Each of these on its own is the seed for a business case (how many duplicate Christmas cards did you get from your suppliers this year?) and should form the basis of Information Quality New Year’s resolutions in companies around the world.

Jim Harris has also been busy in November with a range of posts on his own blog and on the Dataflux Community of Experts blog. At this time of year, when a festive gent living up around the frozen North Pole (no… not Henrik, I’m talking about Santa Claus) is making lists and checking them twice, it is only appropriate that we’d pick Jim’s great post on Customer Incognita, where he talks about the challenges of defining the simplest fact most businesses need to know. Now… how would he handle “Naughty” and “Nice” as attribute definitions?

Finally, Daragh O Brien shared some thoughts on how you need to keep your customer in mind when making changes in processes or technology so that you don’t wind up causing problems downstream and creating IQTrainwrecks in the process.

The look back on December will appear in January. Thanks to everyone who has written such thought-provoking and stimulating posts on Information Quality in 2009. It’s hard to believe that this community of writers didn’t exist this time last year.

No smoke without ire – Life Insurance Overcharging in Ireland

RTE News in Ireland ran a story last night on overcharging by life assurance companies in Ireland arising from the misclassification of customers as smokers (the link to the item is here, but you may not be able to access it if you are not in Ireland).

On foot of two complaints, the Irish Financial Services Ombudsman investigated two companies and identified up to 500 affected customers. However, more customers may be affected in other companies.

The two companies affected blamed “computer and administrative errors” for the misclassification and the resulting overcharging. In other words, an Information Quality problem.

The financial impact for the two customers who complained was between €1,100 and €2,500 on policies of different lengths. Taking a crude average value, this suggests that for the 500 cases the Ombudsman suspects in the two companies he examined, the total cost of refunds will be in the order of €900,000.
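
For transparency, here is the back-of-the-envelope sum behind that figure (the per-case average is, of course, a crude assumption):

```python
low, high = 1_100, 2_500          # € impact in the two reported complaints
avg_per_case = (low + high) / 2   # crude average: €1,800 per affected policy
suspected_cases = 500
print(avg_per_case * suspected_cases)  # 900000.0 -> of the order of €900,000
```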

The cost of the investigation of possible errors and the correction of records would, of course, be on top of this amount.

The Financial Services Ombudsman has asked the Irish Financial Services Regulator to conduct an industry-wide audit of all life assurance companies to identify further instances of this kind of overcharging based on misclassification of customers. As a result, the total amount of refunds will inevitably rise, as will the cost to the industry of inaccurate information.

The news report makes no mention of the potential Data Protection issues arising here under Irish Data Protection law, which does require information to be kept accurate and up to date. But the Irish Financial Ombudsman used to be the Data Protection Commissioner, so I am sure he has flagged that to the affected institutions himself.

Fruit of a poisoned tree – Information Quality meets Data Protection

Sears, the US retailer, has been ordered to delete all customer data it obtained through the use of on-line tracking software it installed on customers’ computers.

While the programme was opt-in and customers were paid US$10 to take part, the extent of the information captured went far beyond what customers might have considered “reasonable” and included data capture that a reasonable person might class as “questionable”. The Register tells us:

The FTC said that while customers had been warned that, once downloaded, software would track their browsing, it had in fact tracked browsing on third party websites, secure browsing including banking and transactions and even some non-internet computer activity.

“The FTC charged… that the software also monitored consumers’ online secure sessions – including sessions on third parties’ Web sites – and collected consumers’ personal information transmitted in those sessions, such as the contents of shopping carts, online bank statements, drug prescription records, video rental records, library borrowing histories, and the sender, recipient, subject, and size for web-based e-mails,” said an FTC statement.

Under EU law, there are protections for individuals as regards the nature of information that can be captured and how it should be captured. These rules are encapsulated in the Data Protection regulations that apply in all EU countries.

A key part of those principles and rules is that the “data subject” (the person to whom the data relates) needs to be given a clear and upfront statement of what information is being captured about them, why, what uses it will be put to, and who it may be shared with.

The FTC specifically criticised Sears for how they presented the information on what was being captured:

“Only in a lengthy user license agreement, available to consumers at the end of a multi-step registration process, did Sears disclose the full extent of the information the software tracked,” said an FTC statement. “The [FTC] complaint charged that Sears’s failure to adequately disclose the scope of the tracking software’s data collection was deceptive and violates the FTC Act.”

So, failing to take adequate care in setting and meeting your customers’ expectations about how you will use their data can seriously jeopardise your ability to capitalise on your information assets. Furthermore, it can result in reputational damage and other loss. Managing that expectation improves the quality of the data you hold (e.g. customers won’t input spurious data, and you will be legally allowed to use it for other purposes) as well as meeting your obligations for trust and transparency in how you manage your customers’ privacy through effective data protection.

In this case, the data gathered was fruit of a poisoned tree and Sears could not retain it or use it, negating the value of any investment they had made in the tracking programme.

Interestingly, the FTC initiated this case itself, suggesting that US-based regulators may be taking data protection more seriously. Doubly interesting is the fact that the principles they are setting out are similar to EU regulations.

An air travel trainwreck near-miss

From today’s Irish Independent comes a story which clearly shows the impact that poor quality information can have on a process or an outcome. The tale serves to highlight the fact that information entered as part of a process can feed into other processes and result in a less than desirable outcome.

On 20th March 2009, poor quality information nearly resulted in the worst air traffic disaster in Australian history, as an Airbus A340-500 narrowly avoided crashing into a residential area of Melbourne on take-off. The aircraft sustained damage to its tail and also caused damage to various lights and other systems on the runway at Melbourne airport.

The provisional report of the Australian air crash investigation found that the root cause of the incident was the entry of an incorrect aircraft weight of 262 tonnes, whereas the plane actually weighed 362 tonnes. This affected the calculations of the airspeed required for take-off and of the thrust required to reach that speed.

The end result was that the plane failed to take off correctly and gain height as required, resulting in the tail of the plane striking the runway and then ploughing through a lighting array and airport instruments at the end of the runway.
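
A crude way to see why the 100-tonne understatement matters so much: lift must equal weight at lift-off, and since lift grows with the square of airspeed, the required speed scales with the square root of the mass. The sketch below uses only that simplification (it ignores thrust, flap settings and runway length, and is emphatically not the investigators’ calculation):

```python
from math import sqrt

entered_mass = 262.0  # tonnes, as keyed into the performance calculation
actual_mass = 362.0   # tonnes, the aircraft's real weight

# Lift L = 0.5 * rho * V**2 * S * CL must equal weight, so V scales as sqrt(mass).
speed_ratio = sqrt(entered_mass / actual_mass)
shortfall = (1 - speed_ratio) * 100
print(f"computed take-off speed roughly {shortfall:.0f}% too low")
# -> roughly 15% too low: at the speed the calculation called "flying speed",
#    the heavier aircraft cannot yet generate enough lift to leave the runway
```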

It is interesting, from an Information Quality perspective, to read the areas that the Accident Investigation team are looking at for further investigation (I’ve put the ones of most interest in Bold text, and the full report is available here):

  • human performance and organisational risk controls, including:
    • data entry
    • a review of similar accidents and incidents
    • organisational risk controls
    • systems and processes relating to performance calculations
  • computer-based flight performance planning, including:
    • the effectiveness of the human interface of computer based planning tools.
  • reduced power takeoffs, including:
    • the risks associated with reduced power takeoffs and how they are managed
    • crew ability to reconcile aircraft performance with required takeoff performance, and the associated decision making of the flight crew
    • preventative methods, especially technological advancements.

The report by the Australian authorities also contains reference to some of the mitigations that the aircraft operator was considering to help prevent a recurrence of this risk:

  • human factors – including a review of current pre-departure, runway performance calculation and cross-check procedures, to determine if additional enhancement is feasible and desirable, with particular regard to error tolerance and human factors issues.
  • training – including review of the initial and recurrent training in relation to mixed fleet flying and human factors.
  • fleet technical and procedures – including introduction of a performance calculation and verification system which will protect against single data source entry error by allowing at least two independent calculations.
  • hardware and software technology – including liaising with technology providers regarding systems for detecting abnormal take-off performance.

For those of us familiar with Information Quality practices, this is an impressive haul of information quality management improvement actions focussed on ensuring that this type of near-miss never happens again. It is doubly interesting that causes of poor quality information feature in the items subject to further investigation (e.g. “human factors”, risk controls, etc.) and that common approaches to the resolution or prevention of information quality problems form 75% of the action plan put forward by the operator (process enhancement, improved checking of accuracy/validity, assuring consistency with other facts or measures, etc.).
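
That “at least two independent calculations” control is a classic information quality pattern: never trust a single keying of a critical fact. A minimal sketch of the idea (the interface is hypothetical, not the operator’s actual system):

```python
def verified_takeoff_mass(entry_a: float, entry_b: float,
                          tolerance_tonnes: float = 0.5) -> float:
    """Accept a take-off mass only when two independently keyed entries agree.

    entry_a and entry_b should come from different people or different
    source documents, so a single transcription slip cannot pass unnoticed.
    """
    if abs(entry_a - entry_b) > tolerance_tonnes:
        raise ValueError(
            f"independent entries disagree ({entry_a}t vs {entry_b}t): "
            "re-check the source data before running performance calculations"
        )
    return (entry_a + entry_b) / 2

print(verified_takeoff_mass(362.0, 362.0))  # 362.0 - both entries agree
try:
    verified_takeoff_mass(262.0, 362.0)     # the 100-tonne keying error...
except ValueError as err:
    print(err)                              # ...is caught before take-off
```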

Lost in Translation

Not a trainwreck in the strict sense of the word, nor on the scale of other cases we’ve logged recently, but this story from the Irish Examiner does illustrate the importance of language and terminology in communicating important information. Breakdowns in the transfer of important information can often cause distress and a failure to meet expectations.

It seems that on an Aer Lingus flight to the US, the crew were warning passengers of turbulence and advising them to return to their seats. Unfortunately, the wrong message was relayed in French, leaving Francophones aboard the flight fearing for their lives: the message they heard told them to prepare for an emergency landing, which over the Atlantic could only mean ditching in the ocean.

Thankfully this was not the case, but there were a few worried minutes until cabin crew realised the error, apologised to passengers and calmed everyone down.

Similar miscommunications happen all the time in business and IT, where there are subtle differences in the meaning of words used in niche disciplines. For example, to a Marketing person an SME is a Small to Medium Enterprise, whereas IT knows an SME as a “Subject Matter Expert”.

Did you check on the cheques we sent to County Jail?

Courtesy of Keith Underdown comes yet another classic IQ Trainwreck, which he came across on CBS News.

It seems that up to 3,900 prisoners received cheques (or ‘checks’ to our North American readers) of US$250 each, despite the very low probability that they would be able to actually use them to stimulate the economy. Of the 3,900, some 2,200 were, it seems, entitled to receive them, as they had not been incarcerated in any of the three months prior to the enactment of the stimulus bill.

However, that still leaves 1,700 prisoners who received cheques they should not have. The root cause?

According to CBS News:

…government records didn’t accurately show they were in prison

A classic information quality problem… the accuracy of master data used in a process, resulting in an unexpected or undesired outcome.

While most prisons have intercepted and returned the cheques, there will now need to be a process to identify, for each prisoner, whether the Recovery payment was actually due. Again, a necessary manual check (no pun intended) at this stage, but one which will add to the cost and time involved in processing the Recovery cheques.

Of course, we’ve already written here about the problem with Stimulus cheques being sent to deceased people.

These cases highlight the fact that an Information Quality problem doesn’t have to hit your bottom line massively or affect significant numbers of people in order to damage your reputation.

US Government Health (S)Care

Courtesy of Jim Harris at the excellent OCDQBlog.com comes this classic example of a real-life Information Quality Trainwreck concerning US healthcare. Keith Underdown also sent us the link to the story on USAToday’s site.

It seems that 1800 US military veterans have recently been sent letters informing them that they have the degenerative neurological disease ALS (a condition similar to that which physicist Stephen Hawking has).

At least some of the letters, it turns out, were sent in error.

[From the LA Times]

As a result of the panic the letters caused, the agency plans to create a more rigorous screening process for its notification letters and is offering to reimburse veterans for medical expenses incurred as a result of the letters.

“That’s the least they can do,” said former Air Force reservist Gale Reid in Montgomery, Ala. She racked up more than $3,000 in bills for medical tests last week to get a second opinion. Her civilian doctor concluded she did not have ALS, also known as Lou Gehrig’s disease.

So, poor quality information entered a process, resulting in incorrect decisions, distressing communications, and additional costs to individuals and government agencies. Yes, this is ticking all the boxes to be an IQ Trainwreck.

The LA Times reports that the Department of Veterans Affairs estimates that 600 letters were sent to people who did not have ALS. That is a 33% error rate. The cause of the error? According to the USA Today story:

Jim Bunker, president of the National Gulf War Resource Center, said VA officials told him the letters dated Aug. 12 were the result of a computer coding error that mistakenly labeled the veterans with amyotrophic lateral sclerosis, or ALS.

Oh. A coding error on medical data. We have never seen that before on IQTrainwrecks.com in relation to private health insurer/HMO data. Gosh no.

Given the impact that a diagnosis of an illness which kills those affected within an average of five years can have, this simple coding error has been bumped up to a classic IQTrainwreck.

There are actually two information quality issues at play here, however, which illustrate one of the common problems in convincing people that there is an information quality problem in the first place. While the VA now estimates (and I put that in bold for a reason) that the error rate was 600 out of 1,800, the LA Times reporting tells us that:

… the VA has increased its estimate on the number of veterans who received the letters in error. Earlier this week, it refuted a Gulf War veterans group’s estimate of 1,200, saying the agency had been contacted by fewer than 10 veterans who had been wrongly notified.

So, the range of estimates for the error rate runs from 10 in 1,800 (under 0.6%) to 600 in 1,800 (33%) to 1,200 in 1,800 (66%). The interesting thing for me as an information quality practitioner is that the VA’s initial estimate was based on the number of people who had contacted the agency.

This is an important lesson: the number of reported errors (anecdotes) may be far lower than the number of actual errors, and the only real way to know is to examine the quality of the data itself and look for evidence of errors and inconsistency, so you can Act on Fact.
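
One way to Act on Fact rather than on anecdote is to audit a random sample of the records and project the measured error rate onto the whole population, rather than counting who rings in. The figures below are simulated purely to show how far apart the two estimates can be:

```python
import random

random.seed(42)
POPULATION = 1800   # letters sent
TRUE_ERRORS = 600   # unknown to the agency; used here only to build the simulation

# Simulated letter records: True means the letter was sent in error.
records = [i < TRUE_ERRORS for i in range(POPULATION)]
random.shuffle(records)

# Anecdote-based estimate: only the people who complain get counted.
complaints = 10
print(f"complaint-based estimate: {complaints / POPULATION:.1%}")  # ~0.6%

# Fact-based estimate: audit a random sample and project the rate.
sample = random.sample(records, 200)
rate = sum(sample) / len(sample)
print(f"sample-audit estimate:    {rate:.1%}  (~{rate * POPULATION:.0f} "
      f"of {POPULATION} letters)")  # lands close to the true 33%
```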

The positive news… the VA is changing its procedures. The bad news about that… it looks like they are investing money in inspecting defects out of the process rather than making sure the correct fact is correctly coded in patient records.