Thursday, 7 January 2016

Spotting other people's mistakes - and saving lives in the NHS

How good are you at spotting mistakes? You may not notice your own but you're probably good at spotting other people's.

In jobs where mistakes can cause harm, it's vital to find ways of avoiding them or lessening their impact. This is what I spent last summer researching for my MSc in human computer interaction (HCI).

Accidental overdoses

The workplaces I looked at were NHS hospitals and the mistakes were accidental drug overdoses. Specifically, accidental overdoses where staff were using machines (infusion devices) to give patients a steady dose of drugs, blood, hormones or food over a period of time - anything from 20 minutes to 12 hours.

I also looked at 'underdoses' which can be equally harmful: imagine you need a steady dose of insulin, liquid food or painkillers, and you don't receive it.

Sharp end, blunt end

I say 'mistakes', but in HCI you learn the user is never to blame. When mistakes happen, it's down to bad equipment design or a perfect storm of events.

James Reason, an expert in human error, distinguishes between the 'sharp end' - frontline staff who come into contact with patients and are often blamed for errors - and the 'blunt end' - the senior managers and policies that create the conditions for mistakes to happen. He sets out both terms in his 1995 article Understanding Adverse Events.

7 years of incidents

I was lucky enough to get my hands on real NHS data - 7 years' worth of incidents from NHS hospitals and care homes that my supervisor, UCL's Ann Blandford, was guarding closely. Ann gave me a password-protected data stick with an Excel spreadsheet of 8,000 incident reports on infusion devices. The reports were written by medical staff, from healthcare assistants to nurses to anaesthetists.

I couldn't read all 8,000 so Ann showed me how to sample the reports systematically to avoid bias and ensure my study was as objective as could be.
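Systematic sampling is simple to do in practice: choose a sampling interval, pick a random starting point, and take every k-th report from there, so the sample spreads evenly across the whole dataset. A minimal sketch in Python (the report names here are placeholders, not the real data):

```python
import random

def systematic_sample(reports, sample_size):
    """Take every k-th report from a random start, so the sample
    spreads evenly across the whole dataset instead of clustering."""
    k = len(reports) // sample_size   # sampling interval
    start = random.randrange(k)       # random offset guards against bias
    return reports[start::k][:sample_size]

# e.g. pick around 400 of 8,000 incident reports
reports = [f"report-{i}" for i in range(8000)]
sample = systematic_sample(reports, 400)
assert len(sample) == 400
```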

We are detectives

In June 2015 I started reading reports and dived into a world of busy hospital wards, formal procedures, bleeping medical equipment and stressed staff. It felt like playing detective: What happened here? Was the patient OK? Who did what? Is this person trying to point the finger of blame at a colleague?

I read, re-read and made notes, waiting for ideas to leap out at me. This method's called 'grounded theory', described by professor Kathy Charmaz in her helpful book Constructing Grounded Theory.

Many of the incident reports were disappointingly brief, missing out vital details. I had to be careful not to jump to conclusions and see things that weren't actually there. My supervisor kept me on track, constantly asking for evidence to back up my hunches.

Clock on, spot error

Pretty soon, a pattern emerged. Nurses were coming on to their shift and noticing that machines had been programmed with the wrong dose of pain relief or antibiotics. In other words, nurses were catching the previous shift's 'programming errors'. James Reason has a name for this, unsurprisingly: "fresh eyes".

Three Mile Island

In his 1990 book Human Error, Reason cites the 1979 accident at the Three Mile Island nuclear power plant in the US, where the fresh eyes of a supervisor on an incoming shift diagnosed the problem after colleagues on the previous shift were unable to diagnose it correctly.

Tip of the iceberg

To find more evidence for the incoming shift spotting the previous shift's errors, I sampled around 400 reports. One challenge was that many reports were irrelevant: issues such as a lack of available equipment, dirty machines, broken machines.

But I found enough reports to support my theory: nurses were spotting errors while carrying out routine checks as part of their normal duties, or while checking on patients off their own bat. And given that hospital incidents are vastly underreported (Billings, 1998, 'Incident reporting systems in medicine'), this could be the tip of the iceberg.

Design recommendations

So if nurses are good at spotting each other's errors when they walk round the wards, why not encourage them to do it more often? Why not increase staffing levels so that nurses can make ward rounds every hour or so? In the context of today's NHS trust budget deficits, this suggestion would probably not go down well.

Another suggestion would be to make it easier to spot errors by making information more 'in your face'. Nurses diagnose errors by comparing a patient's prescription (on a chart or in notes) with the electronic display on an infusion device. Could you make these 2 things more obvious so any mismatch stands out?
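For illustration only, here's a crude sketch of the comparison a nurse makes in their head - the tolerance and field names are my own assumptions, not anything from the study:

```python
def check_infusion(prescribed_rate_ml_h, device_rate_ml_h, tolerance=0.05):
    """Flag a mismatch between the prescription chart and the rate
    programmed into the infusion device. Returns True if the device
    setting drifts more than the tolerance from the prescription."""
    if prescribed_rate_ml_h == 0:
        return device_rate_ml_h != 0
    drift = abs(device_rate_ml_h - prescribed_rate_ml_h) / prescribed_rate_ml_h
    return drift > tolerance  # True = mismatch worth escalating

# a tenfold programming error should leap out
assert check_infusion(prescribed_rate_ml_h=12.5, device_rate_ml_h=125)
# a matching rate should not raise an alarm
assert not check_infusion(prescribed_rate_ml_h=12.5, device_rate_ml_h=12.5)
```

The point of a design like this is that the machine does the comparison a tired nurse currently does by eye.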

Spotting programming errors - noticing an ongoing drug overdose and fixing it - is obviously not as good as preventing the error happening in the first place. But to prevent errors happening, you need to understand why they happen - and that's something you can't tell from these brief incident reports; you'd need to be on the wards, shadowing staff and interviewing them.

Robot drug dispensers

If the NHS had the time, money and good project management, it could automate the dispensing of drugs and get all medical systems talking to each other - patient's notes, prescription, hospital pharmacy, barcode on medicine, infusion device - so that, in theory, there would be less room for error. It's not foolproof - automation might have bad knock-on effects of its own - but it might be worth a try.

Distilled dissertation

So this is my 16,000-word dissertation - a fascinating 3-month investigation last summer - distilled down. I've written this in as plain English as I could muster after a year of using HCI jargon.

I've missed out many stages of the research - it wasn't as quick and easy as I've made out. So here's the full dissertation: The detection of errors in infusion rates on infusion devices: an analysis of incident reports from the National Reporting and Learning System (NRLS).

You spent a whole 3 months proving that?

And you may be thinking: "This 'fresh eyes' theory and nurses spotting each other's errors - it's all common sense isn't it? I could have told you that and saved you 3 months' work." Well, you're kind of right! But HCI research is often about proving one tiny thing everybody takes for granted but no one has actually proved. So that's why I did it.

Saturday, 5 December 2015

Speedy boarding on the Tube: stand clear of the doors

Ever jumped through the Tube doors as they're closing? Yeah, me too. We're all at it. "In London we tend to think of the doors closing as a challenge not a threat," said Nick Tyler, professor of civil engineering at UCL.

Tube doors cropped up in The Art of Boarding and Alighting, a talk at the Institution of Mechanical Engineers. Nick's helping Transport for London (TfL) work out how to get people on and off trains quicker.

Faster? No!

We commuters already move as quickly as humanly possible. "It's very hard to get people to go any faster. People move at a certain speed and it's very difficult to change," said Nick. So TfL is turning to engineers and human factors experts to supercharge us.

Wider doors

You could make Tube doors wider. You could redesign Tube carriages, getting rid of the single doors at the ends of carriages and making all doors double. But redesigns take time.

Pillars and zips

Another trick for getting crowds of people to move forward efficiently is, believe it or not, to put a pillar in front of them. "A pillar acts like a zip: it forces the crowd to split to go round it. In Oslo, trains have a central pillar," explained Nick.

But again, that would be a redesign. And pillars take up space.

What's the frequency, Kenneth?

But in the meantime, there's an obvious solution: increase the frequency of Tube trains and that increases the number of people you can ferry around. "You could have the next train coming as the current one leaves," said Nick.

To increase frequency, you need to reduce 'dwell time' in stations.

Automatic (for the people) doors

And to reduce dwell time, said Nick, TfL probably needs to change the way the doors operate. At the moment Tube drivers decide when to close them. And we, the passengers, are constantly rushing on at the last minute and holding up the train.

27 seconds

It may be more efficient to have Tube doors that close automatically after a set amount of time. Automatic doors help you regulate frequency and run a consistent service.

For Thameslink, Nick's team built a full-size mock-up of a train and paid people to act as rush-hour passengers. The experiments found the most efficient amount of time to get enough people off and on the Thameslink train was 27 seconds.

Any longer and Thameslink wouldn't be able to run trains frequently enough for the number of passengers it expects to carry. Nick emphasised that 27 seconds was chosen for Thameslink - the most efficient dwell time depends on the design of the train.
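The trade-off here is simple arithmetic: trains per hour are capped by the time each train occupies the platform - the dwell time plus the time for the next train to pull in. A sketch with purely illustrative numbers (not TfL's or Thameslink's actual figures):

```python
def trains_per_hour(dwell_s, reoccupation_s):
    """How many trains an hour one platform can handle, given the
    dwell time and the time for the next train to pull in (seconds)."""
    return 3600 // (dwell_s + reoccupation_s)

# made-up reoccupation time of 93 seconds, for illustration only
print(trains_per_hour(dwell_s=27, reoccupation_s=93))   # 30 trains an hour
print(trains_per_hour(dwell_s=60, reoccupation_s=93))   # only 23
```

Shaving seconds off the dwell time is worth whole extra trains an hour - which is why the doors matter so much.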

Stand clear of the doors

But the big question is how you stop Londoners like us from charging onto the train just as the automatic doors are closing? One way of changing our behaviour may be to give us more information about the next train: "The next train's one minute away and the back 4 carriages are far less crowded than this current train".

And that information needs to reach us at the place we need it - either audio from the tannoy or by projecting a message onto the space we're gazing at when trying to pile on a train.

And to help us get off the train more efficiently, we need information about the approaching station: "Doors will open on the left. The platform's crowded so ...."

Mind the gap

Boarding and alighting times are not the only thing on Nick's plate. Other issues include helping us mind the gap.

Ironically, a small gap can be worse than a big one, as we're less likely to notice it. A future project would be to design a shelf that descends from the train doors to match the level of the platform. An extra engineering challenge is London's curved Victorian Tube platforms, which make for a quirky platform-train interface.

Tiny tip: how to simulate rush hour

When simulating rush hour, you can motivate participants to charge onto a prototype train by paying them extra if they manage to get a seat, admitted an industry insider. Rather like musical chairs.

Friday, 13 November 2015

Gender-inclusive software: learnable by women and men

Is your software 'gender inclusive' - is your system equally learnable and usable by men and women? This was the subject of last night's talk at City University's Centre for Human Computer Interaction Design, for World Usability Day.

The question arises as, even in 2015, many of those who develop software systems are male, while users are female and male - and may have different learning styles.

Oregon State University's Margaret Burnett has researched the way men and women learn and use software packages such as spreadsheets or programming tools.

Margaret's noticed some differences and distilled them into 5 characteristics:

  • self-efficacy - how confident people are in their tech ability (females had lower self-efficacy)
  • attitude towards risk - some people don't bother to learn new features if they think they won't use them (more females)
  • willingness to explore - whether you learn by tinkering (more males)
  • motivations for use - liking new technology for its own sake (more males)
  • information processing - whether you gather all information upfront before solving a problem (females), or dip in and out while solving (males)

Design implications

If you know some people like all the information upfront, you could provide expandable content, suggests Margaret. And if you know half your audience doesn't like tinkering, then you could offer step-by-step tutorials.

Toolkit for UXers and developers

The 5 characteristics have fed into a toolkit, GenderMag, currently in beta ('Mag' stands for magnifier). The kit consists of personas (Abby, Pat, Patricia, Tim) to use in a cognitive walkthrough, asking questions like "Will Abby notice the correct action for the goal she's trying to achieve? And if not, why not?" - all the time referring to Abby's characteristics.
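As a rough sketch of how a persona can drive those walkthrough questions - the data structure and wording below are my own, with the facet values paraphrased from the talk, not taken from the toolkit itself:

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """One walkthrough persona, described by the 5 characteristics."""
    name: str
    self_efficacy: str     # confidence in own tech ability
    attitude_to_risk: str
    learning_style: str    # tinkering vs step-by-step
    motivation: str
    info_processing: str   # everything upfront vs dipping in and out

abby = Persona(
    name="Abby",
    self_efficacy="low",
    attitude_to_risk="risk-averse",
    learning_style="prefers step-by-step tutorials to tinkering",
    motivation="uses technology to get tasks done, not for its own sake",
    info_processing="gathers all information upfront before acting",
)

def walkthrough_question(persona, action):
    """Frame a cognitive-walkthrough question around one persona."""
    return (f"Will {persona.name} notice that '{action}' is the correct "
            f"action for her goal? If not, why not - given her "
            f"{persona.self_efficacy} self-efficacy and that she is "
            f"{persona.attitude_to_risk}?")

print(walkthrough_question(abby, "click the Advanced tab"))
```

The value is that the persona's characteristics sit in front of the team at every step, rather than the team defaulting to their own learning style.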

The toolkit's been used successfully to highlight usability issues on software development projects in healthcare and government.

See the full Gender Mag talk on YouTube.