NHS performance management putting standards of care at risk

Efforts to improve standards of patient care in the NHS are being undermined by performance measures that encourage ‘gaming’ and sap professional motivation, according to our report Uses and Abuses of Performance Data in Healthcare. It makes a series of recommendations to tackle practices that distort the reliability of the information used to manage the standards of care delivered to patients.

A wide range of hospital data is now routinely made available to the public via resources such as the NHS Choices website, Care Quality Commission reports and Dr Foster’s own Hospital Guide, published since 2000. Much of this data ultimately derives from hospitals’ own administrative and clinical records.

The report outlines how better healthcare data from English hospitals has, overall, led to greater transparency. This has helped identify serious failings in hospitals, such as the scandal of poor care at Mid Staffordshire NHS Trust which led to hundreds of unnecessary deaths.

Data collection has also helped to improve patient care. The report cites the example of Leicester, which in 2012 was shown to have 53% more excess deaths from cardiovascular disease than the English average. Since then, access to drugs that prevent blood clots has been widened, a change expected to result in a 65% reduction in strokes.

But the report also warns that an obsession with targets in the NHS has encouraged practices designed to inflate achievements and mask problems. These range from ‘tunnel vision’ – excessive focus on only those aspects of clinical performance that are measured – through to ‘gaming’ – the wilful manipulation of data to make performance appear better than it really is.

It highlights a number of examples of malpractice including:

  • Two independent audits of NHS waiting lists, in 2003 and 2014, show clear evidence of ‘gaming’ of waiting time data, including deliberate misrecording in some trusts.
  • The huge variations between English NHS hospitals in the prevalence of patients coded as receiving palliative (end-of-life) care and a continued drift upwards over time. Since palliative care patients are expected to die, this can make actual recorded deaths in hospital appear lower than expected.
  • Pressure to meet the 4-hour waiting time target for Accident and Emergency has seen patients being held in ambulances outside hospitals to delay the ‘clock starting’; rooms and even corridors being designated as acute observation units so that patients can be categorised as having left A&E; and patients being admitted at the 4-hour point to avoid breaches of the target.

The report makes five key recommendations for tackling these problems:

  • Make data quality as important as hitting targets – Initiate a long-term audit programme to tackle misreporting and incomplete or inaccurate data recording.
  • Measure the context, not just the indicator – Keep performance measures under constant review, perhaps by multi-disciplinary specialist groups including Royal Colleges and patient organisations.
  • Avoid thresholds and consider the potential to incentivise ‘gaming’ in the design of metrics – Assess performance measures according to the likelihood that they will encourage abuse, and avoid thresholds wherever possible.
  • Be more open – Make the data underlying performance management widely available, and promote ongoing assessment of the degree to which metrics are being gamed.
  • Apply measures fairly – Recognise legitimate mitigating factors, such as resources and pressures outside the organisation’s control.

Joanne Shaw, the report’s lead author, said:

“In a democratic society which aspires to improve the health and care of its population, measurement and transparency in healthcare cannot and should not go away. Quite the reverse – we need more of it, not less. But we also recognise the complications that come from human nature, and the inevitable temptation to want to make things seem better than they really are.”

Dr Foster’s Roger Taylor, the report’s co-author, added:

“Good data can spotlight excellent practice and illuminate dark corners where things are going wrong.

“Conversely, measurement, target setting and publication of results can become oppressive, activity can be distorted to produce more acceptable numbers, and arguments about data validity can distract attention from real issues, diverting scarce resources from much-needed improvement.

“The challenge is to use performance data to provide accountability and stimulate improvement, without leading to adverse effects which swamp the intended gains.”