Meaningful metrics and proxy measures – what should they be telling us?
In my previous post, I highlighted the indicator monitoring burden on information departments. NHS National Standard Contract indicators aside, the Single Oversight Framework requires the monitoring of 30-plus organisational indicators for the typical acute trust. In addition, the NHS specifies that national requirements are supplemented by indicators agreed at a local level with commissioners.
However, the challenge comes not only from the indicator burden, but also from data that is missing or simply not recorded. When it comes to getting a bigger picture of what is happening in the health system, a great deal of useful data may never be captured at all. For example, we know that in some areas allied health professionals working in a community setting cannot see information in a patient record that acute sector colleagues have been working on.
This means that when it comes to improving outcomes across health systems, commissioners and providers are faced with an incomplete picture. It is possible to use proxy measures to determine where resources should be focussed, and this is happening in some parts of the country.
In Oldham the Thriving Community Index is being developed to reflect what is happening in over 100 neighbourhoods in the borough. Currently there are over 20 indicators in three domains: place, residents (behaviours) and reactive demand. In addition to social, environmental and demographic factors, the low-level detail also helps understand local features and nuances specific to each community. As well as these indicators, the project is also adding other indicators derived through professional shared assessment – using the knowledge and awareness of service providers to collate softer intelligence around community influencers and behaviours.
So, as well as working to reduce the indicator burden, we could also be thinking about using more meaningful metrics like the ones being used in Oldham. It is imperative we ensure our informatics professionals and clinicians are part of the debate from day one. Historically we have derived indicators from existing data; we now have to shift our thinking to define the appropriate output and outcome-based indicators that will inform the data we collect. This can only happen if we get this level of engagement from the outset.
Ultimately, devolved and local models of care are steering us away from the traditional concept of national centralised data collections. So, how does this compromise our ability to benchmark – how do we know what good looks like? The challenge may have evolved, but the opportunity has never been greater to bring data and analytics to the fore.
At Dr Foster we are working hard to bring meaningful metrics into play. This will include a single patient safety indicator, an outcomes metric and an indicator that covers efficiency. In my next post I will be explaining more about our work in this area.