My interest in mortality goes back 40 years to when I trained as an actuary and carried out medical underwriting. I have written before about how, early in my NHS career, I suggested applying some of the rigour we used in the insurance industry to mortality studies. I was interested in mortality rates and the Hospital Standardised Mortality Ratio (HSMR); it was familiar territory.
Some years later, I have grave concerns about the misuse of the HSMR, based in part on the often-cited academic papers that raise methodological issues. I am also aware that different methods of deriving an HSMR from the same underlying data can produce different answers.
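To see why, consider a minimal sketch in Python. Everything here is hypothetical – toy patient records and made-up model weights, not any real HSMR specification – but it shows how two risk-adjustment models can turn the same admissions, and the same deaths, into quite different ratios.

```python
# Hypothetical sketch: two toy risk-adjustment models applied to the
# same admissions data. Records and weights are invented for illustration.

admissions = [
    # (age_band, coded_comorbidities, died)
    ("75+",   3, True),
    ("75+",   1, False),
    ("60-74", 2, True),
    ("60-74", 0, False),
    ("<60",   1, False),
]

def risk_model_a(age_band, comorbidities):
    # Model A: risk driven mainly by age (hypothetical weights).
    base = {"<60": 0.03, "60-74": 0.12, "75+": 0.30}[age_band]
    return min(base + 0.02 * comorbidities, 1.0)

def risk_model_b(age_band, comorbidities):
    # Model B: same data, but weights comorbidity coding far more heavily.
    base = {"<60": 0.02, "60-74": 0.10, "75+": 0.25}[age_band]
    return min(base + 0.12 * comorbidities, 1.0)

def hsmr(records, risk_model):
    # HSMR is conventionally 100 x observed deaths / expected deaths.
    observed = sum(died for _, _, died in records)
    expected = sum(risk_model(age, com) for age, com, _ in records)
    return 100 * observed / expected

print(f"Model A HSMR: {hsmr(admissions, risk_model_a):.0f}")  # ~198
print(f"Model B HSMR: {hsmr(admissions, risk_model_b):.0f}")  # ~128
# Same patients, same two deaths: the choice of model alone moves the ratio.
```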
So I disliked the adoption of one flavour of HSMR as a commercial product, touted by the DH, in its role as some kind of business partner, as vital to good management. I disliked the use of the ratings for simplistic, league-table-style denunciation of trusts. But I was warned (in 2008) for speaking publicly against the abuse of mortality rates.
In recent months, from Mid Staffs through to Keogh, we have seen blatant political abuse, with ludicrous claims in various media about excess deaths derived from the abuse of HSMRs. So, for example, the only thing we know for certain about the often-quoted HSMR figures from Mid Staffs is that they were wrong – not something you ever hear in the “media”.
None of these many misuses was ever challenged by those who knew better. On the contrary, the ridiculously incorrect use of inaccurate information was given credibility by the very people who knew better.
But even before that shameful episode my own work had shown me why the way the HSMR is used is flawed. First, it is easy to use the whole-country figures to compare one year with another, and when I did that it suggested a 6-8% improvement in mortality rates across the whole NHS in one year; clearly impossible. Second, the annual ratings tables showed a number of trusts improving their HSMR significantly while other information showed that neither the number of actual deaths nor the case mix had changed; clearly impossible. Something was wrong. Things have improved and we now have the Summary Hospital-level Mortality Indicator (SHMI), which is a better indicator – but the issues remain.
All indicators like the HSMR and the SHMI depend on coding. I observed coding taking place, compared the process with the rigour used 40 years ago for underwriting, and found it poor. Lightly trained clerical staff trying to interpret incomplete information to arrive at a codeable diagnosis was only one of several obvious sources of potential error, yet no serious systematic checks, or even spot checks, were applied. The level of coding error and its impact on the HSMR could easily be studied, but isn’t.
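As a sketch of how simple such a study could be: the fragment below uses a toy risk model with invented parameters (not the real HSMR methodology) to perturb the recorded comorbidity coding for a hypothetical trust and recompute the ratio. The observed deaths never change; only the coding does.

```python
import random

random.seed(1)

# Hypothetical trust: fixed observed deaths, with expected deaths driven
# by a toy risk model in which each coded comorbidity adds to the risk.
OBSERVED_DEATHS = 520
N_ADMISSIONS = 10_000
BASE_RISK = 0.04              # invented baseline death risk
RISK_PER_COMORBIDITY = 0.01   # invented uplift per coded comorbidity

def hsmr(comorbidity_counts):
    expected = sum(BASE_RISK + RISK_PER_COMORBIDITY * c
                   for c in comorbidity_counts)
    return 100 * OBSERVED_DEATHS / expected

# Coding as recorded: on average one comorbidity per admission.
recorded = [random.randint(0, 2) for _ in range(N_ADMISSIONS)]
print(f"HSMR as coded:       {hsmr(recorded):.0f}")   # ~104

# Deeper coding: the same patients, but one extra comorbidity captured
# on 30% of admissions. Not a single patient outcome has changed.
deeper = [c + (1 if random.random() < 0.3 else 0) for c in recorded]
print(f"HSMR, deeper coding: {hsmr(deeper):.0f}")     # ~98
# The trust appears to cross from "worse than expected" to "better than
# expected" purely through coding depth.
```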
It is regularly claimed that a high HSMR is an indicator of poor care in the organisation as a whole – that a high HSMR means more deaths occurred than should have been expected given the adjusted case mix. This may even be true, but no study has ever confirmed it.
If there were another, universally accepted method that indisputably ranked trusts by the quality of their care, we could see how well the HSMR correlated with reality. Such a baseline study would not be impossible, though it would require some investment, yet there has never been any attempt to do one (still, one outcome from Keogh is that we are promised one). Can we honestly claim we know the underlying reasons for variations in mortality rates?
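The analysis itself would be trivial once such a ranking existed. A sketch, using invented numbers for ten hypothetical trusts and SciPy’s standard rank-correlation function:

```python
from scipy.stats import spearmanr

# Invented data: HSMR values for ten hypothetical trusts, alongside an
# imagined independent care-quality ranking (1 = best care). No such
# accepted ranking exists today.
hsmr_values  = [89, 94, 97, 101, 103, 106, 110, 112, 118, 125]
quality_rank = [2, 1, 4, 3, 6, 9, 5, 7, 10, 8]

rho, p_value = spearmanr(hsmr_values, quality_rank)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A strong positive rho would mean trusts with higher HSMRs also rank
# worse on care quality, supporting the claim; a weak rho would
# undermine it. We simply do not have the baseline data to run this.
```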
Maybe we would also find that higher mortality rates were not so much linked to care quality as simply correlated with levels of funding, as some claim. We don’t know.
Which leads on to something that we do know, derived from proper, clinically led, non-statistical studies into avoidable deaths. The outcome of such a study is not just a figure: it is also a lot of information about why the figure is at that level – what the most common causes of avoidable deaths were. That is something immediately translatable into action. Such studies are clinically intensive and in some cases involve the clinicians responsible for the care, and even relatives and carers. They are more reliable because there is no coding error and no opportunity for gaming.
What we do know from studies of this kind is that the level of avoidable deaths is around 5-6% of all deaths. The statisticians tell us that this renders the HSMR unreliable for detecting variations in that segment; it cannot get at the 5-6% – which is where the variation due to clinical quality arises – in any meaningful way.
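The statistical point is easy to illustrate with back-of-envelope Poisson arithmetic; the numbers below are purely illustrative, not drawn from any real trust.

```python
from math import sqrt

# Hypothetical trust with 1,000 deaths a year. If roughly 5-6% of deaths
# are avoidable, the "signal" a mortality ratio would need to detect is
# around 50-60 deaths.
total_deaths = 1_000
avoidable = 0.055 * total_deaths        # ~55 deaths

# Random year-to-year noise in the total count is roughly Poisson,
# with standard deviation sqrt(N) ~ 32 deaths.
noise_sd = sqrt(total_deaths)

print(f"Avoidable deaths (signal): ~{avoidable:.0f}")
print(f"Poisson noise (1 sd):      ~{noise_sd:.0f}")

# Even halving avoidable deaths (~28 fewer) shifts the total by less
# than one standard deviation of noise, so a real and substantial
# improvement in clinical quality would be invisible in the headline ratio.
print(f"Effect of halving them:    ~{avoidable / 2:.0f} fewer deaths")
```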
For those responsible for managing clinical quality, we know there are many things that should be done. Always do the clinical audit studies, carry out mortality investigations in an open and transparent environment, make sure your data is accurate, invest in analysis skills, do proper case-note investigations, and look not just at all deaths but also at near misses. These are all real investments in quality improvement. In a collaborative NHS we could do far more to share information and best practice.
Why bother with the HSMR at all? Well, I am convinced that we have to find ways to use information better: to provide early alerts of things that may be going wrong, or to identify where (perhaps with the best intentions) current practice is falling behind best practice. But until we have much better data, a much more sophisticated set of analysis tools, and competent analysts, we will get only the fog generated by the misuse of one tool that could actually be valuable if used properly.