Despite being told by officials and academics that generalisations from mortality rates are not sensible, our media are at it again, using the figures for blatantly party-political purposes.
Mortality rates like HSMR and SHMI are about as useful as all the other headline measures that attempt to sum up the quality of something as complex as a hospital. Despite years of use and abuse, we know very little about WHY some hospitals have higher rates. At best we know that some are better than others because they have more skills and resources, and that trusts with higher levels of staff satisfaction provide better care – not really a surprise.
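For readers unfamiliar with how these measures are built, here is a minimal sketch of the standard observed-over-expected construction behind ratios like HSMR – the trust figures used are invented purely for illustration:

```python
def standardised_mortality_ratio(observed_deaths, expected_deaths):
    """Observed deaths divided by 'expected' deaths, scaled so 100 means 'as expected'."""
    return 100.0 * observed_deaths / expected_deaths

# The 'expected' figure comes from a case-mix model fed by clinical coding,
# so coding errors or "gaming" shift the ratio even when care is unchanged.
print(standardised_mortality_ratio(520, 500))  # 104.0 – reads as 4% 'excess'
print(standardised_mortality_ratio(450, 500))  # 90.0 – reads as 10% 'better'
```

Note that everything hangs on the denominator: change how admissions are coded, and the "expected" deaths move without a single patient being treated differently.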
Saying hospitals with higher mortality rates are needlessly killing people is as stupid as saying those with lower rates are heroically saving lives – although all hospitals do indeed save lives.
Here are some facts about mortality measures.
- The only thing we know about the much-discussed published mortality rates at Mid Staffs is that they were wrong – many suggest that if the data were correctly analysed now, the rate would have been below average.
- They depend on “coding” carried out within the trust, and it is well established that this gives rise to significant errors as well as possible “gaming”. Even if “coding” were perfect, the methods still cannot adjust for all possible causes of variation that have nothing to do with care quality.
- According to one leading brand, mortality across the whole NHS improved by 8% in a year – nobody believes this.
- When mortality rates became of great interest, a number of trusts dramatically improved their rates – some by up to 30% – yet there was no noticeable change in the number of actual deaths.
- Looking at the most recent data from the two leading mortality measures shows a couple of trusts are significantly above average (scary, dangerous) on one measure and significantly below average (brilliant, exemplary) on the other.
- In a controlled experiment, exactly the same data on a sample of hospitals was analysed by four organisations using different methods and they got totally different answers.
- The trust of the year according to the leading “mortality” analysis vendor is in special measures with the regulators and in breach of its terms of authorisation.
- Two trusts castigated for poor mortality one year got praise the next for dramatic improvements and two years later were back on the naughty chair and under investigation.
- Such studies as have been carried out show no direct correlation between mortality ratios and poor care as measured by uncontested methods such as case-note review.
Rates of unnecessary deaths can be defined and measured without statistical manipulation; they run at around 6% across the NHS, and this figure is incompatible with the claims made about the relevance of crude mortality rates.
By the way – in any year there will always be 14 trusts in the “worst 14” category, and while there are 150-plus trusts there will always be some whose mortality rates are above average, and some of those will be significantly above average.
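The arithmetic behind that point can be sketched with a small simulation – every figure here is invented for illustration, and the point is that even perfectly identical trusts produce "significant" outliers by chance:

```python
import math
import random

# Illustrative assumption: 150 identical trusts, each with 500 expected deaths.
random.seed(0)
n_trusts, expected = 150, 500.0

# Normal approximation to Poisson death counts for trusts with identical care
observed = [random.gauss(expected, math.sqrt(expected)) for _ in range(n_trusts)]
smrs = [o / expected for o in observed]

# Conventional 95% control limits around a standardised ratio of 1
limit = 1.96 / math.sqrt(expected)
flagged = sum(s > 1 + limit or s < 1 - limit for s in smrs)

# Chance alone flags about 5% of identical trusts: roughly 7 or 8 a year,
# and the probability that no trust at all is flagged is under 0.1%.
expected_flags = 0.05 * n_trusts    # 7.5 trusts per year, on average
prob_no_flags = 0.95 ** n_trusts    # ~0.0005
```

In other words, with 150-plus trusts a handful will sit outside the "significant" limits every single year through chance alone, which is why "significantly above average" by itself proves nothing about care.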
Using crude measures as a signal to trigger investigation is possible, but as the HCC, CQC and others found, using this kind of data to predict where to look for poor care was inadequate and could be misleading; only pervasive, competent, in-depth on-site investigation is of any real value. It's time the Royal Colleges, the NHS Confederation and all the other bodies spoke out and stopped the nonsense.