Labour’s 10-year plan, published in July (seriously weakened by a lack of funding and an effective implementation strategy), promised to publish “easy-to-understand league tables, starting this summer, that rank providers against key quality indicators”.
This seems to be the answer to a question that is almost never asked by patients (who for the most part just want their local hospital to be properly staffed and funded, and delivering safe, effective and prompt treatment).
It is part of Labour’s shift back towards notions of “patient choice,” competition and a “market” in health care incorporating private providers.
These same ideas were first embraced under Tony Blair’s New Labour government from 2000, and nobody took them up more eagerly than the then Health Secretary, Alan Milburn. Since stepping down as health secretary to spend more time with the private sector, Milburn has made millions, and has now been brought back by Wes Streeting into the Department of Health, where he seems to have pretty much taken charge.
Hence the 10-year plan is stuffed with discredited New Labour policies that failed to deliver first time around, even in the decade when the NHS was growing faster than ever before or since through substantial above-inflation increases in funding.
Competition
To encourage “patient choice”, and create real “competition” between hospitals, patients have to be offered some information that might encourage them to consider travelling out of their local area for treatment.
Back in the 2000s Milburn’s version was “star ratings,” through which ministers hoped patients would be so eager to get to the highest-rated hospital they would be willing to make longer journeys, and create the competitive market which (despite all the evidence) Milburn and others believed would somehow drive up the quality of elective care and reduce costs.
The star ratings system for NHS hospitals in England was introduced in 2001, and ranked every NHS trust (including acute hospitals, specialist trusts, and others) on a scale from zero to three stars, based on up to 45 performance indicators, covering areas such as waiting times, cancelled operations, cleanliness, staff morale, financial management, and clinical outcomes like mortality rates. The top three-star rating was required to achieve foundation trust status.
The system followed the classic perverted logic: it aimed to “name and shame” underperformers, and threaten that chief executives could lose their jobs … while rewarding high achievers. While the elite three-star trusts were feted with extra funding and greater autonomy, failing trusts were left to struggle for survival.
By 2005, the final year of the system, 73 trusts gained three stars, 53 got two stars, 38 received one – and nine had zero stars. The star ratings were (rather implausibly) credited with driving tangible gains, such as reducing average hospital waiting times from 18 months in 2000 to 18 weeks by 2008 – years in which levels of funding were transformed to historically high levels, allowing capacity to increase – and better A&E performance (even though A&E was one area where patients with the most serious needs have never had any choice). Analysis in 2006 suggested the ratings had led to various ways of ‘gaming’ the system.
The star ratings also faced significant criticism for being too simplistic. Reducing complex hospital systems to a single score ignored factors like the increased patient complexity in teaching hospitals, the physical age and condition of the buildings, and regional variations. The star ratings also made it harder to recruit staff and senior managers to zero- and single-starred trusts, demoralised staff working in conditions over which they had no control, and discouraged patients from trusting their local, lower-rated services.
Unintended consequences
The new quarterly league table system covering all 205 NHS trusts uses fewer metrics (30), spanning clinical and patient experience areas as well as staff engagement, waiting times and infection rates — but it has brought a fresh debate on the extent to which it repeats past flaws.
Some of these are well explained in a ‘long-read’ analysis for the NHS Confederation, which represents commissioners and providers. It warns that league tables can result in unintended consequences, and can be misleading, especially for the wider public who will be unaware of many of the complexities glossed over by the apparently simple scoring system.
The article points out that patient priorities are very different from the priorities of NHS England and the government seeking to balance the financial books:
“While financial management is an important part of organisational oversight, it is unlikely to be a key concern for patients choosing where to receive care. Instead, they want to know which hospital has the best health outcomes, patient experience and the shortest waits. This raises a concern: will people understand that a hospital ranked lower in the league table might still provide high-quality, safe care?”
The Confed article also warns that research by the King’s Fund shows that, in practice, most patients choose their local provider – even when given the option to travel elsewhere: “Notably, those from the most deprived socioeconomic areas are least likely to exercise choice when it was available.”
As a result, increasing patient awareness of choice “did little to focus providers on improvement because so few patients exercise it.” So league tables on their own are “ill-suited to guide patient choice.”
Along with other critics of the new scheme they also warn of the distorting impact of the “financial override”, under which a hospital trust that may be delivering high quality care but running a deficit can rise no higher than segment three of four – whatever the circumstances that may have led to the deficit.
Unintended consequences are another problem with league tables: examples include the obvious, such as seeking to improve performance without investment by placing even more impossible workload and strain on staff, even resorting to bullying: this can result in increased sickness absence and growing numbers of unfilled vacancies.
Other unintended consequences include ‘measurement fixation,’ in which trusts focus only on what is being measured, allowing other aspects of care to suffer, and an obsessive focus on financial balance – a key factor driving the staff cuts and catastrophic drop in quality of care at Mid Staffordshire Hospitals in the mid-2000s, which became a national scandal.
Measuring “distance from plan” can lead to trusts submitting less ambitious plans, and tying executive pay to performance can likewise encourage more limited targets. Any league tables that seek to penalise or remove senior management in failing trusts risk making it impossible to recruit to fill these senior posts – perpetuating poor performance, while the top performers attract all of the best managers.
By contrast, the DHSC press release announcing the new league tables confirms that ministers have failed to understand any of these issues. From the outset it is assumed that league tables automatically drive improvement:
“This is not just about data, it’s about delivery. The public expect results from the record funding going into the NHS, and this reform ensures that investment is matched by improvement. That is why top-performing trusts will be rewarded with greater autonomy, including the ability to reinvest surplus budgets into frontline improvements such as new diagnostic equipment and hospital upgrades.”
Indeed the trusts that are already performing best are to be given even more, while those that are struggling face the threat of various levels of intervention by NHS England, as set out in the NHS enforcement guidance.
Segments
The first league tables show just 16 acute trusts in the top segment, only 11 in segment 2, but 76 in segment 3, and 31 relegated to the bottom segment 4. All but one of the trusts in segment 4 (the exception being United Hospitals of Lincolnshire) are in financial deficit: but ten of the 31 have chronic financial problems stemming from disastrous PFI schemes (hospitals built since 1997 under the Private Finance Initiative, which inflated costs but often also reduced the numbers of beds), and four more, including the bottom two, are collapsing with defective RAAC concrete, and awaiting rebuilds.
What is more confusing is that the way the rankings have been calculated means that many trusts have been placed part-way along a range of possible positions: Salisbury Foundation Trust and Barking, Havering and Redbridge University Hospitals (BHRUT), for example, are in segment 3, ranked equal 57th, but Salisbury could have been anywhere from 33rd to 110th, and BHRUT between 32nd and 110th.
In segment 1, the Royal Orthopaedic Hospital is placed 14th, but could have come anywhere between 3rd and 96th: Guy’s and St Thomas’s (15th) could have been anywhere between 5th and 76th: in other words both could have been in segment 3. North Bristol (24th) in segment 2 could have been placed anywhere between 11th and 107th.
The sheer complexity of the calculations (and the explanations of the calculations) along with the wide possible variation in final ranking must mean the league tables are hopeless in practice for the occasional punter wondering whether there is a safer hospital within an affordable distance.
But NHS England appears blissfully unaware of the way any of this will look to a member of the public, and promises:
“To support transparency, NHS England will publish a dashboard to give the public access to the data that underpins segmentation.
“Each provider’s domain scores, organisational delivery score and final segmentation will be shown alongside the individual metric scores used to calculate them and the results of their contextual metrics.”
So that’s alright, then.
However, with the planned wave of redundancies and job losses put on indefinite hold, financial problems growing in every trust, and winter coming, it’s a fair guess that the last thing local trust bosses needed was grief over such a dodgy set of league tables, and the brain-ache required to draw any useful conclusions from them.
Dear Reader,
If you like our content please support our campaigning journalism to protect health care for all.
Our goal is to inform people, hold our politicians to account and help to build change through evidence-based ideas.
Everyone should have access to comprehensive healthcare, but our NHS needs support. You can help us to continue to counter bad policy, battle neglect of the NHS and correct dangerous misinformation.
Supporters of the NHS are crucial in sustaining our health service and with your help we will be able to engage more people in securing its future.
Please donate to help support our campaigning NHS research and journalism.