Tuesday, July 3, 2012

Behind the Curtains of the Best Hospitals Rankings

The carmine-red cover of the April 30, 1990, issue of U.S. News & World Report marked the debut of “America’s Best Hospitals.” The concept was a breakthrough; hospitals had never been publicly rated before. But the first effort was decidedly modest, listing eight to 16 hospitals in each of a dozen specialties based on a survey of 400 physicians. At that time, apples-to-apples clinical data showing how well hospitals delivered care didn’t exist, so out of necessity the U.S. News evaluation relied solely on hospitals’ reputations.
 
Much has changed. While reputation continues to play an important role in the Best Hospitals rankings, clinical data such as patient outcomes and processes of care have become central. We moved away from relying on reputation alone in 1993, adding mortality, nurse staffing, and other hard measures with a direct link to the quality of patient care. Clinical yardsticks now make up nearly 70 percent of a hospital’s score in most specialties. The evolution of the methodology toward broader and deeper use of objective measures has been possible because the federal government and the healthcare community have slowly but steadily opened important data vaults to public scrutiny.

Best Hospitals 2012-13, due out July 17 on USNews.com, will reflect the latest step in this evolution. This year we have made a further adjustment to reputational scores, starting with the Best Children’s Hospitals rankings released in June. Now it’s the turn of the adult Best Hospitals rankings. It’s an adjustment that continues the shift toward judging hospitals on information that is measurable, direct, and verifiable.
 
Bernadine Healy, my late U.S. News colleague and former director of the National Institutes of Health, staunchly defended our use of medical specialists’ opinions of hospitals in the rankings as a form of peer review. But like any set of opinions, reputation is subjective. This year’s reputational modification reduces the tendency of hospitals with the highest number of physician nominations to bob toward the top of the rankings.

Taking some of the juice out of high reputational scores gives hospitals with solid clinical data more opportunity to show consumers how well they perform. Reputation will still count for 32.5 percent of a hospital’s overall score, but the modification will shrink the range of reputational scores, reducing the distance between hospitals. The U.S. News Best Hospitals Methodology Report, available July 17, will describe the details behind the change.
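
To make the arithmetic concrete, here is a minimal sketch of a weighted composite score with a compressed reputation component. Only the 32.5 percent reputation weight comes from this article; the variable names, the square-root compression, and the example figures are illustrative assumptions, not the published U.S. News formula.

```python
# Illustrative sketch only -- not the published U.S. News scoring formula.
# Assumes: rep_pct is the share (0-100) of specialists naming the hospital,
# and clinical is an already-standardized 0-100 score from objective measures.

REPUTATION_WEIGHT = 0.325   # stated weight of reputation in the overall score
CLINICAL_WEIGHT = 0.675     # remainder, carried by clinical measures

def compressed_reputation(rep_pct: float) -> float:
    """Shrink the spread of reputation scores (hypothetical square-root rescaling).

    A hospital named by 64 percent of specialists no longer towers over one
    named by 4 percent: the square root maps 64 -> 80 and 4 -> 20 on a 0-100 scale.
    """
    return (rep_pct ** 0.5) * 10.0

def overall_score(rep_pct: float, clinical: float) -> float:
    """Weighted composite: 32.5 percent reputation, 67.5 percent clinical."""
    return REPUTATION_WEIGHT * compressed_reputation(rep_pct) + CLINICAL_WEIGHT * clinical

# With compression, a hospital with few nominations but strong clinical data
# can edge past a better-known hospital with weaker clinical results.
print(overall_score(rep_pct=4.0, clinical=90.0))   # ~67.3
print(overall_score(rep_pct=64.0, clinical=55.0))  # ~63.1
```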
 
Reputation still carries significant weight. But a look across the specialty rankings when they come out on July 17 will reveal many ranked hospitals with reputational scores in the single digits perched higher in the rankings, even a few with no reputation score at all, thanks to a solid performance in other quality metrics.
 
The weight of many of those metrics is about to change, too. Previously, the weights assigned in each Best Hospitals specialty to patient volume, nurse staffing, advanced technologies, and other measures of the soundness of a hospital’s superstructure of care depended on how closely the measures correlated with one another, a statistical approach called factor analysis.

As more data became available, however, weighting each measure by its own relationship to medical outcomes emerged as the better choice. Adding more nurses, for example, leads to improved care, no matter what relationship the nurse staffing measure does or does not have with any of the other measures. That is what we have done in the 2012-13 rankings. An expert panel convened by RTI International, U.S. News’s data contractor for Best Hospitals, advised us on assigning the weights.
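
As a rough illustration of the difference between the two approaches, the sketch below derives one set of weights from how strongly a few structural measures correlate with one another (a stand-in for factor analysis) and contrasts it with weights assigned measure by measure. The measure names, data, and panel weights are invented for the example; they are not the actual Best Hospitals inputs or the RTI panel’s choices.

```python
# Illustrative contrast only -- measure names, data, and weights are assumptions.
import numpy as np

# Structural measures for a handful of hypothetical hospitals.
# Columns: nurse staffing index, patient volume index, advanced-technology count.
X = np.array([
    [1.2, 0.9, 6],
    [0.8, 1.4, 4],
    [1.0, 1.1, 7],
    [1.5, 0.7, 5],
], dtype=float)

def correlation_derived_weights(X: np.ndarray) -> np.ndarray:
    """Old approach (roughly): weight measures by how strongly they move together.

    Approximated here by each standardized measure's loading on the first
    principal component of their correlation matrix.
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    corr = np.corrcoef(Z, rowvar=False)
    _, eigvecs = np.linalg.eigh(corr)
    loadings = np.abs(eigvecs[:, -1])   # eigenvector of the largest eigenvalue
    return loadings / loadings.sum()

# New approach (roughly): weights set measure by measure, guided by each
# measure's own relationship to outcomes and by expert judgment.
panel_weights = np.array([0.5, 0.3, 0.2])   # hypothetical panel assignment

def structural_score(X: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Weighted sum of standardized structural measures for each hospital."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    return Z @ w

print("correlation-derived weights:", correlation_derived_weights(X))
print("panel-weighted scores:      ", structural_score(X, panel_weights))
```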

The change to reputation, coupled with the other methodology improvements made this year, contributed to some significant shifts in the upcoming rankings. This was expected. Past enhancements to the methodology have produced similar or greater shifts.

In 2007, for example, we began excluding patient transfers from mortality and volume information, to reduce the rankings benefit to hospitals that routinely move their sickest patients to other hospitals and the rankings penalty to the hospitals that receive them. We also altered our mortality calculations to include deaths within 30 days after admission, to send an unmistakable message that hospitals have a responsibility to not discharge patients too quickly and to oversee their care after they return home. Both of the changes were in line with good, safe care. Both had been advocated by progressive clinicians and data experts in the hospital community.
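
A minimal sketch of those two adjustments, assuming a simplified admission record; the field names and example data are hypothetical, not the actual claims layout behind the rankings.

```python
# Illustrative sketch of the 2007-style adjustments: exclude transfer patients,
# count deaths within 30 days of admission. Record layout and data are invented.
from datetime import date

admissions = [
    {"admitted": date(2011, 3, 1),  "died_on": date(2011, 3, 20), "transferred_in": False},
    {"admitted": date(2011, 4, 2),  "died_on": None,              "transferred_in": False},
    {"admitted": date(2011, 5, 10), "died_on": date(2011, 7, 1),  "transferred_in": False},
    {"admitted": date(2011, 6, 5),  "died_on": date(2011, 6, 25), "transferred_in": True},
]

def thirty_day_mortality(records):
    """30-day mortality rate among non-transfer admissions.

    Transfers are excluded so a hospital is neither rewarded for shipping its
    sickest patients elsewhere nor penalized for receiving them; counting any
    death within 30 days of admission, in or out of the hospital, removes the
    incentive to discharge patients too quickly.
    """
    eligible = [r for r in records if not r["transferred_in"]]
    deaths = sum(
        1 for r in eligible
        if r["died_on"] is not None and (r["died_on"] - r["admitted"]).days <= 30
    )
    return deaths / len(eligible)

print(thirty_day_mortality(admissions))  # 1 death among 3 eligible admissions, ~0.33
```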
 
The same is true of this year’s changes. Good data and standards that reflect a high level of care will continue to set, and raise, the bar for Best Hospitals.
