Ranking groups that rank hospitals — U.S. News comes out on top
A new report in the New England Journal of Medicine is drawing fiery comments from organizations that received poor grades for their work in rating hospital performance.
The report, featured in NEJM’s Catalyst section, turns the tables on the organizations that rate hospitals by attempting to “rate the raters.”
For consumers, hospital rankings and ratings can be confusing, given the number of rating systems, many of which produce different results. “The numerous currently available public hospital quality rating systems frequently offer conflicting results, which may mislead stakeholders relying on the ratings to identify top-performing hospitals,” according to the report.
No group received a perfect score, in this case an A, and no agency received an F.
The grades ranged from a B to a D+. U.S. News & World Report nabbed the highest grade, followed by the government’s own rating system, Hospital Compare, with a C. The CMS star ratings system is a frequent target of the American Hospital Association, which has called the program “flawed from the outset” and pushed for its elimination.
Of the four groups analyzed, Healthgrades and Leapfrog received the lowest grades of D+ and C-, respectively.
Overall, ratings systems should be used cautiously, the report said, as “they likely often misclassify hospital performance and mislead.” It’s not just patients using these rating systems — insurance companies use the data to steer patients to certain facilities and hospitals use the data as a way to identify areas for improvement, researchers said.
They also criticized monetizing ratings. “[H]aving hospitals pay the rating systems to be able to display their performance or to allow use of their ratings for hospital marketing or advertising — may create unfortunate incentives. Specifically, there is a concern that the business of selling these ratings leads to a model that encourages multiple rating systems to intentionally identify different ‘best hospitals,’” the authors wrote.
The review focused on the strengths and weaknesses of the ratings, and the grades were based on how likely a given rating system was to mislead. For example, an “A” would mean the rating system was ideal, “with little chance of misclassifying hospital performance.”
But the report drew colorful comments from leaders of both Healthgrades and the Leapfrog Group, which performed poorly in the review.
“The authors are entitled to their own opinions and it is valuable to hear their perspectives. However, they are not entitled to their own facts,” Leapfrog Group’s CEO and President Leah Binder said in a statement.
Binder criticized the authors of the report and said the article contains “serious errors.”
“In addition to basic fact-checking, future iterations of this paper would have greater credibility if the majority of authors were not employed at health systems with a history of feuding with one or more of the ratings organizations they analyze,” she said.
The authors dispute the claim that any of them are “feuding” with the rating systems.
The report found several weaknesses with Leapfrog’s ratings system. The authors’ greatest concern was with Leapfrog’s safety survey. “The survey is self-reported and there is not a robust audit in place,” the report noted.
Also, Leapfrog’s ratings exclude mortality as a safety metric, which the authors characterized as a notable oversight.
Healthgrades called the article a “highly inaccurate portrayal” of its hospital ratings. “The authors only assessed our overall hospital award (and misrepresented that methodology) and they conveniently did not include an analysis of our other service line awards, which would have addressed many of the criticisms in the article,” Mallorie Hatch, director of data science at Healthgrades, said in a statement to Healthcare Dive.
The report found weaknesses in how Healthgrades arrives at its composite score, which the authors said relies on outcomes data, a measure with limitations of its own. The report also noted that “their methods are not sufficiently described to allow replication and evaluation.”
But Ben Harder of U.S. News & World Report, which led the pack with the highest grade, was less critical of the authors and their findings. He said the report offered constructive ideas for improvement, which U.S. News will consider.
Karl Bilimoria, one of the report’s authors and a physician at Northwestern Medicine, said the authors expected comments like these.
“We understand that for them it’s hard to be graded for the first time,” he told Healthcare Dive. “We knew this would happen, so we shared everything with them from the beginning.”