Urologists outscore peers in e-prescribing and in providing patients access to their health information.
Eligible clinicians are halfway through the fifth performance year of the Quality Payment Program (QPP), which is the major determinant of any fee schedule adjustments for professional services paid by Medicare Part B. The same legislation that created the QPP, the Medicare Access and CHIP Reauthorization Act (MACRA), extended the authority of the Centers for Medicare & Medicaid Services (CMS) to publicly report performance information on doctors and clinicians on the Physician Compare website.1 The website is tailored to Medicare beneficiaries looking to compare individual providers in their area, but the entire dataset is available for public consumption.2 CMS recently published data based on the 2019 performance year, the most recent year available. In this article, I will examine urologists’ QPP Merit-based Incentive Payment System (MIPS) performance in 2019 relative to other eligible clinicians, how that information is displayed to patients on Physician Compare, and what this might mean for the future.
First, a brief refresher on the MIPS 2019 performance year. The final MIPS composite score was determined by performance in the following categories (weights): Quality (45%), Promoting Interoperability (25%), Improvement Activities (15%), and Cost (15%). Clinicians needed a final score of 30 or greater to avoid a negative fee schedule adjustment in 2021, and 75 or greater to qualify for exceptional performance. In response to the COVID-19 pandemic, CMS applied an automatic extreme and uncontrollable circumstances policy to all MIPS eligible clinicians for the 2019 performance year, which resulted in a neutral adjustment for many clinicians. Before the impact of that policy, clinicians scoring below 30 would have incurred a negative fee schedule adjustment in 2021, and the “savings” from this would have been redistributed on a prorated basis, via a positive fee schedule adjustment, to those scoring above 30. This redistribution operates across the entire pool of participating clinicians, but it is possible to examine performance by specialty by joining different CMS datasets. Finally, clinicians may report as individuals, as a group, or both; when both scores are available, CMS uses the higher of the individual or group score. How did urologists do overall, and how did our specialty perform in 2019 compared with other specialties?
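For readers who want to sanity-check a score, the final-score arithmetic is simply a weighted sum of the 4 category scores. Below is a minimal sketch in Python using the 2019 weights described above; the category values in the example are hypothetical and only illustrate the calculation.

```python
# Minimal sketch of the 2019 MIPS final-score arithmetic: a weighted sum of
# category scores, each on a 0-100 scale. The category values below are
# hypothetical and only illustrate the calculation.
WEIGHTS_2019 = {
    "quality": 0.45,
    "promoting_interoperability": 0.25,
    "improvement_activities": 0.15,
    "cost": 0.15,
}

def final_mips_score(category_scores: dict) -> float:
    """Weighted sum of category scores (0-100 each) using the 2019 weights."""
    return sum(WEIGHTS_2019[cat] * score for cat, score in category_scores.items())

example = {
    "quality": 80.0,
    "promoting_interoperability": 95.0,
    "improvement_activities": 100.0,
    "cost": 70.0,
}
score = final_mips_score(example)  # 0.45*80 + 0.25*95 + 0.15*100 + 0.15*70 = 85.25
print(f"Final score: {score:.2f}")
print("Above the 75-point exceptional performance bar:", score >= 75)
print("Avoids the 30-point penalty threshold:", score >= 30)
```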
There were 719,244 individual clinicians in 85 specialties with a final MIPS score reported by CMS in 2019. The mean and median final MIPS composite scores for the entire pool of clinicians were 85.6 and 92.3, respectively. The data include 6672 individual clinicians identified as urologists, with a mean final score of 85.2 and a median score of 91.4. Additionally, 5651 urologists (84.6%) achieved a score of 75 or greater and qualified for the exceptional performance pool vs 85.3% of all clinicians. At the other end of the performance spectrum, only 0.38% of all clinicians scored below 30, including 95 clinicians with a score of 0; only 5 urologists (0.07%) had scores below 30, none with a score of 0. The performance metrics of some high-volume specialties are included in Table 1.
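The specialty-level figures above come from combining CMS’s public files. A rough sketch of that kind of analysis follows; the file names and column names (npi, final_mips_score, primary_specialty) are placeholders for illustration, not the actual field names in the CMS downloads.

```python
import pandas as pd

# Sketch: join the public performance file to the clinician demographics file
# and summarize final scores by specialty. "scores.csv", "clinicians.csv",
# "npi", "final_mips_score", and "primary_specialty" are placeholder names,
# not the real CMS headers.
scores = pd.read_csv("scores.csv")          # one row per clinician with a final MIPS score
clinicians = pd.read_csv("clinicians.csv")  # NPI-level demographics, including specialty

merged = scores.merge(clinicians[["npi", "primary_specialty"]], on="npi", how="inner")

grouped = merged.groupby("primary_specialty")["final_mips_score"]
summary = pd.DataFrame({
    "n": grouped.size(),
    "mean_score": grouped.mean(),
    "median_score": grouped.median(),
    # share scoring 75 or higher (exceptional performance)
    "pct_exceptional": grouped.apply(lambda s: (s >= 75).mean() * 100),
    # share scoring below 30 (penalty range before the COVID-19 waiver)
    "pct_below_30": grouped.apply(lambda s: (s < 30).mean() * 100),
})

print(summary.loc["Urology"])
```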
CMS also publishes measure-level data in each of the categories. The data are reported separately for clinicians reporting individually vs as a group and contain information about which category measures, when there are several to choose from, were reported. Remember that a urologist may report as part of a group whether they are in a single-specialty practice, a multispecialty practice, or even an advanced payment model; most urologists reported as part of a group in 2019. In the Quality category in 2019, most groups reported EHR measures, followed by registry measures and claims-based measures. In general, urologists participating in group reporting used the same generic measures as most other clinicians but, with a few exceptions, scored worse on average. This is unsurprising. First, only a few urology-specific measures were available in 2019. Second, some urologists were (and are) in multispecialty groups dominated by specialties that report on primary care measures. Third, the EHRs commonly used by urologists in 2019 were not configured to easily calculate or report urology-specific measures, nor were they certified for those measures. Urologists forced to report on primary care measures can be forgiven for not performing as well as their primary care colleagues.
In the Promoting Interoperability category, urologists outperformed their peers on e-prescribing and on Provide Patients Electronic Access to Their Health Information, and underperformed relative to peers on Supporting Electronic Referral Loops. In 2019, electronic referrals may still have been uncommon among urologists and/or not fully supported by their EHRs. Like most clinicians, urologists performed nearly perfectly on Improvement Activity measures. CMS did not publish measure-level data in the Cost category for the 2019 performance year.
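The measure-level files support the same kind of comparison measure by measure. The sketch below computes mean performance rates per measure for urologists vs all clinicians; again, the file and column names are assumptions for illustration rather than the actual CMS field names.

```python
import pandas as pd

# Sketch: compare urologists' mean performance rate on each reported measure
# against the mean for all clinicians. "measure_scores.csv" and its columns
# ("measure_title", "performance_rate", "primary_specialty") are placeholders.
measures = pd.read_csv("measure_scores.csv")

overall = measures.groupby("measure_title")["performance_rate"].mean()
urology = (
    measures[measures["primary_specialty"] == "Urology"]
    .groupby("measure_title")["performance_rate"]
    .mean()
)

comparison = pd.DataFrame({"all_clinicians": overall, "urologists": urology}).dropna()
comparison["difference"] = comparison["urologists"] - comparison["all_clinicians"]

# Positive differences flag measures (e.g., e-prescribing) where urologists
# outperformed the overall pool; negative differences flag relative weak spots.
print(comparison.sort_values("difference"))
```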
How is this performance information displayed to patients? I encourage you to go to Physician Compare and see for yourself. Search results for physicians return a page with information about EHR activities (patient view of Promoting Interoperability measures, security risk analysis, and registry reporting) and Improvement Activities. The Quality section of the page displays actual performance metrics for each of the reported Quality and Promoting Interoperability measures. The measures are described in lay terms, such as “giving patients with diabetes a kidney exam” (Diabetes: Medical Attention for Nephropathy), with an explanation of the rationale: “Diabetes can cause complications, like kidney disease. Screening the kidneys of patients with diabetes can help patients receive timely treatment. To give this clinician a star rating, Medicare looked at the percentage of this clinician’s patients with diabetes who had a kidney exam or received treatment for kidney disease.” The performance is displayed as a “star rating,” a value from 1 to 5 that is mapped to the clinician’s actual performance score on that particular measure.
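For illustration only, the sketch below maps a performance rate to a 1-to-5 star rating using made-up cut points. CMS derives the actual cut points from measure-specific benchmarks, so the real thresholds vary by measure; the bins here are purely hypothetical.

```python
# Deliberately simplified, hypothetical mapping of a measure's performance
# rate (0-100) to a 1-5 star rating. The cut points below are placeholders;
# CMS bases the real mapping on measure-specific benchmarks.
HYPOTHETICAL_CUTS = [(90, 5), (75, 4), (60, 3), (40, 2)]

def star_rating(performance_rate: float) -> int:
    """Return a 1-5 star rating for a performance rate under the illustrative cuts."""
    for cut, stars in HYPOTHETICAL_CUTS:
        if performance_rate >= cut:
            return stars
    return 1

print(star_rating(95))  # 5 stars under these illustrative cut points
print(star_rating(55))  # 2 stars
```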
What does this mean for you and your practice? First, almost all clinicians in the 2019 performance year would have avoided a negative adjustment in 2021 because the bar was low, which means the penalty pool was small and the positive fee schedule adjustments for those who scored well were correspondingly modest. Indeed, most clinicians qualified for exceptional performance, a temporary benefit under MIPS that is scheduled to expire at the end of the 2022 performance year. However, the threshold to avoid a negative adjustment is rising each year and is currently proposed to be 75 points for 2022. In 2019, about 15% of eligible clinicians scored less than 75, so larger fee schedule adjustments may be seen in future years. Second, these individual measure data provide a benchmark within the specialty for urologists to compare themselves with other urologists. This may be helpful if urology-specific measures become more commonly used and data are provided more rapidly. For example, in 2019, a performance rate of 95 on the measure Prostate Cancer: Combination Androgen Deprivation Therapy for High Risk or Very High Risk Prostate Cancer was significantly above average for all clinicians but significantly below the mean score for urologists.
Finally, as the QPP matures, more information is available to patients, who are increasingly using tools like Physician Compare to evaluate their choices. When urologists start using specialty-specific measures, this information will be published and could directly influence patient choices. The trend is sure to continue as we march slowly, but relentlessly, toward “value-based care.” Urologists should view their own data on Physician Compare to understand the full arc of MIPS data, its current limitations, and how their patients may interpret this information.
References
1. Physician Compare. Centers for Medicare & Medicaid Services. Accessed August 18, 2021. https://bit.ly/3meEZn9
2. CMS Datasets: Doctors and Clinicians. Centers for Medicare & Medicaid Services. Accessed August 18, 2021. https://bit.ly/3CZ9vqY