Expert Voices

Valuing Physician Work in Medicare: Time for a Change


By: Miriam J. Laugesen, Ph.D., Assistant Professor, Department of Health Policy and Management, Mailman School of Public Health, Columbia University

For more than 20 years, Medicare payments to physicians have been based on a Resource-Based Relative Value Scale (RBRVS) designed to capture the relative variation in physician work, practice expenses, and medical liability insurance costs for each of the more than 7,000 services provided by physicians. Not only do these valuations affect the relative profitability of specific services and the earnings of the specialties that provide them, but their use by many Medicaid programs and commercial payers also amplifies their impact well beyond Medicare. Despite the critical need to get the values right, however, there is considerable evidence that the values for many services are inaccurate, with misvaluations potentially encouraging provision of surgical and procedural services over primary care.1

The Centers for Medicare and Medicaid Services (CMS) makes annual updates to the RBRVS to reflect developments in technology and medical practice, which create new services and can change the time and effort required to deliver existing services. In addition, the law requires a comprehensive review of the fee schedule values every five years. Hundreds of annual updates and thousands of fee schedule codes make maintenance of the RBRVS a daunting task.

Since the inception of the fee schedule, CMS has relied on the American Medical Association’s Relative Value Scale Update Committee (RUC) to accomplish this work. The RUC is a non-governmental body with membership from the major specialty societies, primary care physicians, the AMA and the osteopathic and allied health professions. It meets three times a year to develop update recommendations for CMS. Between 1994 and 2010, CMS accepted almost 90 percent of RUC recommendations,2 although in recent years it has become more likely to disagree with the RUC.

Recent media reports have drawn attention to the role of the medical profession in the update process.3,4 Specialty societies and RUC leadership respond by emphasizing the unique expertise of the committee. What is the real story? In this essay, I provide insights gained from interviews with current and former RUC participants.* My observations confirm the dedication of the RUC members and staff but also reiterate concerns that have been raised by others5 regarding the reliability of the evidence underpinning the RBRVS.

Questionable Data, Selectively Used

To make its work-value recommendations, the RUC largely relies on specialty society surveys that collect data on the intensity of effort and amount of physician time required to provide specific services. Intended to reflect factors such as technical skill, physical exertion and mental stress, estimates of intensity of effort are necessarily subjective and prone to error. Time should be more easily measured, but as early as 2006 researchers used operating room logs to show that RUC time estimates were off base.6,7 My comparison of those measured times to 2014 RUC times shows that RUC times remained longer than actual times for 20 of the 24 services studied (Figure 1). Across all 24 services, RUC times overstate real-world times by an average of 33 percent and by as much as 127 percent in one instance. Several problems with the survey methodology and the way the data are used likely contribute to at least some of these discrepancies.
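For readers interested in how such summary figures are derived, the minimal sketch below (in Python, using made-up service names and minutes rather than the actual data behind Figure 1) computes the per-service gap between RUC time estimates and measured times, then reports the average and the largest overstatement.

```python
# Illustrative sketch only: service names and minutes are hypothetical,
# not the actual data behind Figure 1. Each entry pairs a RUC time estimate
# with a measured (operating-room-log) time for the same service, in minutes.
services = {
    "service_a": {"ruc": 90, "measured": 60},
    "service_b": {"ruc": 45, "measured": 50},
    "service_c": {"ruc": 120, "measured": 70},
}

# Percentage by which the RUC time exceeds (or falls short of) the measured time.
gap_pct = {
    name: 100 * (t["ruc"] - t["measured"]) / t["measured"]
    for name, t in services.items()
}

average_gap = sum(gap_pct.values()) / len(gap_pct)
largest = max(gap_pct, key=gap_pct.get)

for name, pct in gap_pct.items():
    print(f"{name}: RUC time vs. measured time: {pct:+.0f}%")
print(f"Average gap across services: {average_gap:.0f}%")
print(f"Largest overstatement: {largest} ({gap_pct[largest]:+.0f}%)")
```

Applied to the real paired times, the same calculation yields summary statistics such as the 33 percent average and 127 percent maximum cited above.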

Small and Non-Random Samples

Until this year, the RUC required societies to survey a minimum of 30 physicians (it is now 50). At times, however, it has accepted even smaller samples, and it permits the use of standing panels of physicians who complete surveys regularly. Such panels may not be broadly representative of physicians or specialty society members. For example, one society has used a panel drawn from its practice management section, whose members are likely to have a better understanding than most physicians of reimbursement policy and how survey results can influence payment rates. The problems introduced by small purposive samples are likely compounded by low response rates.

The Challenges of New or Specialized Procedures

Estimating work values for new services that are not yet widely disseminated in practice, and for services that are provided infrequently, can be challenging. The RUC requires three years of utilization data before it will review a new technology but does not appear to require that a minimum number of survey respondents be familiar with a given service once it is reviewed; as a result, physicians who have never performed the specific procedure may be providing data. In other cases, societies rely on physician lists provided by device manufacturers to identify providers known to be using the procedure. Manufacturers’ interests in obtaining higher work values that could increase the uptake of their product might influence which physicians they nominate for the survey sample. Although no such bias has been documented, greater scrutiny of physician familiarity with the services being assessed and of the device industry’s role would be beneficial.

Selective Use of Data

Even when specialty societies have survey data from 50 or more generally representative respondents, the RUC allows them to use expert panels to develop alternative estimates if they deem the survey data to be “flawed or incomplete.” For example, as one participant told me, if survey data suggest work values should be lower, a society can put forth alternative estimates from an expert panel to override the survey data. Specialty societies making their case to the RUC have the discretion to ignore survey findings if they think the survey participants misunderstood the questions or undervalued the work involved. While the RUC may reject specialty recommendations, these kinds of ad hoc adjustments can, and according to CMS do, end up in final recommendations; the agency has increasingly highlighted work values that are not consistent with survey data.8

Making Improvements

The first step to improving the quality of the evidence and strengthening the integrity of the RUC-centered update process would be to insist on surveys that meet scientific protocols. The RUC’s recent move to require 50 to 75 completed surveys when collecting data for services that are performed frequently is a positive step, but as others such as Robert Berenson have argued, this change does not address the incentive for physicians to inflate reported times.9 Other improvements might come from using independent organizations to manage the surveys and discouraging specialty societies and RUC members from cherry-picking survey data. If there are reasons to suspect the reliability of certain data, these concerns should be made explicit in the recommendations submitted to CMS or, ideally, a new survey should be fielded using a different sample. Likewise, CMS can improve accuracy by requiring reporting of sample sizes, response rates, missing values and non-respondent characteristics, and by continuing to question inconsistencies between survey data and recommendations.

Continued attention to validation and external oversight of the RUC’s work will also remain important. The Affordable Care Act expanded CMS’s authority to review and adjust values for codes that are potentially mis-valued; as a result, CMS has undertaken new research to assess the time spent providing various services. Last year, Representative Jim McDermott introduced legislation to establish a new federal committee to review and supplement the RUC’s work. Although this bill never made it out of committee, in November 2013 the RUC began publishing meeting minutes and committee voting results.

In March, legislation was passed that provided $2 million per year to CMS to collect additional data needed to determine appropriate relative values and mandated a Government Accountability Office report on the RUC process next year. And in July, in its proposed rule for 2015 physician payments, CMS sought comments on a plan to publish revised fee schedule values as proposed rules rather than as interim final rules, allowing more time for public analysis and comment on the RUC recommendations.10 Ongoing evaluation of the update process thus appears likely, and desirable.

Citations
  1. Sinsky CA, Dugdale DC. “Medicare Payment for Cognitive vs Procedural Care: Minding the Gap.” JAMA Int Med, 173(18):1733-7, 2013.
  2. Laugesen MJ, Wada R, Chen EM. “In Setting Doctors’ Medicare Fees, CMS Almost Always Accepts the Relative Value Update Panel’s Advice on Work Values.” Health Aff, 31(5):965-72, 2012.
  3. Jennings K. “The Secret Committee Behind Our Soaring Health Care Costs.” Politico Magazine, August 20, 2014.
  4. Whoriskey P, Keating D. “How a Secretive Panel Uses Data that Distort Doctors’ Pay.” The Washington Post, July 20, 2013.
  5. Braun P, McCall N. “Methodological Concerns with the Medicare RBRVS Payment System and Recommendations for Additional Study.” Report to MedPAC, 2011.
  6. McCall N, Cromwell J, Braun P. “Validation of Physician Survey Estimates of Surgical Time Using Operating Room Logs.” Med Care Research Review, 63(6):764-77, 2006.
  7. Cromwell J, McCall N, Dalton K, Braun P. “Missing Productivity Gains in the Medicare Physician Fee Schedule: Where Are They?” Med Care Research Review, 67(6):676-93, 2010.
  8. Federal Register, Vol. 76, No. 228, November 28, 2011, p. 73105.
  9. Robeznieks A. “AMA’s RUC Panel to Provide Minutes in Limited Transparency Move.” Modern Healthcare, November 4, 2013.
  10. Federal Register, Vol. 79, No. 133, July 11, 2014, p. 40363.

* The Robert Wood Johnson Foundation Investigator Awards Program funded the research reported here.

 

