BILLING CORNER


Relativity

  • Methodology
  • Correspondence w/ OMA
  • History
  • Resources

Current CANDI Relativity Methodology

Gross Daily Income (GDI): The current CANDI methodology is based on the notion that the relativity between OHIP specialties can be determined by comparing an appropriately adjusted net clinical income per day. Specifically, the methodology starts with the gross daily income (GDI), defined as the average daytime (7AM-5PM), weekday (Monday to Friday) fee-for-service billings per day per physician for each OHIP specialty. This definition excludes weekends, holidays, days with no billings, and after-hours billings during weekdays. The GDI is then adjusted by a set of six modifiers to account for non-fee-for-service sources of income, frequency of billing, hours of work, overhead, and years of training.

Six Modifiers:
1. The Non-Fee-For-Service Modifier adjusts the GDI (which comprises only the professional fee-for-service payments) to also include non-fee-for-service clinical sources of income, such as alternative payment plans (APP), primary care payments (e.g. Capitation and Comprehensive Care Management fee), Hospital On-Call (HOCC) funding, Workplace Safety and Insurance Board (WSIB) claims, non-fee-for-service psychiatric payments (e.g. mental health sessional fees and psychiatric stipends, Assertive Community Treatment, Divested Provincial Psychiatric Hospitals, Ontario Psychiatric Outreach Program), and Paediatric Hospital Stabilization Fund.
2. The Per Diem Modifier accounts for the fact that some fee-for-service codes are paid for services rendered over an extended period of time (e.g. week, month, or year).
3. The Hours of Work Modifier accounts for differences in the actual hours of work during the daytime weekday period.
4. The Overhead Modifier adjusts the GDI for the expenses that physicians incur in earning their clinical income.
5. The Opportunity Cost Modifier adjusts the GDI for income foregone because of differences in the length of medical training between OHIP specialties.
6. The Skill Acquisition Modifier adjusts the GDI for the value of the additional skills gained through that longer training.

Adjusted Daily Income: The GDI, adjusted for these six modifiers, is known as the Adjusted Net Daily Income (ANDI) and is calculated for each OHIP specialty and for specialists as a group (the target ANDI). The relativity position of each OHIP specialty is then determined as the ratio of the specialty’s ANDI to the target ANDI. The most recent CANDI (Comparison of Adjusted Net Daily Income) results, for fiscal 2014/15, are presented in Table 1.
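
In symbols, a schematic reading of the methodology above (a sketch of the structure only; the precise functional form of each modifier is specified in the CANDI methodology documents listed under Resources):

\[
\mathrm{ANDI}_{s} = \mathrm{GDI}_{s}\ \text{adjusted by the six modifiers}, \qquad
\mathrm{Relativity}_{s} = \frac{\mathrm{ANDI}_{s}}{\mathrm{ANDI}_{\mathrm{target}}},
\]

where \(s\) indexes OHIP specialties and \(\mathrm{ANDI}_{\mathrm{target}}\) is the ANDI of specialists as a group. A ratio above 1 indicates a specialty sitting above the specialist average on this adjusted basis; a ratio below 1, the reverse.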


March 11, 2016
Re: CANDI Review - Correspondence from Section on General Surgery to OMA Economics, Research & Analytics

March 11, 2016.

Kate Damberger
OMA Economics, Research & Analytics

Dear Kate,

RE: CANDI Review

The Section on General Surgery offers the following feedback and suggestions for the review of relativity methodology and data.  We have previously made a number of submissions to OMA reviews regarding CANDI, and hope that those will also form part of the review.

  1. We continue to be concerned that work intensity and complexity are not variables in relativity determinations.  We believe these issues are important for achieving a fair fee schedule which compensates those performing more complex clinical work at a higher rate.
  2. We feel strongly that actual years of training rather than minimum years of training should be used in CANDI calculations.  For example, data on recent general surgery graduates reveals that the average years of training is 6.8, while CANDI uses a value of 5 years.  Such information should be relatively easy to determine, and could be recalculated every 5 years or so as it is unlikely to change significantly over short periods of time.
  3. Years of training is particularly relevant for the Section on General Surgery as many of our members are subspecialists and have required significant extra subspecialty training.  For example, hepatobiliary surgeons, colorectal surgeons, pediatric surgeons, and surgical oncologists are members of the General Surgery Section, and these individuals typically require a minimum of 7 or 8 years of training, rather than 5 years.  This is an issue of basic fairness for all Sections in this process.  For example, medical subspecialties have their own Sections, and are not restricted to the 4 years of training in the formula which is the minimum for internal medicine.  Similar equity needs to be extended to the Section on General Surgery which “houses” a number of surgical subspecialties.
  4. A reasonable approach needs to be developed to address billings from fee codes which compensate for 24 hours of care, where the vast majority of the care is delivered during typical weekday daytime hours.  For example, dialysis fees are discounted to 50/168 of their value (the 50 daytime weekday hours out of the 168 hours in a week), even though the vast majority of the physician care is typical daytime work.  By that analogy, all surgical fee codes should be reduced to a similar fraction, as they include bundled care for most of the 2 weeks following surgery.  Thus, one could conclude that only about 30% of the fee is for the daytime weekday care, and 70% of the fee should be discounted as representing after-hours and weekend work (see the arithmetic sketch following this list).  We need to develop a rational mechanism to address this issue in an equitable manner.  The currently used per diem modifier is not equitable and is not an appropriate mechanism to determine the allocation of certain fees to daytime versus after-hours work.
  5. The current opportunity cost modifier disproportionately rewards higher earning specialties, and thus distorts relativity further.  We need a consistent modifier for additional training that treats all Sections equitably.
  6. Analysis should be performed of the typical career span for each specialty, including family practice.  It is likely that in certain specialties, most likely surgical specialties, one cannot continue a full practice beyond a certain age, and the career span may be more limited, with the individual semi-retiring into less remunerative practice.
  7. A better appreciation should be developed for part-time work, in particular to compare compensation when physicians only work a part-day rather than a full day.  Many “full-time” physicians work 8 to 9 hours a day, and then complete charting and administrative work after-hours.  On the other hand, many physicians work a half day of clinical care and then complete such administrative work during the regular day, resulting in a calculation of 8 or 9 hours of work.  A consistent approach needs to be developed to define clinical work and measure the hours of such work.
  8. We strongly believe that the review of data used in relativity calculations needs to be comprehensive and needs to include all the key elements, including income, overhead, and hours of work.  The OMA has been continually updating income data over the past 5 years, and has made some minor adjustments to overhead data, but the hours of work data has not been reviewed or updated.  We believe that income data is relatively easy to obtain, often through OHIP data taking into account after-hours modifiers, and overhead data can also readily be obtained and can often be modelled to ensure reasonable overhead calculations.  However, hours of work data should be obtained in an observational manner and based on consistent definitions of what constitutes clinical work.  We believe this can be obtained readily and at a reasonable expense by utilizing basic research methodology.  A randomized approach should be used to select a representative sample of a Section's membership for observational study to record hours of work in a typical week, and to compare such work hours to the OHIP billings data from that week.  The observational work can be conducted by students involved in a research endeavour for the OMA.  Patient confidentiality and privacy can be readily assured.  Section leaders should support mandatory participation for randomly selected clinicians.  The random selection of participants would enhance the validity of the data compared to the current self-reported data of self-selected Section members, with much of the data determined from the participation of only one or two Section members.
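
For reference, the arithmetic behind the dialysis example in item 4: the daytime weekday window is 10 hours (7 a.m. to 5 p.m.) on each of 5 weekdays, or 50 of the 168 hours in a week, so

\[
\frac{50}{168} \approx 0.298 \approx 30\%.
\]

Applying the same fraction to bundled surgical fees is what yields the roughly 30% daytime weekday share (and 70% discount) cited above.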

We strongly support the goals to improve and ultimately achieve relativity for all physicians.  We believe the OMA's current initiatives can and should be improved.  We would be happy to provide further input and engage in a discussion of the items and concerns we have raised, and of the issues raised by others.  We hope and expect there will be a thorough review of all the methodology and data issues related to CANDI and relativity.

Sincerely,
Jeff Kolbasnik, Chair, OMA Section on General Surgery
Chris Vinden, Vice-Chair, OMA Section on General Surgery

January 21, 2016
Re: CANDI Review - Correspondence from OMA President Mike Toth to Section on General Surgery

January 21, 2016

Dear Section Chair,

The issue of income relativity continues to be a significant issue for physicians and the Ontario Medical Association.
At the Fall 2015 meeting of Council, a motion was passed requesting additional research on relativity including soliciting written input from all Sections on the current relativity methodology used by the OMA - the CANDI (Comparison of Adjusted Net Daily Income) methodology.

I kindly ask that your Section Executive review the CANDI methodology, which has six main components listed below, and provide us with your comments on whether they are satisfactory or require further refinement:
  • Gross Daily Income
  • Non-FFS Modifier
  • Overhead Modifier
  • Hours Modifier
  • Training Modifier
  • Skill Acquisition Modifier

Your general input on the CANDI methodology is welcome. For your reference, the attached Appendix and links below will direct you to a number of CANDI relativity methodology resources on our OMA Members website:
1. A description of the CANDI Methodology in full detail.
https://www.oma.org/Member/Resources/Documents/CANDIReport2009.pdf
2. An update to the CANDI methodology incorporating the results of the PricewaterhouseCoopers survey and OMA Sections’ input, undertaken by the CANDI Relativity Implementation Committee.
https://www.oma.org/Member/Resources/Documents/CRICRelativityMethodologyReviewApril2012.pdf
3. The October 2015 CANDI methodology technical update undertaken by ERA.
https://www.oma.org/Member/Resources/Documents/TechnicalCANDIRelativit%20Methodology.pdf
4. All other documents relating to Relativity can be found on our relativity home page.
https://www.oma.org/Member/Resources/AgreementCentre/Pages/Relativity.aspx

In order to enable us to report to Spring 2016 OMA Council, we ask that you forward us your feedback in writing no later than March 11th, 2016. Please send your submission to Kate Damberger, OMA Economics, Research & Analytics, at ....

Sincerely,
Dr. Michael Toth
President


Appendix: Current CANDI Relativity Methodology
The current CANDI methodology is based on the notion that the relativity between OHIP specialties can be determined by comparing an appropriately adjusted net clinical income per day. Specifically, the methodology starts with the gross daily income (GDI), defined as the average daytime (7AM-5PM), weekday (Monday to Friday) fee-for-service billings per day per physician for each OHIP specialty. This definition excludes weekends, holidays, days with no billings, and after-hours billings during weekdays. The GDI is then adjusted by a set of six modifiers to account for non-fee-for-service sources of income, frequency of billing, hours of work, overhead, and years of training.

The Non-Fee-For-Service Modifier adjusts the GDI (which comprises only the professional fee-for-service payments) to also include non-fee-for-service clinical sources of income, such as alternative payment plans (APP), primary care payments (e.g. Capitation and Comprehensive Care Management fee), Hospital On-Call (HOCC) funding, Workplace Safety and Insurance Board (WSIB) claims, non-fee-for-service psychiatric payments (e.g. mental health sessional fees and psychiatric stipends, Assertive Community Treatment, Divested Provincial Psychiatric Hospitals, Ontario Psychiatric Outreach Program), and Paediatric Hospital Stabilization Fund. The Per Diem Modifier accounts for the fact that some fee-for-service codes are paid for services rendered over an extended period of time (e.g. week, month, or year). The Hours of Work Modifier accounts for differences in the actual hours of work during the daytime weekday period, while the Overhead Modifier adjusts the GDI for expenses that physicians incur for earning their clinical income. The last two modifiers, the Opportunity Cost Modifier and the Skill Acquisition Modifier adjust the GDI for foregone income and the value of additional skills, respectively, because of differences in the length of medical training between OHIP specialties.

The GDI, adjusted for these six modifiers, is known as the Adjusted Net Daily Income (ANDI) and is calculated for each OHIP specialty and for specialists as a group (the target ANDI). The relativity position of each OHIP specialty is then determined as the ratio of the specialty’s ANDI to the target ANDI. The most recent CANDI results, for fiscal 2014/15, are presented in Table 1.

March 22, 2012
Re: Relativity, CANDI, and CANDI Studies - Correspondence from Section on General Surgery to CRIC (CANDI Relativity Implementation Committee)

March 22, 2012.

To: CRIC members
RE: CRIC Draft Report

We have reviewed CRIC’s draft report and offer the following comments.

We are very dismayed that CRIC has rejected practically all of the substantive concerns and recommendations offered not only by our Section but also by many other Sections.  Incorporating hours of work data into the ultimate relativity calculation was always an expectation once the data had been collected, and the other two changes recommended by CRIC are minor.  We feel strongly that serious issues regarding methodology and data quality have been raised, and are appalled that these have been roundly ignored and dismissed by your committee.

It seems pointless to raise further concerns, but we will mention a few.  The opportunity cost modifier and skills acquisition modifier are important issues for most specialty groups, and the minimum Royal College years of training do not adequately measure true length of training.  The CAPER data the committee refers to does not appropriately incorporate true length of training.  In our Section, many members train well beyond 5 years; most subspecialists, such as colorectal surgeons or hepatobiliary surgeons, routinely train an additional 2 years beyond their 5 years of general surgery, and are full members of our Section.  Many others in academic practices obtain even more training.  Furthermore, the committee’s use of the median number of years of training rather than the mean is misleading.  To exaggerate the point, if 51% of physicians train for 5 years and 49% train for 25 years, the median years of training remains 5 years while the mean length of training is closer to 15 years.  Also, the opportunity cost and skills acquisition modifiers do not adequately measure the benefit of wealth accumulated early in one’s career, and the compounding that occurs over time.  These modifiers need to be adjusted.
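
Making the arithmetic of that illustration explicit (using the hypothetical split given above):

\[
\text{mean} = 0.51 \times 5 + 0.49 \times 25 = 14.8\ \text{years}, \qquad \text{median} = 5\ \text{years},
\]

since more than half of the physicians in the example train for exactly 5 years.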

It is clear the committee has now rejected PricewaterhouseCoopers’ assertion that the data in their report is accurate and highly reliable.  The committee has decided not to use the PwC data for income, and our own Section’s submission has objectively demonstrated how flawed the income data is.  However, the committee subsequently rejects the view of most Sections that the low response rate to the survey raises serious doubts regarding the accuracy of the data, and insists on cherry-picking which data from the report to rely on.  In particular, our Section has serious concerns about the hours of work data.  While general surgeons are widely felt to be amongst the hardest-working specialists, with amongst the longest hours of work (including daytime 7 am till 5 pm work), the PwC data suggests we work far fewer hours than most of our colleagues.  We are convinced this data is inaccurate, and has severely disadvantaged our specialty.  CRIC cannot acknowledge doubts regarding the accuracy of the data in the report, and then rely on this data for such critical calculations.

We would remind CRIC members that PricewaterhouseCoopers were also the auditors for ORNGE, the provincial air ambulance system now mired in financial and operational scandal.  Tens of millions of dollars remain unaccounted for in a forensic audit of its finances.  We cannot understand the blind trust that CRIC members have given PricewaterhouseCoopers and its work, particularly when this company has failed to maintain public trust in its duties with another provincial health care company.

We believe a concerted effort to obtain accurate data on both hours of work and true opportunity cost is necessary and should be actively and urgently pursued.  We believe this can be accomplished in an effective and low-cost manner.  We feel strongly that the efforts to address relativity thus far have been inadequate and based on inaccurate data, and that a thorough and reliable approach needs to be pursued.

Sincerely,

Angus Maciver, Chair, Section on General Surgery
Jeff Kolbasnik, Vice Chair, Section on General Surgery
Chris Vinden, Tariff Chair, Section on General Surgery

January 7, 2012
Re: Relativity, CANDI, and CANDI Studies - Correspondence from Section on General Surgery to CRIC (CANDI Relativity Implementation Committee)

January 7, 2012.

From: The OMA Section on General Surgery
RE: Relativity, CANDI, and CANDI Studies

We are writing to provide our input to the current deliberations on relativity, and specifically the methodological basis for the CANDI formula and the data and conclusions derived in “The OMA Study of Income, Overhead and Hours Worked”.  Our Section has significant concerns on how the OMA has handled each of these issues.

CANDI Methodology

1) The CANDI methodology is a purely income-based approach to relativity.  It assumes that the activities of physicians from all specialties and family practice are worth equivalent amounts, after modifying for factors such as overhead and length of training.  In effect, the methodology discounts any notion of complexity and work intensity.  We disagree with this approach.  We believe a fee-for-service approach which recognizes differences in complexity, intensity, and risk for various services, or at least various types of services, is far more appropriate.  The OMA has argued that Sections can allocate higher fees to various types of services in an intrasectional manner.  For example, surgical sections may choose to make operative fees relatively higher valued than office work fees, to account for differences in complexity and work intensity.  However, this creates a situation where the office work of surgical specialists would be paid at a lower level than the office work of others, particularly those who are exclusively office-based.  Taken to its logical conclusion, the current approach would pay surgeons and surgical assistants a similar hourly fee, while obviously the intensity, complexity, and risk are far higher for the surgeon.

2) We agree that non-fee-for-service payments, such as HOCC and capitation payments, should be included in the calculation of income.  However, such changes to the calculation of income are not unique to the CANDI methodology, and could be easily adjusted in other methodologies such as RVIC.

3) We do not agree that the training period opportunity cost should be calculated based solely on the minimum years of training by specialty as required by the Royal College.  In many specialties, additional training is expected if not required for many work opportunities.  For example, in general surgery, a number of subspecialties exist, such as colorectal and hepatobiliary surgery, and these routinely require an additional 2 years of fellowship training.  Many surgeons in academic practice undertake not only fellowship training but additional graduate program training.  Even surgeons in community practice often undertake an additional 1 to 2 years of training to be able to secure jobs.  We believe a true accurate length of training modifier can be easily derived by assessing the actual length of training undertaken by physicians in practice.

4) We furthermore believe that a real assessment of opportunity cost should consider the length of career within a specialty rather than merely length of training.  For most surgical specialties, the physical rigors of the job limit career length, and with advancing age, many surgeons are forced to limit their practice to less remunerative activities, such as minor procedures or surgical assisting, or to retire earlier than one would in a specialty that did not have the physical rigors of a surgical career.

5) We have additional concerns with the methodology used in the determination of opportunity cost.  We do not believe a simple comparator to income in family practice is appropriate.  Furthermore, the formula fails to account for interest and inflation.  Using a discount rate of 5%, as in most academic papers assessing opportunity cost, the average opportunity cost of an additional 3 years of training would be 19.5% rather than the 8.1% calculated for the CANDI methodology.  
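
As a simple illustration of why the discount rate matters (the formulas behind the 8.1% and 19.5% figures are not reproduced in this letter, so this shows only the compounding effect): each dollar of income foregone during training, compounded at 5% per year over a 30-year career, grows to

\[
1.05^{30} \approx 4.32
\]

times its nominal value. Ignoring interest and inflation therefore substantially understates the true opportunity cost.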

6) While we are ambivalent regarding CANDI’s omission of income from private sources (e.g. cosmetic surgery), we do not believe the overhead calculations appropriately account for the overhead related to these services.  For physicians providing both publicly-funded and privately-funded services, a substantial portion of the overhead may be related to the latter, yet in the CANDI formula and data, it may all be apportioned to the public side.

7) We again note our concern that over the past few years, relativity funds have been apportioned purely based on daytime income, without adjustment for hours of work.  This approach has specifically disadvantaged physicians who work the longest hours.

8) We do not agree with the exclusion of after-hours income and after-hours work data from relativity calculations.  Such after-hours work is a well-recognized component of medical practice.  It is integral for emergency medicine and for major services providing on-call work.  For some Sections, it is a major component of income.  Some Sections, such as anesthesia and obstetrics, often adjust clinical practice such that physicians do not work the day post-call, to account for the substantial component of clinical work delivered after-hours.  We believe this income should be included in income calculations, with an accompanying adjustment for hours worked and a modifier to account for those hours being worked at “unsociable hours”.

9) The current CANDI methodology, and the subsequent studies, do not adequately account for work done after-hours but associated with daytime income.  For example, many physicians provide after-hours MRP care to hospitalized patients, yet do not earn any extra income for this work.  Many others perform work such as dictating clinic notes after-hours, even though this is an integral part of the service for which income is generated and considered daytime income.  These issues are most acute for surgical specialists; given the constraints of providing much of our hospital-based work during regular daytime hours, surgeons frequently carry out activities such as dictating notes or seeing in-patients after-hours, without any accompanying recognition of these work hours in the CANDI methodology or in The Study.  The three authors of this response estimate that we each provide approximately 1-2 hours of work per day which is not recognized as work hours in this methodology.  We believe the CANDI methodology and its application are specifically biased against surgical practice.

OMA Approach to Obtaining Data on Income, Overhead, and Hours Worked

The Section on General Surgery has participated actively in the OMA’s activities and deliberations regarding relativity over the past few years, both at a Section level and through our involvement with the Surgical Assembly.  We met a number of times with the RVIC Methodology Review Working Group.  We expressed our concerns regarding the CANDI methodology.  We considered, and agreed with, the 3 main criticisms the Working Group had heard from Sections regarding the RVIC methodology and data, those being that:

1) The data is self-reported and not verified as accurate.
2) The data is now old.
3) Many income sources were not included.

We completely agreed that the data needed to be updated and more current, and that a broader range of income sources needed to be included in the calculation.  Please note that these changes did not require developing a new methodology such as CANDI, and that RVIC could be adjusted for these.

Most significantly, though, we agreed that the data cannot be self-reported, and should be objectively obtained in a manner to ensure accuracy.  We were assured by the Working Group that such objective data collection would be done.  The Working Group postulated that such an exercise would likely cost several million dollars, and would involve real-time observations and audits of physician practices.  There seemed to be broad agreement that while this exercise would be complex and costly, it was a worthwhile investment and absolutely necessary given that hundreds of millions of dollars were being allocated annually based on relativity calculations.

We were thus horrified to learn, just prior to a Council meeting, that the OMA Board had already hired a consulting firm to conduct these studies, with a budget of only $600,000, and that the studies would be conducted again as a self-reported survey rather than as objective observation and data collection.  Debate at Council to reconsider the data collection process was ruled out of order, as the OMA had already signed a binding contract with PricewaterhouseCoopers which stipulated the nature of the data collection.  We were very upset that such a major change in study methodology was approved by the OMA Board without additional input from Sections.

It is also worthwhile to note that the other consulting firms which expressed an interest in the project ultimately advised the OMA Board that it was not possible to obtain accurate, valid, and objective data on income, overhead, and hours of work for the limited budget which the OMA assigned to this project.

Furthermore, the OMA has refused to share the nature of the commitments and obligations agreed to by PricewaterhouseCoopers in its contract for carrying out these studies, despite assurances from OMA legal counsel that the contract is not confidential and can be shared with physicians and even the public at large.  It is important to know whether the OMA Board set performance expectations for PwC, and whether those performance obligations were met.  We would consider the OMA Board to have been negligent if such obligations were not set in the contract, and we would consider PricewaterhouseCoopers to not have fulfilled the contract if such obligations were set and not met.  Most significantly, it is important to know whether PwC and the OMA Board agreed to a minimum physician participation rate in the study in order to ensure accuracy and validity, and whether this participation target was met.  We would again suggest that the OMA Board would have been negligent if it agreed that the extremely low participation rates achieved in the studies were adequate to ensure accuracy. 

We also note that the OMA Board and the CANDI Relativity Implementation Committee (CRIC) have been excessively protective of PwC, refusing to force them to specifically account for statements and assurances of data accuracy, reliability, and generalizability.  Our requests to have a PwC representative attend Assembly meetings to discuss these studies have been refused, despite the fact that CRIC members and OMA Economics staff have not been able to answer specific questions related to these studies.  We have even had the CRIC Chair blame physicians in general for any inaccuracies in the studies, rather than PwC or the OMA Board.  It is her opinion that the lack of physician participation in the studies has nothing to do with the ambiguities and complexities in the survey and subsequent site visits, despite warnings from our own Section and others that the PwC approach would not work.  If indeed the payment to PwC depended on achieving performance targets, we should be even more skeptical of their repeated and unsubstantiated comments that the data in their report is of “high quality”.  We find it remarkable that the OMA Board has allowed PricewaterhouseCoopers to rate the quality of its own work, particularly when its payment may be dependent on such quality, without an independent audit and assessment of its work.

It is also noteworthy that the OMA Board refused to give Sections access to the raw data used by PwC to make their calculations and reach their conclusions.  Many of us believe this data, and its interpretation, is inaccurate, yet the lack of access to such data limits our ability to present these concerns.  Even the Chair of OMA Council commented to one of us that surely the OMA Board would have to give Sections access to such data in order to allow meaningful concerns to be raised.  The Board’s interpretation that such access would compromise assurances of confidentiality for study participants is absurd.  Physician identifying data would certainly be removed from the data presented.  Indeed, the Board’s assertion would mean that the Board itself breached such confidentiality when it presented the report, which included data from 7 Sections where only a single physician’s practice was assessed.  We are concerned that the OMA Board wishes to limit access and transparency in order to limit criticism of its own work.

We would also note that the studies contemplated two years ago were meant to provide definitive data on which to base relativity allocation decisions.  Even the OMA Board is now backing away from endorsing the PwC data, calling the studies “another piece of the puzzle” and some “additional data” to consider.  Several members of CRIC and the Board have commented that some of the data in the report doesn’t pass the “sniff test”.

The Studies and the Final Report

We reiterate our grave concern that these studies were carried out as a self-reported survey, rather than as direct observational studies.  The surveys themselves were tremendously flawed and ambiguous, and did not capture the true nature of medical practice.  There are many models of direct observational studies that could have been pursued, many at far lower cost than was incurred by paying a high-priced consulting firm to carry out inadequate work.  The subsequent “validation” site visits are a misnomer, as the only validation activity carried out during these visits was confirming that the physician provided the answers reported on his or her survey.  No actual validation of hours worked was carried out.  The only objective income data element obtained was the gross income earned, without an understanding of how much of it was due to publicly-funded services, and how much may have been earned after-hours.  The only objective overhead expense data element obtained was total expenses reported on tax returns, again with no understanding of how much of this overhead may not be related to publicly-funded services.  We thus cannot have confidence in the accuracy or validity of the data presented.

The participation and sample size of physicians surveyed is a major concern.  PwC states that 8.7% of physicians participated.  However, when one reviews Table 15 (p. 75), it is evident that a number of surveys were then excluded, bringing the true participation in the survey portion down to about 7.5%.  Then, when physicians were contacted to arrange site visits, one-third refused the site visits, and another one-third didn’t even return the call.  Thus, with only the remaining one-third completing site visits, the true participation rate in the study is only about 2.5%.  It is important to note also that this is not a random sample, but rather self-selected participation.  It should be self-evident to all physicians that such a low participation rate would leave the accuracy of the results in doubt.  Furthermore, the data needs to be assessed on a Sectional level, and thus the participation for a number of Sections is far lower than even 2.5%.  PricewaterhouseCoopers continues to insist the participation rate is sufficient to be confident the data is accurate, but we would note that in the Negotiations Survey presented to Physician Leaders at the October 15/16 OMA meetings, our consultant for that survey claimed that the 13% physician participation rate was inadequate to draw any accurate conclusions at a Section level.  Most of us would find the latter assertion far easier to believe.  The suggestion that a 2.5% participation rate is adequate to ensure data accuracy seems absurd.
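
Stated compactly, the participation arithmetic described above is:

\[
8.7\% \;\xrightarrow{\text{excluded surveys}}\; \approx 7.5\% \;\xrightarrow{\times 1/3 \text{ completing site visits}}\; \approx 2.5\%.
\]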

We would point out that in the Board’s own presentation to OMA Council in November, 2009, the Board cautioned that “regardless of methodologies used (surveys, audits, focus groups, etc.), results will be based on samples – may have only few dozen observations per Specialty”.  Yet in The Study, only two Sections (Anesthesia and Family Practice) had even two dozen physicians participate.  Twenty-two Sections had fewer than a dozen participants, despite the fact that a number of Sections were even grouped together.  We thus suspect that even the OMA Board never imagined that the study would be based on such low participation and that the study results would be so suspect.

We would further recommend that CRIC and OMA Board members read the article “Users’ Guide to the Surgical Literature: How to Assess a Survey in Surgery”, published in the December, 2011 edition of the Canadian Journal of Surgery.  This guide is valuable not only for surgical studies, but any survey-based studies.  Quoting from the article: “Many investigators consider a response rate of 70% adequate for generalization to the target population, though this may vary according to the purpose and nature of the study.  Some investigators consider a response rate between 60% and 70% (or less than 60% for controversial topics) acceptable.”  Please compare that to the response rates of less than 10% in the PwC studies.

PricewaterhouseCoopers suggests in Appendix C that the site visits were selected to obtain a broad representation of Sections and geographic locations.  However, it is evident that nearly every physician who participated in the surveys had to be contacted to obtain the 390 physicians who ultimately agreed to site visits.  PwC states that “OHIP Specialties were allocated a minimum of 5 visits”, yet 14 of the 32 Sections had fewer than that, including 8 Sections (25% of all Sections) where no physician or only a single physician was involved in a site visit.  That leaves the data obtained very suspect in terms of its generalizability to all physicians in those Sections.

We are further concerned that there are areas of the report which seem to be deliberate misrepresentations of the data.  For example, Table 6 (pp. 33-34) compares the income data from the survey to that used in CANDI thus far.  There are many dramatic variances to either the positive side or the negative side, ranging from -26.9% to +69.2%.  PwC concludes that the two sets of data are similar, with a variance of less than 4%.  However, that is a cumulative variance across all the Sections, in which the negative variances cancel out many of the positive ones.  The true average variance, taken in absolute terms (whether positive or negative), is 21.8%.  Thus, there is significant discrepancy even in the basic income data between the PwC data and the objectively-obtained OHIP data used in CANDI.  For our Section, PwC overstates the income by 19.2%.  We are concerned that the PwC assertion is a deliberate misrepresentation, and may be indicative of a pattern of other misrepresentations in their report and presentations.
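
The distinction between a signed average and an average of absolute values can be shown in a few lines of code.  The numbers below are invented for illustration (they are not the Table 6 figures); the point is that signed variances can cancel while the absolute discrepancies remain large:

    # Hypothetical per-Section income variances (%), invented for illustration.
    variances = [-26.9, 10.0, -15.0, 30.0, 5.9]
    # Signed mean: positives and negatives cancel, making the data look "similar".
    signed_mean = sum(variances) / len(variances)                    # +0.8
    # Mean absolute variance: the actual size of the discrepancies.
    absolute_mean = sum(abs(v) for v in variances) / len(variances)  # 17.6
    print(f"signed mean:   {signed_mean:+.1f}%")
    print(f"absolute mean: {absolute_mean:.1f}%")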

For our own Section’s data, we suspect the hourly overhead amount of $68 is fairly accurate, but we believe the hourly income of $267 is too high and the clinical hours per day of 7.8 is too low.  On the income side, even assuming the 7.8 hours per day is accurate, that would equate to an income of $2,082.60 per day, $10,413.00 per week, and $489,411 per year (for a 47-week work year).  Since this number is supposed to represent only daytime income, adding approximately 18% for general surgeons’ after-hours work would mean a total income of about $577,504.98 per year (see the arithmetic reproduced below).  Yet we know from OMA Economics data that the average general surgeon’s billing income is $373,528.56.  This represents a difference of over $200,000.00!  Additional non-fee-for-service income, such as HOCC and APP income, would account for only a tiny portion of this difference.  Thus, the income calculation for our Section by PwC is bound to be inaccurate.  If an accurate figure for hours of work (certainly more than 7.8) were used, this income miscalculation by PwC would be even more dramatic.
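
The letter’s income arithmetic, reproduced step by step (every figure below is quoted in the paragraph above):

    # PwC figures quoted above: $267/hour, 7.8 clinical hours/day, 47-week year,
    # with approximately 18% added for after-hours work.
    hourly_income = 267.00
    hours_per_day = 7.8
    daily = hourly_income * hours_per_day             # $2,082.60 per day
    weekly = daily * 5                                # $10,413.00 per week (5 weekdays)
    yearly_daytime = weekly * 47                      # $489,411.00 per year (47 weeks)
    total_income = yearly_daytime * 1.18              # $577,504.98 incl. ~18% after-hours
    oma_average_billings = 373_528.56                 # OMA Economics average billings
    print(f"gap: ${total_income - oma_average_billings:,.2f}")   # ~ $203,976.42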

In terms of hours of work, the report suggests general surgeons work half-an-hour less than the rest of our surgical colleagues, and in fact even less than anesthetists and gastroenterologists (two of our easiest non-surgical comparators).  We cannot believe that assertion is accurate, based on our and our members’ clinical experiences.  We also note from Table 25 (pp. 90-92) that while the mean hours of work for our Section is 7.8, the median is 8.3.  This difference reinforces our contention that with such small sample sizes, a few non-representative data elements can dramatically skew the results.  We are convinced that general surgeons work more hours per day than what is presented in the report.
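
A small hypothetical sample illustrates how a few low values can pull the mean below the median (the figures below are invented; only the 7.8 mean and 8.3 median come from the report):

    import statistics

    # Invented daily clinical hours: most days near 8.3, with two low outliers.
    hours = [8.3, 8.5, 8.4, 8.2, 8.3, 4.5, 5.0]
    print(f"mean:   {statistics.mean(hours):.1f}")    # 7.3 -- dragged down by outliers
    print(f"median: {statistics.median(hours):.1f}")  # 8.3 -- unaffected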

We furthermore cannot understand why the data for general surgery and vascular surgery are being pooled.  We were assured that would not occur, and indeed there is no reason why the data cannot be presented and assessed separately.

We are certain the aforementioned concerns represent only a portion of the inaccuracies in the report by PwC.  Their methodology for the study was simply too unreliable, and the participation rate too low, for there to be any confidence in their conclusions.

We are disturbed as well by the deliberate effort of PricewaterhouseCoopers to “sell” their report as accurate and reliable.  The very first page of the report is devoted to 3 glowing quotes, attributed to un-named physicians, praising the studies and CANDI methodology.  Nowhere do we find the constructive criticisms of the studies, including those expressed at OMA Council, and those indirectly expressed by the 90%+ of physicians who refused to participate in these studies.  We thus do not believe the PwC report is fair, balanced, or objective.  We believe that PwC intentionally overstates the accuracy and reliability of their conclusions.

We are furthermore concerned by the clear and deliberate public misrepresentations of this work by OMA leadership.  Most recently, in the December 13, 2011 issue of “The Medical Post”, OMA President Dr. Stewart Kennedy is quoted misrepresenting the relativity studies as follows: “In the past, physicians were concerned the data were not a good reflection of their hours worked.  So we did a real-time thing, where someone from PwC (PricewaterhouseCoopers) comes to your office and sees how you work.”  Dr. Kennedy knows full well that no real-time study was done, and that PwC did not observe physicians working in their office but rather spoke to them regarding their survey responses.  Dr. Kennedy claims the studies give “good objective data”, which they do not (the data is not objective), and claims “you need the hardest, most accurate data you can achieve, so that’s what we are working on”, despite the fact that the OMA Board has rejected studies which would have obtained hard, accurate data and instead proceeded with a self-reported survey.  There have been previous misrepresentations of the OMA’s current work on relativity, particularly in “The Medical Post”.

Next Steps

Relativity is a real and critical issue for our profession.  Significant money from OMA-Ministry Master Agreements is likely to be devoted to relativity; in the past three years, over a billion dollars has been allocated based on relativity calculations.  Relativity is an issue that impacts not only physician income, but may impact specialty choices, physician distribution, and practice patterns.

It is thus critical that we achieve a fair, accurate, and reliable assessment of relativity.  We believe the data from the current studies is inaccurate and cannot be relied upon.  We believe the OMA should use this opportunity to review the methodology of the CANDI formula and to correct the inadequacies of this approach.  We believe additional elements of relativity need to be examined, and some current elements need to be refined.  We then believe the OMA should devote adequate resources to robust studies of the key elements of relativity, those being income, overhead, and hours worked, in order to obtain accurate and reliable data for use in relativity calculations.  We are hopeful that physicians will be supportive of a meaningful effort to obtain accurate data to ensure fair treatment of all members of our profession.

Respectfully submitted,

Angus Maciver, Chair, Section on General Surgery
Jeff Kolbasnik, Vice Chair, Section on General Surgery
Chris Vinden, Tariff Chair, Section on General Surgery


OMA Section on General Surgery
Submission Regarding Relativity, CANDI, and CANDI Studies
(Dr. J. Kolbasnik – Jan.7, 2012)

Executive Summary

1) Relativity is a critical issue for our profession.
2) OMA activities on relativity over the past 4 years have failed to advance this issue.
3) The CANDI methodology is flawed in many aspects.
4) The CANDI methodology and subsequent work has failed to address the main flaw of prior relativity data, that being that the data was largely self-reported and not verified.
5) The OMA failed to deliver on earlier commitments to gather data in an objective and verifiable manner, specifically through direct observation of physician work activities.
6) The OMA failed to commit adequate funds and resources to ensure accurate studies and determination of physician income, overhead, and work hours.
7) The OMA Board engaged PricewaterhouseCoopers to carry out studies which it knew or should have known would be inaccurate and unreliable.
8) The OMA Board either failed to negotiate an appropriate contract with PricewaterhouseCoopers, or failed to enforce its terms, in a way that would ensure physicians obtained valuable and accurate data from the work that PricewaterhouseCoopers was engaged to carry out.
9) The OMA Study of Income, Overhead, and Hours Worked (“The Study”) was carried out as a self-reported survey, without verification of the data submitted.
10) The subsequent Site Visits failed to validate the data submitted.
11) The Study failed to yield an adequate participation or sample size to draw meaningful conclusions.  One quarter of all Sections analyzed had data from only a single physician.
12) Some of the data in The Study can be specifically and objectively shown to be inaccurate.
13) Much of the other data in The Study does not pass the “sniff test”.
14) PricewaterhouseCoopers has misled OMA Council, or exaggerated to OMA Council, regarding the reliability and accuracy of a number of elements of The Study.
15) The OMA owes physicians a concerted, well-resourced, and reliable effort to deal with relativity issues.  The current effort has failed to advance improvements in relativity, and threatens to create other inequities.

Below is an excerpt from the OMA website which outlines the Relativity history and the efforts of the OMA.

CURRENT: In the latest tentative Physician Services Agreement (July 11, 2016), the OMA and MOHLTC promise that the problems with relativity will be addressed: "To manage the PSB (Physician Services Budget), there will be a modernization of the OHIP Schedule of Benefits (SOB) and other payments, with $100 million in permanent reductions in each of fiscal years 2017-18 and 2019-20, and based on relativity and appropriateness, through the co-management process."

The Post-2015 Review (2015 - )
At the end of the abeyance period, the Council tasked the Economics, Research and Analytics (ERA) department to conduct a technical review of the CANDI methodology.  At its Fall 2015 meeting, the Council further tasked the ERA to conduct a survey of Section leadership on their views of relativity in general, and to solicit written input from all Sections on the current CANDI methodology.

The Abeyance Period (2012 - 2015)
At its Spring 2012 meeting, the Council passed the motion “that a review of the CANDI Relativity Methodology be initiated in three years.”

The CANDI Relativity Implementation Committee (2009-2012)
The mandate of this committee was to implement the recommendations from the RVIC Working Group, which included the initiation and oversight of research studies on income, overhead, and hours of work.

The RVIC Methodology Review Working Group (2008-2009)
The mandate of this working group was to review the RVIC methodology.  This working group developed a new methodology known as the Comparison of Adjusted Net Daily Income (CANDI), which was approved by Council at the fall 2009 meeting and is currently used for relativity purposes.

Source: OMA Relativity (gated); OMA tPSA (gated)
