BILLING CORNER


Relativity

  • Methodology
  • Correspondence w/ OMA
  • History
  • Resources

Current CANDI Relativity Methodology

Gross Daily Income (GDI): The current CANDI methodology is based on the notion that the relativity between OHIP specialties can be determined by comparing an appropriately adjusted net clinical income per day. Specifically, the methodology starts with the gross daily income (GDI), defined as the average daytime (7AM-5PM), weekday (Monday to Friday) fee-for-service billings per day per physician for each OHIP specialty. This definition excludes weekends, holidays, days with no billings, and after-hour billings during weekdays. The GDI is then adjusted by a set of six modifiers to account for the non-fee-for-service sources of income, frequency of billing, hours of work, overhead, and years of training.

Six Modifiers:
1. The Non-Fee-For-Service Modifier adjusts the GDI (which comprises only the professional fee-for-service payments) to also include non-fee-for-service clinical sources of income, such as alternative payment plans (APP), primary care payments (e.g. Capitation and Comprehensive Care Management fee), Hospital On-Call (HOCC) funding, Workplace Safety and Insurance Board (WSIB) claims, non-fee-for-service psychiatric payments (e.g. mental health sessional fees and psychiatric stipends, Assertive Community Treatment, Divested Provincial Psychiatric Hospitals, Ontario Psychiatric Outreach Program), and Paediatric Hospital Stabilization Fund.
2. The Per Diem Modifier accounts for the fact that some fee-for-service codes are paid for services rendered over an extended period of time (e.g. week, month, or year).
3. The Hours of Work Modifier accounts for differences in the actual hours of work during the daytime weekday period.
4. The Overhead Modifier adjusts the GDI for the expenses that physicians incur in earning their clinical income.
5. The Opportunity Cost Modifier adjusts the GDI for income foregone because of the longer medical training required by some OHIP specialties.
6. The Skill Acquisition Modifier adjusts the GDI for the value of the additional skills acquired during that longer training.

Adjusted Daily Income: The GDI, adjusted for these six modifiers, is known as the Adjusted Net Daily Income (ANDI) and is calculated for each OHIP specialty and for specialists as a group (the target ANDI). The relativity position of each OHIP specialty is then determined as the ratio of the specialty’s ANDI to the target ANDI. The most recent CANDI results (Comparison of the Average Net Daily Income), for fiscal 2014/15, are presented in Table 1.
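A minimal sketch of the ANDI calculation described above. The modifier values and the $2,000 GDI are hypothetical, and the assumption that the six modifiers apply multiplicatively is ours, not a statement of the published CANDI method:

```python
# Illustrative sketch only: modifier values and the GDI are hypothetical,
# and multiplicative application is our assumption, not the published method.

def adjusted_net_daily_income(gdi, modifiers):
    """Apply a set of modifiers to a gross daily income (GDI)."""
    andi = gdi
    for value in modifiers.values():
        andi *= value
    return andi

hypothetical_modifiers = {
    "non_fee_for_service": 1.10,  # folds in non-FFS clinical income
    "per_diem": 0.98,             # corrects codes paid over extended periods
    "hours_of_work": 1.02,        # normalizes daytime weekday hours
    "overhead": 0.72,             # nets out practice expenses
    "opportunity_cost": 1.04,     # foregone income during extra training
    "skill_acquisition": 1.03,    # value of additional skills
}

specialty_andi = adjusted_net_daily_income(2000.0, hypothetical_modifiers)
target_andi = 874.0  # the all-specialist ANDI figure quoted later in this document

relativity_position = specialty_andi / target_andi
```

A relativity position above or below 1.0 would place the specialty above or below the all-specialist target.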


October 15, 2017
Re: OMA Relativity Review Initiative Section Consultation Q&A - Section on General Surgery
The RRC has reviewed previous submissions specific to your section, from 2009 up to October 2017, and has the following questions regarding your section’s concerns. The RRC is looking for solutions and recommendations for the issues that you have raised in your previous submissions. Please provide answers to the following questions in your presentation.... (The Section on General Surgery sent three representatives - Drs. C. Vinden, A. Wei, and P. Dauphinee - to meet with the OMA Relativity Review Committee on October 16, 2017.)

OMA Q1: In your submissions, you have identified the need to adjust for complexity, intensity and risk in a relativity model. Please explain how you would specifically make these adjustments and what data would be required to make them.
GenSurg A1: There needs to be a substantial range in the value of the complexity and intensity modifiers. The combination of these modifiers, when applied together, needs to be large enough to bring an average physician's income down to the level of a nurse practitioner's income, on the assumption that low-complexity, low-intensity tasks can be delegated to nurse practitioners, and that a physician who did only tasks that could be delegated to a nurse practitioner should earn an equivalent income adjusted for expenses, pension, etc. A nurse practitioner's ANDI is about $500 per day and the average physician's ANDI is $874. This implies that the complexity and intensity modifiers need to range from roughly 0.8 to 1.2, such that the combination of a low-complexity and a low-intensity modifier is appropriate.
Complexity:
There are 2 major components which should be included in a complexity modifier: 1) Patient complexity: Patients on multiple meds and with multiple problems are much harder to look after, and 2) The complexity of the health problem being addressed: This relates to the complexity of decision making or the intervention and the training and experience required.
1) Patient complexity: In health services research, comorbidity indexes are used to adjust for patient complexity. Historically the Charlson Comorbidity Index was used, but more recently the Johns Hopkins Aggregated Diagnosis Groups methodology, which has been validated in Ontario, is commonly used. This method of case-mix grouping captures all morbidities for which a patient receives care during a defined period, typically within 3 years before the intervention date. The ACGs can be collapsed into 6 resource utilization bands (RUBs) on the basis of expected use of health care resources. The Institute for Clinical Evaluative Sciences (ICES) could be approached to provide specialty-specific estimates of the average complexity of patients by specialty, weighted by the dollar value of the intervention. This would provide a scientifically valid and robust estimate of patient complexity. I am an ICES Scientist and am familiar with the databases. I would be happy to facilitate such an analysis through ICES Western.
2) The complexity of the health problem being addressed: Measurable proxies for complexity are relatively simple for surgical procedures but challenging for non-procedural specialties. They require value judgements and are hard to quantify, especially across specialties and the profession. Likely the simplest proxy for the complexity of a specialty as a whole is the CAPER years of training. For the last 50 years we have accepted a system wherein a surgical assistant for simple cases is paid at about 30-40% of the surgeon's fee. This suggests that a complexity modifier needs to have a range that reflects this degree of variation if applied at the fee code level. The current skills acquisition modifier provides some degree of adjustment, but its range is far too small to truly reflect variation in complexity. Expanding the skills acquisition modifier to 6% per year of additional training would provide a more realistic range when used in conjunction with risk and intensity modifiers. We would therefore recommend that a complexity modifier be based on average patient complexity as measured by the Johns Hopkins Aggregated Diagnosis Groups methodology, and that the skills acquisition modifier be increased to 6% per year of training using CAPER data.

Introduce an Intensity Modifier:

Intensity is a measure of the stress, cognitive effort and concentration associated with a task. It is also a value judgement and is difficult to quantify. Every physician's job has low-intensity and high-intensity aspects, but it is unequivocal that some specialties have a far higher average intensity than others, and Ontario, just like the rest of the world, should compensate them appropriately. Overcompensating groups that have low intensity is not a solution; it is just another problem, and a major one with significant long-term ramifications for the OMA and for the profession.
The most common fee codes in each specialty should be classified by each specialty and then reviewed by a multispecialty panel. They should be assigned an intensity score on a five-point scale, and a weighted average intensity should then be determined for each specialty.

Proxies for Risk:
Risk is likely best quantified by malpractice premiums. The US uses such a model and assigns a malpractice liability component to each fee code. Two potential options for a risk modifier would be:
1: CMPA premiums at the specialty level. This would require disclosure of CMPA practice type from the Ontario College; OMA Economics could then analyze, by section, malpractice dollars per dollar of income by specialty group. It could be done without College disclosure based on assumptions, but there would be some slight inaccuracies arising from part-time practices.
2: Use United States Practice Liability Insurance relative value units (PLI RVUs). This would require mapping the OHIP fee codes to US CPT codes and then using the aggregate specialty PLI RVU per dollar billed by specialty as a proxy. This would give a much more granular assessment of risk.
Note: If intensity and risk modifiers are NOT introduced, then the skills acquisition modifier should be expanded to 8% per year.

OMA Q2: Your section has suggested a mechanism to quantify after-hours work. Besides using tracking codes (which would involve MOHLTC cooperation), what other methodologies could the OMA use, independent of the MOHLTC, to accurately quantify after-hours work?
GenSurg A2: The section does not understand this question, or whether it is separate from question 4. After-hours work is already quantified by the CANDI methodology using fee codes (i.e. table #4 in the CANDI methodology, 2009).
Beyond that, after-hours work would be very difficult, almost impossible, to measure, and this is one of the reasons why CANDI is flawed to begin with. Examples include dictating at the end of the day in order to optimize the efficiency of a clinic or OR day, or reviewing lab work online in the evening. It is difficult to envision an auditing methodology that could capture this without 24-hour monitoring of a sample, which is improbable.

OMA Q3: Your section has suggested that a mechanism include "charting and admin" work, which is not accounted for in CANDI. How can "charting and admin" be accounted for in relativity? What evidence-based mechanism could be implemented as a fair and equitable solution for all members?
GenSurg A3: One of the inherent flaws in CANDI is that this work is almost impossible to measure consistently and accurately. We have no suggestion for how it could be done. Shifting from an hourly or daily basis to an annual basis and assessing only true full-time physicians would mitigate this issue to a large degree and would be our preferred approach.

OMA Q4: Your section has suggested a "burden of call" modifier. How can the percentage of work done by each specialty after hours be accurately accounted for? What mechanism would be used to create a fair and equitable evidence based "burden of call" modifier?
GenSurg A4: Burden of call is significant and varies considerably by specialty. It impacts significantly on quality of life, work-life balance and family life. It plays a dominant role in the low participation rates of women in some specialties, especially surgical specialties. Even after 20 years of gender equality at the medical school level, there is still large variation by specialty: in family practice 63% of those under 40 are female, while in high burden-of-call specialties the proportion remains low. Burden of call is not simply out-of-hours work, though that is important; it also encompasses the complete unpredictability of life when on short-leash call and responsible for life-threatening emergencies. For example, a daytime emergency, which would not be captured by the current CANDI methodology, means that kids cannot be reliably picked up from daycare or school, and alternative arrangements, usually expensive, must always be available even though such events happen infrequently.

Simple measurement of out-of-hours income does not capture the variability between specialties, nor the fact that much of the burden of call still occurs during daytime hours. The fact that almost no one is lining up to do call suggests that the on-call premiums are significantly undervalued. In jurisdictions where on-call work is very well remunerated (e.g. Winnipeg general surgery), there is competition to take call.

On-call work typically uses the same fee codes as elective work but is far more complex. For example, emergency gall bladder and colon surgery are far more difficult and have mortality rates several times higher, yet are paid using the same fee codes as elective work. In many situations the out-of-hours premiums do not even compensate for the extra time associated with emergency work, such that hourly income may well be less than during daytime hours.

There are, however, reasonable proxies that correlate well with physicians' impressions, as well as with the "whose cars are in the physicians' parking lot at night" test. The following proxies provide an opportunity for a composite scoring system that could be used to create a Burden of Call Modifier:
A: After-midnight work: The volume of work done after midnight and before 7AM is a proxy for very urgent, high-stress clinical situations, though it only accounts for 7 hr/24 hr = 29% of such situations.
B: Volume of Weeknight work: This generally represents urgent clinical situations that cannot wait till the next day.
C: Weekend and holiday work: This generally represents less urgent situations that can wait till the next day but not until Monday. Some specialties have almost no weeknight work but moderate weekend work; their burden of call is less than that of those with weeknight and after-midnight work.

To convert these proxies to a Burden of Call modifier, we would recommend a composite scoring system that would intrinsically value the most urgent situations more heavily:
Burden of Call Modifier = Total Specialty Income / (Total Specialty Income + (24/7) × A + 2 × B + C)
Note: Shift work such as Emergency medicine and Obstetrical fee codes would have to be dealt with by an alternate mechanism.
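A sketch of this composite score with hypothetical dollar volumes. We read the proposed formula as Total Income / (Total Income + (24/7)·A + 2B + C); that parenthesization is our interpretation, while the (24/7), 2, and 1 weights on A, B and C come directly from the proposal:

```python
# Hypothetical figures; formula parenthesization is our interpretation of
# the proposed composite: Total / (Total + (24/7)*A + 2*B + C).

def burden_of_call_modifier(total_income, after_midnight, weeknight, weekend):
    """Weight after-midnight work most heavily, weeknight work next,
    weekend/holiday work least, and discount daytime income accordingly."""
    weighted_call = (24 / 7) * after_midnight + 2 * weeknight + weekend
    return total_income / (total_income + weighted_call)

# A specialty with heavy after-midnight work gets a smaller modifier,
# i.e. its measured daytime income is discounted more:
heavy = burden_of_call_modifier(300e6, after_midnight=10e6, weeknight=20e6, weekend=15e6)
light = burden_of_call_modifier(300e6, after_midnight=0.0, weeknight=1e6, weekend=5e6)
assert heavy < light < 1.0
```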

OMA Q5: Gross Income can be measured on an annual, weekly, daily, or hourly basis. Which of these options do you believe is the most appropriate to use in a relativity model and why?
GenSurg A5: We would advocate for measurement on an annual basis, but considering only true full-time physicians. The main reason for this preference is that it eliminates the inaccuracies in the current hours of work modifier. Daily billing is problematic, as there are many part-time physicians. Weekly billing is similarly problematic for part-time physicians with different partial- or full-day practices. Hourly billing is problematic because there is a wide distribution of times per fee code, and measuring them on a consistent basis would be nigh on impossible.

We would strongly suggest that the measurement of gross annual income be restricted to full-time physicians only, with no attempt made to determine hours of work per day. The distribution of physicians' income is NOT a normal distribution but has a heavy lower tail caused by the many part-time physicians. This skews any attempt to derive average income from means or medians. However, if strict definitions are used to identify full-time physicians, then, on the assumption that they are working all day, the hours-of-work issues and the huge part-time effect can be avoided.

We would suggest the following selection criteria for analysis of full time specialty income. Select an initial cohort by specialty with the following criteria.
• Age 38-55: eliminates most semi-retired physicians and those still building their practices.
• Bills >220 days per year (this threshold may need to be adjusted, or a minimum level of billings per day used instead)
• Bills at least 75% of mean specialty income
• Has significant billings for unsociable hours; we suggest within one standard deviation of the full-time specialty average.

This cohort would have a much more normal distribution of income. The top and bottom 15% should be truncated, as they may have skewed practice patterns that account for their income levels. The remaining cohort would be used to determine the average gross income.
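The selection and truncation steps above can be sketched as follows. The field names, threshold values and data are illustrative only, and the unsociable-hours criterion is noted but not implemented because it needs specialty-level billing distributions:

```python
# Illustrative sketch of the proposed full-time cohort selection.
# Field names are hypothetical; thresholds follow the criteria above.
import statistics

def full_time_cohort(physicians, mean_specialty_income):
    """Apply the proposed full-time selection criteria.
    (The unsociable-hours criterion would also be applied in practice,
    but requires specialty-level billing distributions, so it is omitted.)"""
    return [
        p for p in physicians
        if 38 <= p["age"] <= 55                                # not semi-retired, not building a practice
        and p["billing_days"] > 220                            # bills >220 days per year
        and p["gross_income"] >= 0.75 * mean_specialty_income  # at least 75% of mean specialty income
    ]

def trimmed_mean_income(cohort, trim=0.15):
    """Truncate the top and bottom 15% of incomes, then average the rest."""
    incomes = sorted(p["gross_income"] for p in cohort)
    k = int(len(incomes) * trim)
    kept = incomes[k:len(incomes) - k] if k else incomes
    return statistics.mean(kept)
```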

Sensible approaches need to be taken for fee codes such as dialysis and post-op care. The 50/168 multiplier is inappropriate for dialysis. A surgical fee code represents 2 weeks of care, including after-hours care, and a portion of the surgical fee needs to be discounted by a multiplier to account for that. The 50/168 multiplier would be too large an effect, and a reasonable alternative should be sought. (An analysis by ICES based on average length of stay, ER visits, and travel codes within 2 weeks could provide a scientifically valid estimate. I would be happy to facilitate such a study through ICES Western.)

OMA Q6: Two main methods to estimating overhead costs are by collecting data on actual overhead expenses from physician practices (e.g. surveys or office audits) and by estimating the overhead costs for a model physician office (the cost of running an office that reflects the typical space, personnel, equipment and supplies that a physician would require for patient care). Which of these two options do you believe is the most appropriate to use in a relativity model and why?
GenSurg A6: We strongly prefer the model physician's office over audits and surveys. We are concerned that personal preferences for office environments could dominate, especially in those subspecialties that have significant non-OHIP income. We realize that a cosmetic surgeon's or dermatologist's office may well be appointed differently than a general surgeon's office, but such differences are preferences contrived to improve market share of cosmetic surgery and are not medically necessary.

Some General Surgeons and Gastroenterologists are in the unique position of providing endoscopy in private settings without a compensating technical fee. Over 110,000 private endoscopies are done in this manner, with clinics typically claiming 50% of procedural fees. Such costs need to be accounted for in whatever model is used. For general surgery, a rough estimate is that such fees account for about 15% of all expenses paid by our section. If the mechanism for capturing expenses does not account for this situation, it will have a significant impact on our section.

Lost Opportunity Costs:
In previous submissions we have argued that the methodology of the Lost Opportunity Cost modifier needs to be reviewed, with amortization factors based on realistic rates of return and net-to-net comparisons. We wish to add and emphasize that the lost opportunity cost is not merely fiscal. For some arduous residencies it is also the lost opportunity to have a family or to bond with your family. For some couples it means being apart for years. And for many residents it means living in a city not of their choice, separated from extended family. These years are in the prime of life, and for some women are close to the end of their potential childbearing years. These intangibles are obviously hard to quantify or monetize, but we would suggest that an increase in the skills acquisition modifier of 1% might be a reasonable start.

Our preferred approach to Relativity:
Relativity is determined at the specialty level not at the fee code level. The sections are enabled to modernize the fee schedule, make fee code adjustments and introduce new fee codes to meet the aggregate sectional target income. If they do not want to participate in that process, then across the board multipliers are used to meet their sectional relativity targets.

Relativity is achieved rapidly, with both positive and negative adjustments, such that relativity is reached within 5 years. The relativity formula is reassessed every 4 years to prevent gaming. Specialty relativity targets are based on the daytime annual income of full-time physicians only. Specialty targets are adjusted for:
• Risk: Based on CMPA premiums
• Expenses: Based on modelled rather than self-reported expenses
• Complexity: Based on Johns Hopkins' Aggregated Diagnosis Groups and by increasing the skills acquisition modifier to 6%
• Intensity: Based on categorization of the top 30 billing codes in each section
• Burden of Call: Based on composite score derived from non-daytime billing data
• Lost Opportunity Cost: CAPER years of training, with amortization factors based on realistic rates of return and the net incomes of both residents and GPs.

Specialty relativity targets would be adjusted annually to account for population growth and the ageing population.
Example: Specialty A has an adjusted net annual elective income that is 12 percent above average. The annual daytime billings of that specialty are $300 million. The section therefore has to make fee code adjustments that will result in savings of $36 million. These are phased in over 4 years, so in the first year the section would have to make fee adjustments to save $9 million. The section tariff committee would recommend the fee code adjustments. If the targets are not met, a clawback would be enabled. If targets are overshot, a credit would be enabled.
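The example's arithmetic, checked directly (all figures are taken from the example itself):

```python
# Verifying the worked phase-in example above.
import math

daytime_billings = 300e6   # annual daytime billings of Specialty A
excess = 0.12              # 12 percent above the adjusted target
phase_in_years = 4

total_savings = daytime_billings * excess
annual_savings = total_savings / phase_in_years

assert math.isclose(total_savings, 36e6)   # $36M of fee code adjustments in total
assert math.isclose(annual_savings, 9e6)   # $9M of adjustments in the first year
```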
September 11, 2017
Re: OMA Relativity Review Committee Survey - Section/OAGS Complaint on Facilitation of Relativity Survey

Attn. OMA Relativity Review Committee
c/o Dr. Del Dhanoa, RRC Chair

Ontario Medical Association
150 Bloor St. West, Suite 900
Toronto, Ontario M5S 3C1
Email: relativity@oma.org

Re: Complaint on Facilitation of Relativity Survey

Dear Relativity Review Committee:


The Section on General Surgery recently sent out a letter to all of its members strongly encouraging them to participate in the OMA’s Relativity Review Committee (RRC) Survey regarding work hours, call and overhead, as per your request.

Please note that our Section Executive is openly opposed to using grossly biased, self-reported and unvalidated data for the distribution of a $12 billion Physician Services Budget. That being said, we felt strongly that if this was the OMA’s way of collecting the data, then we wanted as many General Surgeons as possible to contribute and report accurately on the data requested. One of the greatest flaws with the original CANDI calculations was the fact that many of the calculations were based on ridiculously low sample sizes (again, apart from the inherent bias in self-reported data).

We were much chagrined to learn recently that a number of our members had not received the relativity survey at all! This includes current District delegates and District executive members! It was our understanding that this survey - at least in an attempt to collect robust data - was going to be sent out to ALL members of ALL sections.

When we questioned OMA staff about this, they informed us that the survey went out via the "ThoughtLounge" platform, allegedly reaching 27,000 members. Most members do not recognize the ThoughtLounge as an official method of communication for the OMA but rather, as OMA staff have confirmed, as a voluntary e-panel for general information gathering and input from subscribed members. As such, members who were unsubscribed from the ThoughtLounge, who had declined OMA surveys, or who had outdated email addresses on file would not have received it. OMA staff suggested that members who had not received the survey could re-subscribe. This seems like an inappropriate response to our concerns.

We can all agree that there are surveys of interest and then there are surveys of significant importance sent out by the OMA. The results of this relativity survey are presumably going to be used to update CANDI or some variation thereof - a VERY important issue for our Section and arguably for the profession as a whole. Why, then, would the OMA seek input on such a far-reaching and significant survey via what may be perceived by most as a "ThinkTank" email group platform, which might easily be ignored or deleted? This survey deserved far more prominence and attention. The issue of relativity, for our section and many others, is far more important than binding interest arbitration (BIA), and yet we were inundated last spring with emails from the OMA directly to obtain input and to urge members to vote on BIA. This relativity issue should have been treated with the same, if not more, importance than the BIA vote.

It is very disappointing to see that, despite claims of rebirth and of commitment to OMA members' needs and concerns within the organization, something as significant as this long-anticipated relativity data collection survey has failed to reach the people it needs to reach.

Please remedy this situation immediately, as the September 17 deadline for response to the survey is less than a week away. We would suggest resending the survey as a generic link from the usual OMA email communiques that can reach all registered OMA members, or in this case all fully practicing members.

Sincerely,
Dr. Alice Wei, President
Ontario Association of General Surgeons

Dr. Chris Vinden, Section Chair
OMA Section on General Surgery

cc. Dr. Sahba Eftekhar, Senior Lead, Special Projects, ERA & Technology
cc. Simone Noble, Senior Project Manager, Survey Insights, ERA & Technology
cc. Dr. Shawn Whatley, OMA President
September 7, 2017
Re: OMA Relativity Review Committee Survey - Section Concerns on the State of Relativity and Current CANDI Premise

Attn. OMA Relativity Committee
From:  Section on General Surgery
Date:  September 7, 2017
Re:  Section Concerns on Relativity Issues

  
Dear Committee Members:
  
The underlying premise behind CANDI (Comparison of Average Net Daily Income) is that physicians should be paid the same for a day’s work regardless of complexity, intensity or skill required for that job.  We categorically reject this premise. 

It is absurd, and we can find no similar examples in medicine or any other professional group in any democratic society. We do not think it applies even within our own specialty. Under fully implemented CANDI, the surgical assistant should expect to make 88% of the surgeon's fee! A physician doing "well baby checkups" or "hypertension follow-up clinics", tasks that are often delegated to nurse practitioners, would expect to be paid almost the same hourly rate (88%) as a hepatobiliary surgeon doing a Whipple or liver transplant. We feel that this concept is wrong on many fronts - logically, ethically and even morally in a publicly funded system. It is the antithesis of the relativity that CANDI seeks to correct. It distorts relativity further and represents the maximal overcorrection of the current state. The current problem is not that all specialties are not paid the same, but rather that they are not paid the same for work of equivalent intensity. We acknowledge that there are many examples of fees that are currently overpaid and underpaid, but flattening them all so that they are all paid essentially the same is just another huge distortion that will create new instability in the system.

CANDI significantly threatens the long-term viability of those specialties that do low-intensity work. While in the short term they will benefit hugely, a lot of their work can be delegated to allied health workers. As such, any rational and fiscally responsible government would be expected to shift low-intensity work away from physicians if they insist on being paid at the same rate as a specialist providing complex, high-intensity services. This is already happening in primary care, with a proliferation of over 25 nurse-led clinics in the last few years. CANDI is essentially pricing family doctors out of their own market. Is this really what family doctors want? The OMA needs to show some big-picture, long-term leadership on this issue. CANDI will make family practice unsustainable.

We are, however, well aware that the governance of the OMA is dominated by family doctors, who make up almost half of the OMA. So the probability of their agreeing to less than fee equality with specialists is low. We feel that, in the long term, the path taken by other large provinces such as BC and Quebec, where there are separate negotiations for specialists and family doctors, is in the best interests of the profession and should be pursued.

We would also direct members of the OMA Relativity Committee to review the very first treatise on economics ever published - The Wealth of Nations by Adam Smith, published in 1776. In particular, we would recommend reading chapter 10, part 1: Inequalities arising from the Nature of the Employments Themselves, page 83, via the following link: https://www.ibiblio.org/ml/libri/s/SmithA_WealthNations_p.pdf (or alternatively https://goo.gl/dmzpjj).

Given that rejecting and replacing CANDI outright is politically unlikely, our section pragmatically suggests the following modifications to it.

Hours of Work Modifier: 
This is essentially the foundation of CANDI, but we acknowledge that it is incredibly hard, if not impossible, to measure fairly and accurately. This reason alone is sufficient to suggest that an alternative to CANDI be sought. Self-reported surveys should not be the basis for the dispersal of $12 billion per annum. One of our main concerns is the very large number of physicians with part-time practices and the inability of the CANDI methodology to capture this. Income distribution in almost all specialties does NOT follow a normal distribution but has a very long lower tail full of part-timers. In most specialties, 50% of the physicians do 80% of the work, and the other 50% do only 20% of the work. In the current system, a physician who takes half days off (a very common situation) has a very different impact on the CANDI modifier than a physician who works the same number of hours part time by working fewer full days - also a common situation. We are highly skeptical of the ability of a survey to capture the extent of part-time work.

Overhead Cost Modifier:
We strongly recommend that unvalidated, self-reported survey data NOT be used for determining overhead cost. The majority of expenses consist of secretarial labour, physical space rental, and insurance. We recommend that a "virtual model office" be created for each specialty and that a third-party consultant provide provincial average costing data to determine the average cost of overhead expenses. Several specialties have significant private income (plastic surgery, dermatology and ophthalmology), and overhead costs that generate private income should not be ascribed to the CANDI model. We would remind the Committee that the variance in this modifier is much larger than in the other modifiers, ranging from 0.58 to 0.91, such that this modifier can have a dramatic effect. For example, the difference between two surgical specialties such as ophthalmology and general surgery is 12%. This is equivalent to the value currently ascribed to three extra years of training.

Opportunity Cost Modifier:
 We have multiple concerns about the methodology used.
1. Rate of Return: The opportunity cost modifier is designed to compensate specialists for the difference between their incomes during residency training and what they would have earned in the workforce as GPs. The calculations used an amortization rate based on the "real rate of return" of Canada Savings Bonds; the value used in CANDI is 0.63%. We strongly suggest that this rate reflects neither the average investment portfolio nor the borrowing costs of an average practicing physician. The average 30-year return on equities, or the average rate of a 10-year mortgage, would be a more appropriate rate, congruent with the investment and credit options used by most physicians. Using a reality-based amortization factor would make this a much more accurate modifier.
The CANDI rate represents neither the borrowing cost of an average physician in debt nor the expected investment return of an average physician saving for retirement, and is thus NOT an accurate representation of the true opportunity cost. The vast majority of physicians' pre-retirement savings are invested in equities, and we would suggest that the average inflation-adjusted 30-year return on equities (S&P 500 or TSE) be used to calculate the amortization factor. Alternatively, the average return on a 10-year fixed mortgage (generally a similar value) could be used. This would significantly decrease the amortization factor.
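An illustrative sensitivity check of the point above: compounding a hypothetical $80,000-per-year income gap (the rough GP-versus-resident figure discussed elsewhere in this submission) over five hypothetical extra training years, at the CANDI rate versus an equity-like rate. The structure of the calculation is our sketch, not the actual CANDI formula:

```python
# Sketch only: all incomes and training lengths are hypothetical round
# numbers, and the compounding structure is our assumption, not CANDI's.

def future_value_of_foregone_income(annual_gap, extra_years, rate):
    """Compound each year's foregone income (GP income minus resident
    income) forward to the end of the extra training years."""
    return sum(annual_gap * (1 + rate) ** (extra_years - y)
               for y in range(1, extra_years + 1))

gap = 80_000          # hypothetical annual GP-vs-resident income gap
extra_years = 5       # hypothetical training beyond GP

low = future_value_of_foregone_income(gap, extra_years, 0.0063)  # CANDI's CSB rate
high = future_value_of_foregone_income(gap, extra_years, 0.05)   # equity-like rate

# A realistic rate of return materially increases the computed opportunity cost:
assert high > low
```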

2.  Net vs Gross:  We have significant concerns over the varying use of net and gross incomes in the formula.  The formula is based on the difference between GP income and resident income to calculate the lost opportunity cost (LOC).  However, it uses NET income for GPs and GROSS income for residents.  This is patently unfair.  Residents also have significant work-related expenses, such as tuition, books, software, computers, student loans, car expenses, travel to conferences, and the costs of exams, licensure, and malpractice coverage.  A fairer comparison would be net to net.

We also have concerns that gross rather than net income is used to determine the final percentage of LOC.  If net income is used in the initial calculations, it is illogical to use gross income to determine the LOC modifier.  A specialty with identical years of training and identical net income to another specialty would be at a significant disadvantage if its expenses were much higher.  The overhead cost modifier has already corrected for varying expenses, and using gross rather than net effectively counts overhead expenses twice, unfairly.

3. Full-time comparators:  We would seek confirmation that the figures used to calculate the difference in income between a resident and a family doctor are up to date, accurately reflect non-fee-for-service income, and, since all residencies are full-time jobs, that only GPs in full-time positions are included in the comparison.  Our impression is that the stated difference of <$80,000 per year understates the true income of new graduates.

Per Diem Modifier:
The vast majority of surgical fee codes pay for a 2-week episode of care that includes all surgical visits.  We would argue that if chronic dialysis fees are prorated on a 24-hour basis, then surgical fees should receive similar treatment.  We realize that this would be a huge distortion and would cripple the whole process, but we suggest that an alternative approach be explored and applied both to surgical episode-of-care billing and to dialysis.  Night and weekend management of post-operative surgical inpatients is a significant burden that is currently, and inaccurately, attributed to daytime income in CANDI.  Options would include unbundling and then prorating only the post-operative care.  We would also argue that MRP codes be prorated by 50/168.
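The 50/168 fraction proposed for MRP codes follows directly from CANDI's own definition of the daytime week (7AM-5PM, Monday to Friday); the weekly fee amount in this sketch is hypothetical:

```python
# The 50/168 proration: CANDI's daytime week is 7AM-5PM (10 hours) on each
# of 5 weekdays, i.e. 50 of the 168 hours in a week.
DAYTIME_HOURS_PER_DAY = 10   # 7AM to 5PM
WEEKDAYS = 5
HOURS_PER_WEEK = 24 * 7      # 168

daytime_fraction = (DAYTIME_HOURS_PER_DAY * WEEKDAYS) / HOURS_PER_WEEK

# Hypothetical example: attributing only the daytime-weekday share of a
# weekly MRP fee to daytime income, as proposed above.
mrp_weekly_fee = 210.0  # hypothetical dollar amount
daytime_share = mrp_weekly_fee * daytime_fraction
print(f"daytime fraction: 50/168 = {daytime_fraction:.3f}")
print(f"daytime share of a ${mrp_weekly_fee:.0f} weekly fee: ${daytime_share:.2f}")
```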

Skills Acquisition Modifier:
This is the only part of CANDI that acknowledges a true difference between the income of GPs and specialists, and unless additional modifiers are introduced that reflect work intensity, risk, and complexity, we recommend that this modifier be significantly increased.  Other countries, including communist Cuba and the progressive European democracies, pay their specialists substantially more than GPs: in Cuba, specialists are paid 70% more, and in the Netherlands more than double.  The current 12% difference is insufficient.  If 4 years of medical school doubles income compared to an average undergraduate degree, why would an additional 5 years of a grueling surgical residency result in only a tiny further increment?

We recommend that the skills acquisition modifier be increased to 6.6% per year of additional training.  This would put family doctors’ equivalent hourly rate at 80% of most specialists, which was a historical long term goal of the OMA and is still far higher than almost every other country.
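The arithmetic behind the 80% figure can be sketched as follows; the assumptions that the 6.6% per year applies simply (not compounded) and that most specialists train 3 to 4 years longer than family doctors are ours, not CANDI's:

```python
# Hedged sketch: if each extra year of training adds 6.6% (applied simply,
# not compounded -- an assumption), the GP-to-specialist hourly ratio for
# n extra years of training is 1 / (1 + 0.066 * n).
RATE = 0.066
for extra_years in (2, 3, 4, 5):
    gp_ratio = 1 / (1 + RATE * extra_years)
    print(f"{extra_years} extra years -> GP at {gp_ratio:.0%} of the specialist rate")
```

At 3 to 4 extra years the ratio brackets the 80% target noted above.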

We also strongly recommend that the skills acquisition modifier be based on actual years of formal training rather than the minimum.  Many academic and some large community hospitals require Royal College subspecialty training that is not reflected in the CANDI methodology.  Cancer Care Ontario requires hepatobiliary or surgical oncology fellowships for access to surgical resources in those fields.

General Surgery is the home of many highly specialized subspecialties that have significant extra training requirements.  Colorectal Surgery and Pediatric General Surgery are formal Royal College subspecialty training programs with an additional 2 years of training.  Transplant Surgery, Hepatobiliary Surgery, and Surgical Oncology are all Cancer Care Ontario-required subspecialties with 2 additional years of training.  To be fair to these groups, they need to be considered separately from General Surgery with regard to the skills acquisition modifier.

Introduce an Intensity Modifier:
We recommend the introduction of an intensity modifier.  Every physician's job has low-intensity and high-intensity aspects, but it is unequivocal that some specialties have a far higher average intensity than others, and Ontario, like the rest of the world, should compensate them appropriately.  Overcompensating low-intensity groups is not a solution; it is another problem, and a major one, with significant long-term ramifications for the OMA and for the profession.
The most common fee codes in each specialty should be classified by that specialty and then reviewed by a multispecialty panel.  Each code should be assigned an intensity score on a five-point scale, and a weighted average intensity determined for each specialty.  Use of this modifier would mitigate the need for an increase in the skills acquisition modifier.
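The proposed weighted average could be computed as below; the fee codes, billing volumes, and panel scores are entirely hypothetical, with scores on the five-point scale suggested above:

```python
# Sketch of the proposed specialty intensity score: a billing-volume-weighted
# average of panel-assigned intensity scores. All inputs are hypothetical.
def weighted_average_intensity(codes):
    """codes: list of (annual_billing_volume, intensity_score) pairs."""
    total_volume = sum(volume for volume, _ in codes)
    return sum(volume * score for volume, score in codes) / total_volume

# Hypothetical specialty with three common fee codes (volume, 1-5 score).
specialty_codes = [
    (12_000, 4),  # e.g. a major procedure code: high intensity
    (30_000, 2),  # e.g. a routine follow-up assessment code
    (8_000, 5),   # e.g. an emergency procedure code
]
print(f"weighted average intensity: {weighted_average_intensity(specialty_codes):.2f}")
```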

Introduce a Risk Modifier:
We recommend the introduction of a risk modifier, using malpractice premiums as a proxy for risk.  Risk is a separate entity from complexity and should be dealt with separately.  While the skills acquisition modifier reflects complexity, it correlates poorly with risk: some specialties with long training periods have low malpractice premiums based on minimal risk or possibility of poor outcomes.  Other jurisdictions, most notably the USA, include a malpractice component in their relative value fee schedules.  A modifier that is an inverse function of malpractice premiums would improve relativity.

Introduce a Burden of Call Modifier:
Physicians in short-leash specialties make significant sacrifices in family and social life compared to physicians with minimal call obligations.  HOCC dollars and on-call premiums in no way compensate for the impact of call.  While many physicians seek access to daytime resources, almost none seek access to on-call work, suggesting that on-call work is significantly undervalued.  A burden-of-call modifier based on the percentage of each specialty’s work done at unsociable hours would be a reasonable proxy and is easily calculated.

Alternatives to CANDI:
We would ask the Relativity Committee to draw on the collective wisdom of other Canadians by surveying the fee schedules of the other major provinces and devising a national average fee schedule.  This would be a relatively simple approach that would almost certainly improve relativity, as many other jurisdictions have gone through this process.

Implementation:
Regardless of the relativity model chosen, we would strongly advocate that it be implemented quickly, over 3 to 4 years.  There will need to be significant negative adjustments to some specialties.  The OMA's historical model of never reducing fees while allowing underpaid specialties to gradually catch up is untenable, as it would take a generation and a doubling of the health budget.  The historical insistence that overpaid specialties never face a fee cut is absurd; that concern should be redirected to those that have been undervalued for years.

We would also strongly recommend that relativity be applied at the section level, not at the fee code level.  Relativity should be used to define the total fee allocation to each section and then that section can decide how to divvy up their section’s allocation by tweaking their own fee codes.  Even if no method of achieving relativity between sections is reached, we would strongly advocate for the ability of sections to do their own revenue neutral adjustment for their own fee codes.

Respectfully Submitted,
Chris Vinden 
Chair, Section on General Surgery

CV/lq

September 5, 2017
Re: OMA Relativity Review Committee Survey - Instructions for Members on How to Complete the Survey

Attn. OAGS Members & General Surgeons of Ontario
From:  Ontario Association of General Surgeons
Date:  September 5, 2017
Re:  OMA Relativity Survey

Dear Colleagues,
  
The Ontario Medical Association is undertaking a much-needed review of relativity; however, we have significant concerns about the process.  The OMA has undertaken a survey to collect information about hours of work and overhead expenses, which are significant components of the CANDI formula.  CANDI (Comparison of Adjusted Net Daily Income) is the method used for deciding which specialties are targeted for fee increases and fee cuts.  Each of you should have received, or will soon receive, the survey request by email from the OMA.

The OMA Section on General Surgery and the OAGS executives have significant reservations about this methodology being used to establish the relativity of fees for each specialty.  In particular, the use of unverified self-reported data on the number of hours worked per week cannot be the way a $50 million organization decides on the disbursement of $12 billion per annum.  Self-reported data is scientifically unsound: it can easily be gamed for advantage, and it can contain biases due to different response rates between specialties.

We lobbied for this survey not to go out, and recommended that more scientifically rigorous and robust measures of work hours and overhead costs be sought.  We feel that self-reported data is unreliable and highly likely to be gamed, since the stakes are very high.  When a similar process occurred in 2011, the results lacked face validity: the data did not pass any sort of reality check.

General Surgeons have a pretty good idea of whose cars are in the physicians’ parking lot at the beginning of the day and whose are the last to leave.  There is little doubt that the last survey results did not reflect our true work hours: according to that survey, General Surgeons work the least of all surgical specialties!  In that survey, each minute of extra time worked per average day would have corresponded to about $1,000 per year in income, and an extra hour would have increased income by 15% had CANDI been fully implemented.  As a Section we are completely opposed to the CANDI methodology; we consider it unfair and inappropriate.  Nevertheless, we need to be practical.  As many specialties win big with CANDI, it is unlikely to go away.  So, as we continue to lobby for a better method of establishing relativity, we should still participate in the process for the time being.

THEREFORE, WE RECOMMEND THAT GENERAL SURGEONS OF ONTARIO COMPLETE THE SURVEY.  It may be our only opportunity for the time being to speak out about our work hours.  It is unclear whether this survey data will ever be incorporated into a funding model or an updated CANDI methodology.  Nevertheless, we encourage full-time General Surgeons to participate and fill it out accurately, mindful that every minute may count.


It is important to include in your work hours all clinical activities, not just hours involved in direct patient contact.  For example, administrative work such as billing, dictating, reviewing test results, and patient correspondence done at the end of the day or in the evening should be included.  Test results are often reviewed at later times, and routine hospital visits are often done outside the CANDI definition of a day, which is 7AM to 5PM.  Travel time between different hospitals or hospital sites is also part of the clinical workday.  Please ensure that you include all time spent in all aspects of clinical care in your survey responses, and make certain that all overhead expenses are fastidiously included.

We will continue to lobby for the CANDI model to be rejected and for a new methodology that respects the complexity and intensity of work to be adopted.

For your review, we have also attached on the next page an excerpt of the last relativity survey results (2011).  Please note the very low participation: many sections had single-digit responses, which alone would have rendered any academic survey uninterpretable.  Nevertheless, the results were used and have been an integral part of the CANDI formula over the past 5 years!!


Sincerely,

Dr. Alice Wei, President
Ontario Association of General Surgeons

Dr. Chris Vinden, Section Chair
Section on General Surgery

CV/AW/lq

March 11, 2016
Re: CANDI Review - Correspondence from Section on General Surgery to OMA Economics, Research & Analytics

March 11, 2016.

Kate Damberger
OMA Economics, Research & Analytics

Dear Kate,

RE: CANDI Review

The Section on General Surgery offers the following feedback and suggestions for the review of relativity methodology and data.  We have previously made a number of submissions to OMA reviews regarding CANDI, and hope that those will also form part of the review.

  1. We continue to be concerned that work intensity and complexity are not variables in relativity determinations.  We believe these issues are important for achieving a fair fee schedule that compensates those performing more complex clinical work at a higher rate.
  2. We feel strongly that actual years of training, rather than minimum years of training, should be used in CANDI calculations.  For example, data on recent general surgery graduates reveals that the average years of training is 6.8, while CANDI uses a value of 5 years.  Such information should be relatively easy to determine, and could be recalculated every 5 years or so, as it is unlikely to change significantly over short periods of time.
  3. Years of training is particularly relevant for the Section on General Surgery as many of our members are subspecialists and have required significant extra subspecialty training.  For example, hepatobiliary surgeons, colorectal surgeons, pediatric surgeons, and surgical oncologists are members of the General Surgery Section, and these individuals typically require a minimum of 7 or 8 years of training, rather than 5 years.  This is an issue of basic fairness for all Sections in this process.  For example, medical subspecialties have their own Sections, and are not restricted to the 4 years of training in the formula which is the minimum for internal medicine.  Similar equity needs to be extended to the Section on General Surgery which “houses” a number of surgical subspecialties.
  4. A reasonable approach needs to be developed to address billings from fee codes which compensate for 24 hours of care, where the vast majority of the care is delivered during typical weekday daytime hours.  For example, dialysis fees are discounted to 50/168 hours, even though the vast majority of the physician care is typical daytime work.  Under that analogy, all surgical fee codes should be reduced to a similar fraction as they include bundled post-operative care for most of the 2 weeks post-operatively.  Thus, one could conclude that only 30% of the fee is for the daytime weekday care, and 70% of the fee should be discounted as it represents after-hours and weekend work.  We need to develop a rational mechanism to address this issue in an equitable manner.  The currently used per diem modifier is not equitable and is not an appropriate mechanism to determine allocation of certain fees to daytime versus after-hours work.
  5. The current opportunity cost modifier disproportionately rewards higher earning specialties, and thus distorts relativity further.  We need a consistent modifier for additional training that treats all Sections equitably.
  6. Analysis should be performed of the typical career span for each specialty, including family practice.  It is likely that in certain specialties, most likely surgical specialties, one cannot continue a full practice beyond a certain age, and the career span may be more limited, with the individual semi-retiring into less remunerative practice.
  7. A better appreciation should be developed for part-time work, in particular to compare compensation when physicians only work a part-day rather than a full day.  Many “full-time” physicians work 8 to 9 hours a day, and then complete charting and administrative work after-hours.  On the other hand, many physicians would work a half day, then completing such administrative work during the regular day resulting in a calculation of 8 or 9 hours of work.  A consistent approach needs to be developed to define clinical work and measure the hours of such work.
  8. We strongly believe that the review of data used in relativity calculations needs to be comprehensive and needs to include all the key elements, including income, overhead, and hours of work.  The OMA has been continually updating income data over the past 5 years, and has made some minor adjustments to overhead data, but the hours of work data has not been reviewed or updated.  We believe that income data is relatively easy to obtain, often through OHIP data taking into account after-hours modifiers, and overhead data can also readily be obtained and can often be modelled to ensure reasonable overhead calculations.  However, hours of work data should be obtained in an observational manner and based on consistent definitions of what constitutes clinical work.  We believe this can be obtained readily and at reasonable expense by utilizing basic research methodology.  A randomized approach should be used to select a representative sample of a Section's membership for observational study to record hours of work in a typical week, and to compare such work hours to the OHIP billings data from that week.  The observational work can be conducted by students involved in a research endeavour for the OMA.  Patient confidentiality and privacy can be readily assured.  Section leaders should support mandatory participation for randomly selected clinicians.  The random selection of participants would enhance the validity of the data compared to the current self-reported data from self-selected Section members, much of which is determined from the participation of only one or two Section members.
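The randomized selection proposed in point 8 can be sketched in a few lines; the member roster, sample size, and seed below are hypothetical:

```python
# Sketch of the randomized selection proposed above: draw a reproducible
# random sample of a Section's membership for a one-week observational
# work-hours study. Member IDs and sample size are hypothetical.
import random

def select_observation_sample(member_ids, sample_size, seed=None):
    """Randomly select members for an observational work-hours study."""
    rng = random.Random(seed)  # seeded for a reproducible, auditable draw
    return sorted(rng.sample(member_ids, sample_size))

members = list(range(1001, 1201))   # hypothetical roster of 200 members
chosen = select_observation_sample(members, 20, seed=42)
print(f"selected {len(chosen)} of {len(members)} members for observation")
```

Seeding the draw lets the Section and the OMA independently verify that the sample was selected fairly.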

We strongly support the goals to improve and ultimately achieve relativity for all physicians.  We believe the OMA's current initiatives can and should be improved.  We would be happy to provide further input and engage in a discussion of the items and concerns we have raised, and on the issues raised by others.  We hope and expect there will be a thorough review of all the methodology and data issues related to CANDI and relativity.

Sincerely,
Jeff Kolbasnik, Chair, OMA Section on General Surgery
Chris Vinden, Vice-Chair, OMA Section on General Surgery

January 21, 2016
Re: CANDI Review - Correspondence from OMA President Mike Toth to Section on General Surgery

January 21, 2016

Dear Section Chair,

The issue of income relativity continues to be a significant issue for physicians and the Ontario Medical Association.
At the Fall 2015 meeting of Council, a motion was passed requesting additional research on relativity including soliciting written input from all Sections on the current relativity methodology used by the OMA - the CANDI (Comparison of Adjusted Net Daily Income) methodology.

I kindly ask that your Section Executive review the CANDI methodology, which has six main components listed below, and provide us with your comments on whether they are satisfactory or require further refinement:
 Gross Daily Income
 Non-FFS Modifier
 Overhead Modifier
 Hours Modifier
 Training Modifier
 Skill Acquisition Modifier

Your general input on the CANDI methodology is welcome. For your reference, the attached Appendix and links below will direct you to a number of CANDI relativity methodology resources on our OMA Members website:
1. A description of the CANDI Methodology in full detail.
https://www.oma.org/Member/Resources/Documents/CANDIReport2009.pdf
2. An update to the CANDI methodology incorporating the results of the PricewaterhouseCoopers survey and OMA Sections’ input, undertaken by the CANDI Relativity Implementation Committee.
https://www.oma.org/Member/Resources/Documents/CRICRelativityMethodologyReviewApril2012.pdf
3. The October 2015 CANDI methodology technical update undertaken by ERA.
https://www.oma.org/Member/Resources/Documents/TechnicalCANDIRelativit%20Methodology.pdf
4. All other documents relating to Relativity can be found on our relativity home page.
https://www.oma.org/Member/Resources/AgreementCentre/Pages/Relativity.aspx

In order to enable us to report to Spring 2016 OMA Council, we ask that you forward us your feedback in writing no later than March 11th, 2016. Please send your submission to Kate Damberger, OMA Economics, Research & Analytics, at ....

Sincerely,
Dr. Michael Toth
President


Appendix: Current CANDI Relativity Methodology
The current CANDI methodology is based on the notion that the relativity between OHIP specialties can be determined by comparing an appropriately adjusted net clinical income per day. Specifically, the methodology starts with the gross daily income (GDI), defined as the average daytime (7AM-5PM), weekday (Monday to Friday) fee-for-service billings per day per physician for each OHIP specialty. This definition excludes weekends, holidays, days with no billings, and after-hour billings during weekdays. The GDI is then adjusted by a set of six modifiers to account for the non-fee-for-service sources of income, frequency of billing, hours of work, overhead, and years of training.

The Non-Fee-For-Service Modifier adjusts the GDI (which comprises only the professional fee-for-service payments) to also include non-fee-for-service clinical sources of income, such as alternative payment plans (APP), primary care payments (e.g. Capitation and Comprehensive Care Management fee), Hospital On-Call (HOCC) funding, Workplace Safety and Insurance Board (WSIB) claims, non-fee-for-service psychiatric payments (e.g. mental health sessional fees and psychiatric stipends, Assertive Community Treatment, Divested Provincial Psychiatric Hospitals, Ontario Psychiatric Outreach Program), and Paediatric Hospital Stabilization Fund. The Per Diem Modifier accounts for the fact that some fee-for-service codes are paid for services rendered over an extended period of time (e.g. week, month, or year). The Hours of Work Modifier accounts for differences in the actual hours of work during the daytime weekday period, while the Overhead Modifier adjusts the GDI for expenses that physicians incur for earning their clinical income. The last two modifiers, the Opportunity Cost Modifier and the Skill Acquisition Modifier adjust the GDI for foregone income and the value of additional skills, respectively, because of differences in the length of medical training between OHIP specialties.

The GDI, adjusted for these six modifiers, is known as the Adjusted Net Daily Income (ANDI) and is calculated for each OHIP specialty and for specialists as a group (the target ANDI). The relativity position of each OHIP specialty is then determined as the ratio of the specialty’s ANDI to the target ANDI. The most recent CANDI results, for fiscal 2014/15, are presented in Table 1.

March 22, 2012
Re: Relativity, CANDI, and CANDI Studies - Correspondence from Section on General Surgery to CRIC (CANDI Relativity Implementation Committee)

March 22, 2012.

To: CRIC members
RE: CRIC Draft Report

We have reviewed CRIC’s draft report and offer the following comments.

We are very dismayed that CRIC has rejected practically all of the substantive concerns and recommendations offered not only by our Section but also by many other Sections.  The implementation of hours of work data into the ultimate relativity calculation was always an expectation once data was collected, and the other two changes recommended by CRIC are minor.  We feel strongly that serious issues regarding methodology and data quality have been raised, and are appalled that these have been roundly ignored and dismissed by your committee.

It seems pointless to raise further concerns, but we will mention a few.  The opportunity cost modifier and skills acquisition modifier are important issues for most specialty groups, and the minimum Royal College years of training do not adequately measure true length of training.  The CAPER data the committee refers to does not appropriately incorporate true length of training.  In our Section, many members train well beyond 5 years; most subspecialists, such as colorectal or hepatobiliary surgeons, routinely train an additional 2 years beyond their 5 years of general surgery, and are full members of our Section.  Many others in academic practices obtain even more training.  Furthermore, the committee’s use of the median number of years of training rather than the mean is misleading.  To exaggerate the point, if 51% of physicians train for 5 years and 49% train for 25 years, the median years of training remains 5 years while the mean length of training is closer to 15 years.  Also, the opportunity cost and skills acquisition modifiers do not adequately measure the benefit of accumulating wealth early in one’s career, and the compounding that occurs over time.  These modifiers need to be adjusted.
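The statistical point can be checked in two lines; the only inputs are the 51%/49% split from the Section's own exaggerated example:

```python
# The exaggerated example from the text: 51% of physicians train for 5 years
# and 49% train for 25 years. The middle of the distribution stays at 5
# while the mean is pulled to nearly 15 -- the choice of statistic matters.
import statistics

years = [5] * 51 + [25] * 49   # 100 physicians, per the example
print(f"median years of training: {statistics.median(years)}")
print(f"mean years of training:   {statistics.mean(years)}")
```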

It is clear the committee has now rejected PricewaterhouseCoopers’ assertion that the data in their report is accurate and highly reliable.  The committee has decided not to use the PwC data for income, and our own Section’s submission has objectively demonstrated how flawed the income data is.  However, the committee subsequently rejects the view of most Sections that the low response rate to the survey raises serious doubts about the accuracy of the data, and insists on cherry-picking which data from the report to rely on.  In particular, our Section has serious concerns about the hours of work data.  While general surgeons are felt by most to be among the hardest-working specialists, with among the longest hours of work (including daytime, 7 am to 5 pm, work), the PwC data suggests we work far fewer hours than most of our colleagues.  We are convinced this data is inaccurate and has severely disadvantaged our specialty.  CRIC cannot acknowledge doubts regarding the accuracy of the data in the report and then rely on that same data for such critical calculations.

We would remind CRIC members that PricewaterhouseCoopers were also the auditors for ORNGE, the provincial air ambulance system now mired in financial and operational scandal.  Tens of millions of dollars remain unaccounted for in a forensic audit of its finances.  We cannot understand the blind trust that CRIC members have given PricewaterhouseCoopers and its work, particularly when this company has failed to maintain public trust in its duties with another provincial health care company.

We believe a concerted effort to obtain accurate data on both hours of work and true opportunity cost are necessary and should be actively and urgently pursued.  We believe this can be accomplished in an effective and low cost manner.  We feel strongly that the efforts to address relativity thus far have been inadequate and based on inaccurate data, and that a thorough and reliable approach needs to be pursued.

Sincerely,

Angus Maciver, Chair, Section on General Surgery
Jeff Kolbasnik, Vice Chair, Section on General Surgery
Chris Vinden, Tariff Chair, Section on General Surgery

January 7, 2012
Re: Relativity, CANDI, and CANDI Studies - Correspondence from Section on General Surgery to CRIC (CANDI Relativity Implementation Committee)

January 7, 2012.

From: The OMA Section on General Surgery
RE: Relativity, CANDI, and CANDI Studies

We are writing to provide our input to the current deliberations on relativity, and specifically the methodological basis for the CANDI formula and the data and conclusions derived in “The OMA Study of Income, Overhead and Hours Worked”.  Our Section has significant concerns on how the OMA has handled each of these issues.

CANDI Methodology

1) The CANDI methodology is a pure fee-for-income approach to relativity.  It assumes that the activities of physicians from all specialties and family practice are worth equivalent amounts, after modifying for factors such as overhead and length of training.  In effect, the methodology discounts any notion of complexity and work intensity.  We disagree with this approach.  We believe a fee-for-service approach which recognizes differences in complexity, intensity, and risk for various services, or at least various types of services, is far more appropriate.  The OMA has argued that Sections can allocate higher fees to various types of services in an intrasectional manner.  For example, surgical sections may choose to make operative fees relatively higher valued than office work fees, to account for differences in complexity and work intensity.  However, this creates a situation where the office work of surgical specialists would be paid at a lower level than the office work of others, particularly those who are exclusively office-based.  Taken to its logical conclusion, the current approach would pay surgeons and surgical assistants a similar hourly fee, while obviously the intensity, complexity, and risk are far higher for the surgeon.

2) We agree that non-fee-for-service payments, such as HOCC and capitation payments, should be included in the calculation of income.  However, such changes to the calculation of income are not unique to the CANDI methodology, and could be easily adjusted in other methodologies such as RVIC.

3) We do not agree that the training period opportunity cost should be calculated based solely on the minimum years of training by specialty as required by the Royal College.  In many specialties, additional training is expected if not required for many work opportunities.  For example, in general surgery, a number of subspecialties exist, such as colorectal and hepatobiliary surgery, and these routinely require an additional 2 years of fellowship training.  Many surgeons in academic practice undertake not only fellowship training but additional graduate program training.  Even surgeons in community practice often undertake an additional 1 to 2 years of training to be able to secure jobs.  We believe a truly accurate length-of-training modifier can easily be derived by assessing the actual length of training undertaken by physicians in practice.

4) We furthermore believe that a real assessment of opportunity cost should consider the length of career within a specialty rather than merely length of training.  For most surgical specialties, the physical rigors of the job limit career length, and with advancing age, many surgeons are forced to limit their practice to less remunerative activities, such as minor procedures or surgical assisting, or to retire earlier than one would in a specialty that did not have the physical rigors of a surgical career.

5) We have additional concerns with the methodology used in the determination of opportunity cost.  We do not believe a simple comparator to income in family practice is appropriate.  Furthermore, the formula fails to account for interest and inflation.  Using a discount rate of 5%, as in most academic papers assessing opportunity cost, the average opportunity cost of an additional 3 years of training would be 19.5% rather than the 8.1% calculated for the CANDI methodology.  

6) While we are ambivalent regarding CANDI’s omission of income from private sources (eg. cosmetic surgery), we do not believe the overhead calculations appropriately account for the overhead related to these services.  For physicians providing both public-funded and privately-funded services, a substantial portion of the overhead may be related to the latter, yet in the CANDI formula and data, it may all be apportioned to the public side.

7) We again note our concern that over the past few years, relativity funds have been apportioned purely based on daytime income, without adjustment for hours of work.  This approach has specifically disadvantaged physicians who work the longest hours.

8) We do not agree with the exclusion of after-hours income and after-hours work data from relativity calculations.  Such after-hours work is a well-recognized component of medical practice.  It is integral for emergency medicine and for major services providing on-call work.  For some Sections, it is a major component of income.  Some Sections, such as anesthesia and obstetrics, often adjust clinical practice such that physicians do not work the day post-call, to account for the substantial component of clinical work delivered after-hours.  We believe this income should be included in income calculations, with an accompanying adjustment for hours worked and a modifier to account for those hours being worked at “unsociable hours”.

9) The current CANDI methodology, and the subsequent studies, do not adequately account for work done after-hours but associated with daytime income.  For example, many physicians provide after-hours MRP care to hospitalized patients, yet do not earn any extra income for this work.  Many others perform work such as dictating clinic notes after-hours, even though this is an integral part of the service for which income is generated and considered daytime income.  These issues are most acute for surgical specialists; given the constraints of providing much of our hospital-based work during regular daytime hours, surgeons frequently carry out activities such as dictating notes or seeing in-patients after-hours, without any accompanying recognition of these work hours in the CANDI methodology or in The Study.  The three authors of this response estimate that we each provide approximately 1-2 hours of work per day that is not recognized as work hours in this methodology.  We believe the CANDI methodology and its application are specifically biased against surgical practice.

OMA Approach to Obtaining Data on Income, Overhead, and Hours Worked

The Section on General Surgery has participated actively in the OMA’s activities and deliberations regarding relativity over the past few years, both at a Section level and through our involvement with the Surgical Assembly.  We met a number of times with the RVIC Methodology Review Working Group.  We expressed our concerns regarding the CANDI methodology.  We considered, and agreed with, the 3 main criticisms the Working Group has heard from Sections regarding the RVIC methodology and data, those being that:

1) The data is self-reported and not verified as accurate.
2) The data is now old.
3) Many income sources were not included.

We completely agreed that the data needed to be updated and more current, and that a broader range of income sources needed to be included in the calculation.  Please note that these changes did not require developing a new methodology such as CANDI; RVIC could have been adjusted to address them.

Most significantly, though, we agreed that the data cannot be self-reported, and should be objectively obtained in a manner to ensure accuracy.  We were assured by the Working Group that such objective data collection would be done.  The Working Group postulated that such an exercise would likely cost several million dollars, and would involve real-time observations and audits of physician practices.  There seemed to be broad agreement that while this exercise would be complex and costly, it was a worthwhile investment and absolutely necessary given that hundreds of millions of dollars were being allocated annually based on relativity calculations.

We were thus horrified to learn, just prior to a Council meeting, that the OMA Board had already hired a consulting firm to conduct these studies, with a budget of only $600,000, and that the studies would again be conducted as a self-reported survey rather than objective observations and data collection.  Debate at Council to reconsider the data collection process was ruled out of order, as the OMA had already signed a binding contract with PricewaterhouseCoopers which stipulated the nature of the data collection.  We were thus very upset that such a major change in study methodology was approved by the OMA Board without additional input from Sections.

It is also worthwhile to note that the other consulting firms which expressed an interest in the project ultimately advised the OMA Board that it was not possible to obtain accurate, valid, and objective data on income, overhead, and hours of work for the limited budget which the OMA assigned to this project.

Furthermore, the OMA has refused to share the nature of the commitments and obligations agreed to by PricewaterhouseCoopers in its contract for carrying out these studies, despite assurances from OMA legal counsel that the contract is not confidential and can be shared with physicians and even the public at large.  It is important to know whether the OMA Board set performance expectations for PwC, and whether those performance obligations were met.  We would consider the OMA Board to have been negligent if such obligations were not set in the contract, and we would consider PricewaterhouseCoopers to not have fulfilled the contract if such obligations were set and not met.  Most significantly, it is important to know whether PwC and the OMA Board agreed to a minimum physician participation rate in the study in order to ensure accuracy and validity, and whether this participation target was met.  We would again suggest that the OMA Board would have been negligent if it agreed that the extremely low participation rates achieved in the studies were adequate to ensure accuracy. 

We also note that the OMA Board and the CANDI Relativity Implementation Committee (CRIC) have been excessively protective of PwC, refusing to force them to specifically account for statements and assurances of data accuracy, reliability, and generalizability.  We have been refused requests to have a PwC representative attend Assembly meetings to discuss these studies, despite the fact that CRIC members and OMA Economics staff have not been able to answer specific questions related to these studies.  We have even had the CRIC Chair blame physicians in general for any inaccuracies in the studies, rather than PwC or the OMA Board.  It is her opinion that lack of physician participation in the studies has nothing to do with the ambiguities and complexities in the survey and subsequent site visits, despite warnings from our own Section and others that the PwC approach would not work.  If indeed the payment to PwC depended on achieving performance targets, we should be even more skeptical of their repeated and unsubstantiated comments that the data in their report is of “high quality”.  We find it remarkable that the OMA Board has allowed PricewaterhouseCoopers to rate the quality of its own work, particularly when its payment may be dependent on such quality, without an independent audit and assessment of its work.

It is also noteworthy that the OMA Board refused to give Sections access to the raw data used by PwC to make their calculations and reach their conclusions.  Many of us believe this data, and its interpretation, is inaccurate, yet the lack of access to such data limits our ability to present these concerns.  Even the Chair of OMA Council commented to one of us that surely the OMA Board would have to give Sections access to such data in order to allow meaningful concerns to be raised.  The Board’s interpretation that such access would compromise assurances of confidentiality for study participants is absurd.  Physician identifying data would certainly be removed from the data presented.  Indeed, the Board’s assertion would mean that the Board itself breached such confidentiality when it presented the report, which included data from 7 Sections where only a single physician’s practice was assessed.  We are concerned that the OMA Board wishes to limit access and transparency in order to limit criticism of its own work.

We would also note that the studies contemplated two years ago were meant to provide definitive data on which to base relativity allocation decisions.  Even the OMA Board is now backing away from endorsing the PwC data, calling the studies “another piece of the puzzle” and some “additional data” to consider.  Several members of CRIC and the Board have commented that some of the data in the report doesn’t pass the “sniff test”.

The Studies and the Final Report

We again reiterate our grave concern that these studies were carried out as a self-reported survey, rather than as direct observational studies.  The surveys themselves were tremendously flawed and ambiguous, and did not capture the true nature of medical practice.  There are many models of direct observational study that could have been pursued, many at far lower cost than that of paying a high-priced consulting firm to carry out inadequate work.  The term “validation” for the subsequent site visits is a misnomer, as the only validation activity carried out during these visits was confirming that the physician provided the answers reported on his or her survey.  No actual validation of hours worked was carried out.  The only objective income data element obtained was the gross income earned, without an understanding of how much of it was due to publicly-funded services, and how much may have been earned after-hours.  The only objective overhead expense data element obtained was total expenses reported on tax returns, again with no understanding of how much of this overhead may not be related to publicly-funded services.  We thus cannot have confidence in the accuracy or validity of the data presented.

The participation and sample size of physicians surveyed is a major concern.  PwC states that 8.7% of physicians participated.  However, when one reviews Table 15 (p. 75), it is evident that a number of surveys were then excluded, bringing the true participation in the survey portion down to about 7.5%.  Then, when physicians were contacted to arrange site visits, one-third refused the site visits, and another one-third did not even return the call.  Thus, the true participation rate in the study is only about 2.5%.  It is important to note also that this is not a random sample, but rather self-selected participation.  It should be self-evident to all physicians that such a low participation rate would leave the accuracy of the results in doubt.  Furthermore, the data needs to be assessed on a Sectional level, and thus the participation for a number of Sections is far lower than even 2.5%.  PricewaterhouseCoopers continues to insist the participation rate is sufficient to be confident the data is accurate, but we would note that in the Negotiations Survey presented to Physician Leaders at the October 15/16 OMA meetings, our consultant for that survey claimed that the 13% physician participation rate was inadequate to draw any accurate conclusions at a Section level.  Most of us would find the latter assertion far easier to believe.  The suggestion that a 2.5% participation rate is adequate to ensure data accuracy seems absurd.
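The participation arithmetic above can be checked directly (the 7.5% survey figure is our approximation from Table 15):

```python
# Reconstruction of the effective participation rate described above.
survey_rate = 0.075        # ~7.5% after excluded surveys (per Table 15)
site_visit_uptake = 1 / 3  # one-third refused; one-third never replied

effective_rate = survey_rate * site_visit_uptake
print(f"effective participation ≈ {effective_rate:.1%}")  # ≈ 2.5%
```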

We would point out that in the Board’s own presentation to OMA Council in November, 2009, the Board cautioned that “regardless of methodologies used (surveys, audits, focus groups, etc.), results will be based on samples – may have only few dozen observations per Specialty”.  Yet in The Study, only two Sections (Anesthesia and Family Practice) had even two dozen physicians participate.  Twenty-two Sections had fewer than a dozen participants, despite the fact that a number of Sections were even grouped together.  We thus suspect that even the OMA Board never imagined that the study would be based on such low participation and that the study results would be so suspect.

We would further recommend that CRIC and OMA Board members read the article “Users’ Guide to the Surgical Literature: How to Assess a Survey in Surgery”, published in the December, 2011 edition of the Canadian Journal of Surgery.  This guide is valuable not only for surgical studies, but any survey-based studies.  Quoting from the article: “Many investigators consider a response rate of 70% adequate for generalization to the target population, though this may vary according to the purpose and nature of the study.  Some investigators consider a response rate between 60% and 70% (or less than 60% for controversial topics) acceptable.”  Please compare that to the response rates of less than 10% in the PwC studies.

PricewaterhouseCoopers suggests in Appendix C that the site visits were selected to obtain a broad representation of Sections and geographic locations.  However, it is evident that nearly every physician who participated in the surveys had to be contacted to obtain the 390 physicians who ultimately agreed to site visits.  PwC states that “OHIP Specialties were allocated a minimum of 5 visits”, yet 14 of the 32 Sections had fewer than that, including 8 Sections (25% of all Sections) where no physician, or only a single physician, was involved in a site visit.  That leaves the data obtained very suspect in terms of its generalizability to all physicians in those Sections.

We are further concerned that there are areas of the report which seem to be deliberate misrepresentations of the data.  For example, Table 6 (pp. 33-34) compares the income data from the survey to that used in CANDI thus far.  There are many dramatic variances to either the positive side or the negative side, ranging from -26.9% to +69.2%.  PwC concludes that the two sets of data are similar, with a variance of less than 4%.  However, that is a cumulative variance across all the Sections, in which the negative variances cancel out many of the positive ones.  The true average magnitude of the variance, whether positive or negative, is 21.8%.  Thus, there is significant discrepancy even in the basic income data between the PwC data and the objectively-obtained OHIP data used in CANDI.  For our Section, PwC overstates the income by 19.2%.  We are concerned that the PwC assertion is a deliberate misrepresentation, and may be indicative of a pattern of other misrepresentations in their report and presentations.
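The distinction at issue here is between a signed mean, in which positive and negative variances cancel, and a mean of absolute values, which reflects the true spread.  A small illustration with hypothetical variances (not the actual Table 6 figures, which we do not reproduce here) makes the point:

```python
# Hypothetical Section-level variances, in percent.  These are
# illustrative values only, NOT the actual Table 6 data.
variances = [-26.9, 69.2, -15.0, 22.0, -30.0, 12.0, -28.0]

signed_mean = sum(variances) / len(variances)
absolute_mean = sum(abs(v) for v in variances) / len(variances)

print(f"signed mean:   {signed_mean:+.1f}%")   # small: cancellation
print(f"absolute mean: {absolute_mean:.1f}%")  # large: the real spread
```

A near-zero signed mean can thus coexist with enormous Section-by-Section discrepancies, which is precisely our objection to PwC’s “less than 4%” characterization.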

For our own Section’s data, we suspect the hourly overhead amount of $68 is fairly accurate, but we believe the hourly income of $267 is too high and the clinical hours per day of 7.8 is too low.  On the income line, even assuming the 7.8 hours per day is accurate, that would equate to an income of $2,082.60 per day, $10,413.00 per week, and $489,411 per year (for a 47-week work year).  Since this number is supposed to represent only daytime income, and assuming after-hours work adds a further 18% to general surgeons’ income, that would mean a total income of about $577,504.98 per year.  Yet we know from OMA Economics data that the average general surgeon’s billing income is $373,528.56.  This represents a difference of over $200,000!  Additional non-fee-for-service income, such as HOCC and APP income, would account for only a tiny portion of this difference.  Thus, the income calculation for our Section by PwC is bound to be inaccurate.  If the hours of work were accurate (certainly more than 7.8), this income miscalculation by PwC would be even more dramatic.
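Our arithmetic above can be verified step by step:

```python
# Step-by-step check of the income discrepancy described above.
hourly_income = 267.00       # PwC's reported hourly income for our Section
hours_per_day = 7.8          # PwC's reported clinical hours per day
weeks_per_year = 47          # assumed 47-week work year
after_hours_uplift = 0.18    # after-hours work assumed to add ~18%

daily = hourly_income * hours_per_day                     # $2,082.60
weekly = daily * 5                                        # $10,413.00
annual_daytime = weekly * weeks_per_year                  # $489,411.00
annual_total = annual_daytime * (1 + after_hours_uplift)  # ~$577,505

oma_average_billings = 373_528.56    # per OMA Economics data
gap = annual_total - oma_average_billings
print(f"gap ≈ ${gap:,.2f}")          # well over $200,000
```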

In terms of hours of work, the report suggests general surgeons work half an hour less than the rest of our surgical colleagues, and in fact even less than anesthetists and gastroenterologists (two of our closest non-surgical comparators).  We cannot believe that assertion is accurate, based on our and our members’ clinical experiences.  We also note from Table 25 (pp. 90-92) that while the mean hours of work for our Section is 7.8, the median is 8.3.  This difference reinforces our contention that with such small sample sizes, a few non-representative data elements can dramatically skew the results.  We are convinced that general surgeons work more hours per day than what is presented in the report.

We furthermore cannot understand why the data for general surgery and vascular surgery are being pooled.  We were assured that would not occur, and indeed there is no reason why the data cannot be presented and assessed separately.

We are certain the aforementioned concerns represent only a portion of the inaccuracies in the report by PwC.  Their methodology was quite simply too unreliable, and the participation rate too low, for there to be any confidence in their conclusions.

We are disturbed as well by the deliberate effort of PricewaterhouseCoopers to “sell” their report as accurate and reliable.  The very first page of the report is devoted to 3 glowing quotes, attributed to unnamed physicians, praising the studies and the CANDI methodology.  Nowhere do we find the constructive criticisms of the studies, including those expressed at OMA Council, and those indirectly expressed by the 90%+ of physicians who refused to participate in these studies.  We thus do not believe the PwC report is fair, balanced, or objective.  We believe that PwC intentionally overstates the accuracy and reliability of their conclusions.

We are furthermore concerned by the clear and deliberate public misrepresentations of this work by OMA leadership.  Most recently, in the December 13, 2011 issue of “The Medical Post”, OMA President Dr. Stewart Kennedy is quoted misrepresenting the relativity studies as follows: “In the past, physicians were concerned the data were not a good reflection of their hours worked.  So we did a real-time thing, where someone from PwC (PricewaterhouseCoopers) comes to your office and sees how you work.”  Dr. Kennedy knows full well that no real-time study was done, and that PwC did not observe physicians working in their office but rather spoke to them regarding their survey responses.  Dr. Kennedy claims the studies give “good objective data”, which they do not (the data is not objective), and claims “you need the hardest, most accurate data you can achieve, so that’s what we are working on”, despite the fact that the OMA Board has rejected studies which would have obtained hard, accurate data and instead proceeded with a self-reported survey.  There have been previous misrepresentations of the OMA’s current work on relativity, particularly in “The Medical Post”.

Next Steps

Relativity is a real and critical issue for our profession.  Significant money from OMA-Ministry Master Agreements is likely to be devoted to relativity; in the past three years, over a billion dollars has been allocated based on relativity calculations.  Relativity is an issue that impacts not only physician income, but may impact specialty choices, physician distribution, and practice patterns.

It is thus critical that we achieve a fair, accurate, and reliable assessment of relativity.  We believe the data from the current studies is inaccurate and cannot be relied upon.  We believe the OMA should use this opportunity to review the methodology of the CANDI formula and to correct the inadequacies of this approach.  We believe additional elements of relativity need to be examined, and some current elements need to be refined.  We then believe the OMA should devote adequate resources to robust studies of the key elements of relativity, those being income, overhead, and hours worked, in order to obtain accurate and reliable data for use in relativity calculations.  We are hopeful that physicians will be supportive of a meaningful effort to obtain accurate data to ensure fair treatment of all members of our profession.

Respectfully submitted,

Angus Maciver, Chair, Section on General Surgery
Jeff Kolbasnik, Vice Chair, Section on General Surgery
Chris Vinden, Tariff Chair, Section on General Surgery


OMA Section on General Surgery
Submission Regarding Relativity, CANDI, Studies
(Dr. J. Kolbasnik – Jan.7, 2012)

Executive Summary

1) Relativity is a critical issue for our profession.
2) OMA activities on relativity over the past 4 years have failed to advance this issue.
3) The CANDI methodology is flawed in many aspects.
4) The CANDI methodology and subsequent work has failed to address the main flaw of prior relativity data, that being that the data was largely self-reported and not verified.
5) The OMA failed to deliver on earlier commitments to gather data in an objective and verifiable manner, specifically through direct observation of physician work activities.
6) The OMA failed to commit adequate funds and resources to ensure accurate studies and determination of physician income, overhead, and work hours.
7) The OMA Board engaged PricewaterhouseCoopers to carry out studies which it knew or should have known would be inaccurate and unreliable.
8) The OMA Board either failed to negotiate an appropriate contract with PricewaterhouseCoopers, or has failed to enforce its terms, in a manner that would ensure physicians obtained valuable and accurate data from the work PricewaterhouseCoopers was engaged to carry out.
9) The OMA Study of Income, Overhead, and Hours Worked (“The Study”) was carried out as a self-reported survey, without verification of the data submitted.
10) The subsequent Site Visits failed to validate the data submitted.
11) The Study failed to yield an adequate participation rate or sample size to draw meaningful conclusions.  One quarter of all Sections analyzed had data from no more than a single physician.
12) Some of the data in The Study can be specifically and objectively shown to be inaccurate.
13) Much of the other data in The Study does not pass the “sniff test”.
14) PricewaterhouseCoopers has misled OMA Council, or exaggerated to OMA Council, regarding the reliability and accuracy of a number of elements of The Study.
15) The OMA owes physicians a concerted, well-resourced, and reliable effort to deal with relativity issues.  The current effort has failed to advance improvements in relativity, and threatens to create other inequities.

Below is an excerpt from the OMA website which outlines the Relativity history and the efforts of the OMA.

CURRENT: In the latest tentative Physician Services Agreement (July 11, 2016), the OMA and MOHLTC promise that the problems with the relativity issue will be addressed: "To manage the PSB (Physician Services Budget), there will be a modernization of the OHIP Schedule of Benefits (SOB) and other payments, with $100 million in permanent reductions in each of fiscal years 2017-18 and 2019-20, and based on relativity and appropriateness, through the co-management process."

The Post-2015 Review (2015 - )
At the end of the abeyance period, the Council tasked the Economics, Research and Analytics (ERA) department to conduct a technical review of the CANDI methodology.  At its Fall 2015 meeting, the Council further tasked the ERA to conduct a survey of Section leadership on their views of relativity in general and to solicit written input from all Sections on the current CANDI methodology.

The Abeyance Period (2012 - 2015)
At its Spring 2012 meeting, the Council passed the motion “that a review of the CANDI Relativity Methodology be initiated in three years.”

The CANDI Relativity Implementation Committee (2009-2012)
The mandate of this committee was to implement the recommendations from the RVIC Working Group, which included the initiation and oversight of research studies on income, overhead, and hours of work.

The RVIC Methodology Review Working Group (2008-2009)
The mandate of this working group was to review the RVIC methodology.  This working group developed a new methodology known as the Comparison of Average Net Daily Income (CANDI) that was approved by Council at the fall 2009 meeting, and is currently used for relativity purposes.

Source: OMA Relativity (gated); OMA tPSA (gated)
