2013-12-22

Written by: Teresa Chan MD, FRCPC   |   Peer reviewed by: Brent Thoma MD, MA

This is a Counterpoint piece written in response to the KevinMD post by J. Russell Strader, MD, "Why graduate medical education is failing," from December 19, 2013. Read the original post here.

 

The Impetus for my latest Counterpoint

In his piece on the KevinMD blog, Dr. J. Russell Strader implied that because he is interviewing candidates who he believes are "woefully unprepared" to practice independently as cardiologists, it must be because all of graduate medical education is broken.

He goes on to tell an unfortunate anecdote about his experience as a junior cardiology fellow, in which he was under-supervised and oppressed.  In his blog he states: "If I had to call the attending for help, it was a failure.  And the attendings let me know it."  He feels this "House of God"-like culture is the reason for his competence.

However, there is no evidence that would suggest this is the case.

What’s wrong with his conjecture?  Wasn’t it just an opinion piece?

Throughout my medical training, it has been impressed upon me that we must not infer truth from data that is faulty or incomplete. There is a danger in generalizing too much, and the final common step in critical appraisal guides like the JAMA Users' Guides to the Medical Literature is to ensure we are applying the data to the correct population.  Don't even get me started on using anecdotes about therapies or observations.  These days, even the most junior of clinical clerks at McMaster University will look at me quizzically and ask, "What's the evidence for that?"

And yet, sometimes I hear really smart people overgeneralize and catastrophize based on anecdote and conjecture.  These are the same smart people who would instantly rebuke me if I said: "One time, we used blood-letting to cure a fever… So, my plan is to use that therapy."

Opinion pieces are fine and dandy, but they should respect the work that has been done on these topics by thousands of medical educators. Non-medical readers may see opinion pieces like this, written by well-respected members of the medical community, and interpret them as true representations of medical education. For this reason, when we as physicians put our thoughts out for public consumption there is an inherent trust that we must uphold and it is imperative that we discuss how to do this better.

So, indulge me as I engage in some post-publication peer review. For assistance, I solicited comments from my Facebook friends (which I have posted with their consent) and the Twitterverse. I received a resounding response, with many replies and personal messages.

****

Dear KevinMD Editors,

My name is Teresa Chan: I finished residency and passed my final examinations 5.5 months ago, and I am presently an attending physician. Please accept this open letter as a peer review of the piece you posted on December 19, 2013.  It represents the culmination of multiple responses to Dr. Strader's piece.

There are 5 major faults in the arguments presented in this piece about why graduate medical education (GME) is broken.

1) Context is Everything

How are we to judge competence?

The author states:

As we have looked for additional partners for my practice over the years, a common theme has emerged.  The doctors coming out of current GME training programs do not know how to act as a[n] independent physician.  They are unable to decide the right test to order, or the right medicine to prescribe, or what to do in the middle of the night when they are called to the bedside of a patient whose blood pressure is dangerously low, or who can’t breathe.

I have not read any literature to support the general incompetence of all new hires, and none is referenced here. If the author does have a method of assessment that determined that his candidates do not know which "test to order or medicine to prescribe," or what to do at 3 a.m. when a patient is hypotensive or apneic, I am sure the medical education community would welcome such a publication.  Of note, recent advances in this field (e.g. the script concordance test) have been called into question, and we are in need of such metrics.  As no methods are stated in the piece, I suspect it is simply an anecdote without the support of data or the literature, but I would welcome an addition to the current literature if he has a novel methodology for assessing exiting residents.

Standards have changed

While medical education has changed, so have standards.  As Dr. Nicole Swallow states:

@TChanMD what about the safety of the patients when the decision was made w/o oversight by novices in the old model?

— Nicole Swallow (@doc_swallow) December 21, 2013

Patients are not fodder for residents and students to learn from by trial and error.  The Libby Zion case caused massive work-hours reform in GME, but it was in large part about a lack of clinical supervision.  And so, we changed.  We changed because it was in the best interest of our patients to ensure that they received care from a well-supervised team. Life-or-death decisions are learning opportunities, but their high-stakes nature requires that even the most senior of learners be supervised closely, with their answers vetted and the evidence discussed.

As I start my career, I am humble enough to admit that when I am in doubt I will swallow my pride and discuss decisions that have patient-care ramifications.  Several other new attendings with whom I have discussed this have admitted to incorporating such measures into their practice, because patient care is more important than bravado.  I do not think this makes me a less capable doctor – and in fact, I think it is an act of true self-reflection to acknowledge my limitations and overcome them. The more senior members of my group have helped by role-modelling this behaviour.

There is more evidence to consider

So much has changed between 2005 and 2013.  In 2005, the iPhone did not yet exist, YouTube had just launched its first video (April 23, 2005 at 8:27 pm), and Wikipedia was only 4 years old.  Medical knowledge is not static.

As I have blogged about before, each year there is a ridiculous increase in the amount of knowledge we are accruing in the fields of healthcare.  The annual summary report of the Publishers Association states that, for scientific, technical and medical (STM) journals in 2010, there were 2,000 publishers that collectively published 1.4 million peer-reviewed articles per year across 23,000 journals.   In light of this massive influx of data and "evidence," can we be so sure that it is merely work hours and "less harsh" GME training that are changing things? Are graduates perceived as less competent because of their training, or because they have far more to know than when the author trained? Do they still need to know the answer to every question, or do they need to know where to find the answer to every question? Does not knowing everything off the top of their head make them less competent if they can find an answer that is more comprehensive and up-to-date than one that has been memorized?

[Infographic: Understanding the Data Deluge: Comparison of Scale With Physical Objects. Created by Julian Carver of Seradigm, New Zealand (http://www.saradigm.co.nz); used under a Creative Commons license.]

2) Methods

It would be hard to get a large enough sample size from a few years of personally hiring cardiology fellows to infer any conclusions about their general competence, even using a robust research methodology. I cannot imagine that the sample the author has been exposed to represents more than a fraction of the residents from a single specialty in a small number of jurisdictions. From this data, can we truly generalize these statements to all graduate medical education, everywhere?

His blog implies that we can, and I beg to differ.

3) Whither the data?

We are only now at the dawn of the age of Competency Based Medical Education. We have no metrics or measurements on previous generations of doctors on a comparable scale.  Perhaps, because current trainees are more robustly observed, they are better able to reflect on and learn from their experiences, and are actually more savvy about their decisions.  Is Dr. Strader misinterpreting as indecision a nuanced decision-making process that combines the latest literature with exceptional on-the-job teaching?

Examinations

We have always had examination scores to assess competence, but the psychometric properties of these examinations have changed as the state of medical education has improved.  They have generally not assessed the managerial skills that Dr. Strader so ardently argues junior doctors now lack.  Previously, if a fellow 'served his time,' we considered him 'done,' provided he passed his purely knowledge-based examinations. This shift has not been studied, but the Hawthorne effect would suggest that new attendings will actually be better prepared in these areas because attention is now being given to them.

Historical Cohort Comparisons

Dr. Strader's ultimate conclusion is that new doctors today are incompetent relative to the physicians who graduated when he trained. Does the author have any proof of his own competency as a recent grad?  Or did a program director just take the word of his supervising doctors (the same ones who were in bed sleeping while he was working) when they hired him?  Does he have any evidence that all doctors of his generation were instantly able to manage CCUs across the nation?  That they could make life-or-death decisions without batting an eyelash? Or that they never made a silent wish that their fairy-god attending would swoop in and just tell them the answer?  Where is the data to support his claim that newly minted doctors of previous eras were always able to make independent decisions on day one of practice?  And even if they were making the decisions… was it just that they deluded themselves into thinking they made the right ones?

Because, according to legend, they were fellows and residents who never dared to review with their staff, and therefore were never able to learn from them in a timely fashion.  And yet, more recent evidence suggests that timely feedback from attending physicians improves learning and achievement (Cox & Irby NEJM 2007; Ramani & Krackov Med Teach 2012).  Sure, they made independent decisions, but who knows if they were the best ones?  And shouldn't we, as attendings, ensure that our patients get the best care they can get, and that our learners can still learn and feel independent in that scenario?  The deliberate practice model (as described by K. Anders Ericsson), like Donald Schön's reflective practice, requires not just the act of practising, but also reflection and feedback (usually from an expert).

Transition Anecdotes

The author stated that:

“Ideally, the last year of one’s GME should be spent working as a de facto attending, being supervised only nominally.  In this way, the programs can assure that the graduates are able to hop from their training program into practice in a seamless transition… It doesn’t happen that way so much anymore.”

With no other evidence to discuss, we enter the realm of the anecdote. To counter the author's experience with my own, I will state that after my five years of GME training, I felt prepared for practice. My very last residency shift was essentially the same as my very first shift as an attending physician, except that I finally felt the freedom of not having to run around after my staff to tell them what I had already decided.  In Emergency Medicine, we have the privilege of direct supervision and observation on all of our shifts – until that very last day of residency.  Such supervision did not diminish my ability to learn to make critical decisions, and in fact a number of my attendings will attest to my steely resolve as a final-year PGY5 in convincing them of such things as intubations or admissions.

Having increased supervision does not automatically beget a decrease in decision-making, independent thought, or ownership of one's decisions. Most clinical teachers know to ask learners to commit to a decision, and this technique has become a widely disseminated part of the lexicon of teaching programs in recent years (e.g. the one-minute preceptor, which is now even taught to residents in their training programs).

Perhaps my training is different from that of the new hires you are meeting, but I think it is the mark of a great training program that it can encourage autonomous thought (respecting dissent, encouraging discussion) so that learners can develop a sense of ownership over decisions without ever being 'hung out to dry'.

 

4) The Fallacy of Arrogant Perception

‘Arrogant perception’ is a term I learned when I attended my initial teacher training program at OISE/UT (which I completed prior to medical school).  It helps us understand power dynamics and their implications for how the dominant person views the ‘other’ in any relationship. If the dominant party assumes that his/her frame of reference and world view should be imperialistically applied across the board – that is ‘arrogant perception’.

In medicine, this perception is often seen in two ways:

1. between the teacher (dominant) and student (subjugated);

2. between generations of physicians (older = dominant, as they are often employers and managers; younger = lacking agency within medical infrastructures).

This seems to be an example of intergenerational arrogant perception. Although there are only 8 years between when Dr. Strader finished his training and when I finished mine, his blog post cements his point of view within the dominant position, a 'generation' up from me.  He is now in the dominant position of hiring people, and he seems to be applying his own perceptions of competence to his prospective hires and finding them wanting.

Perhaps the author needs to be reminded that not everyone learns best in the way that he did.  Many of my attending colleagues at academic centres (emergency physicians and intensive care physicians) were quite saddened by this blog.

“[T]he critical decisions residents learn to make under the watchful eye of a preceptor are perhaps better for being made well rested: more evidence-based and more careful thought and attention to detail. The key is keeping the patient safe while still allowing the resident to think they are making completely independent decisions. [And when they're ready] to increasingly [add] complex and critically ill patients. It is the skills of the preceptor that determines this learning.” – Dr. M. Welsford

Other colleagues suggested that junior fellows should not be expected to run a cardiology critical care unit, nor be punished for asking opinions about managing life-threatening conditions – otherwise, what would be the point of their fellowship?  If they are failing to ask for help, it is a clear failure of the attending physician, not the resident.  Is it the role of a first-year fellow to know when they are out beyond their limits? Or is it the role of the attending to provide just enough of a push to get them out of their comfort zone and into their Zone of Proximal Development so they grow?

Senior learners also had negative reactions to this post.

“Unfortunately I think going back to “the way it was in my day” is to the detriment of patient care in many instances. I completely agree with Michelle [Welsford] in that preceptors have differing skills/styles to foster independence in learners. The best preceptors I have had were the ones who pushed me to make critical decisions independently but who were supportive, present, and made me feel like they would prevent me from harming a patient.” – Dr. D. Atrie, Chief Resident (PGY-5)

One residency program's Twitter account even chimed in:

@TChanMD independent decision making impt, constructive/specific feedback impt, “harsh” learning environment not necessary

— UC EM Residency (@UCMorningReport) December 21, 2013

In the comments on Dr. Strader's piece, one anonymous commenter states: "A new grad is a new grad. We all needed help right out of the gate."  Similarly, as one person stated in my Twitter feed:

@MedPedsDoctor @TChanMD In end all drs have to deal with occ uncertainty without backup yet have to act decisively with no right answers

— Adr Born (@ClinicalArts) December 21, 2013

Just because the author learned that way does not make it the best way.  In fact, faculty developers suggest that a skilled teacher should have many more tricks in their toolbox, as outlined in the BMJ article by Kaufman in 2003.  There is ample evidence in the medical education and patient safety literature to suggest that the methods described in your blog are not 'best practices' in graduate medical education.

5) A Classic Case of Nostalgialitis Imperfecta

Stella Ng (https://twitter.com/StellaHPE/) felt that this post was:

@TChanMD Reminiscent of millennial learner pieces, which are reminiscent of “when I was your age,…” tales.

— Stella Ng (@StellaHPE) December 21, 2013

Dr. Ken Milne (who is Chief of Staff at a Canadian hospital and also the voice behind SGEM) sent me a note in support of my stance.  In it, he likened the sentiment of this post to a piece of classic comedy: Monty Python's The Four Yorkshiremen.

These comments reminded me of an earlier Twitter-based discussion that I had a few months ago with Eric Holmboe (an adjunct professor at Yale, and Chief Medical Officer at the American Board of Internal Medicine). He told me then that he had a term for the phenomenon of viewing one's training through 'rose-coloured' glasses.  He refers to it as 'Nostalgialitis Imperfecta'.

@TChanMD @BoringEM good for you – really nice done and much needed to address the “nostalgialitis imperfecta” syndrome among folks of my age

— Eric Holmboe (@boedudley) September 8, 2013

The current system has been vetted more fully, with stakeholders (e.g. patients) being included more closely.  We have sought to balance patient welfare with graduated responsibility.  As one of my colleagues pointed out: “…our current system handles graduated responsibility better than the previous, both at providing support for juniors and independence for the seniors.”

Conclusions

This whole Counterpoint boils down to one thing: it's time we got more critical about the blogosphere.  We need to stop posting non-evidence-based conjecture without so much as a shred of corroborating evidence from the robust body of literature that exists in medical education. When anecdotes are used, the lack of evidence should be acknowledged, and dramatic, overreaching conclusions should not be drawn. A blog is not a peer-reviewed journal, but it should take the evidence into account when drawing conclusions. And when doing so, that evidence should be wielded critically, referencing the most robust and credible papers. One need only look at the December 4th edition of JAMA to see that there is great research being done in medical education.  In fact, there is a whole journal published by the ACGME called the Journal of Graduate Medical Education.

My plea to the editors of KevinMD and other blog writers like Dr. Strader: can we stop with the unsupported, non-evidence-based discussions of how 'today's doctors' are lesser than those before?  There is actually plenty of evidence out there suggesting you may be right (especially regarding highly technical specialties, wherein reduced work hours may in fact be decreasing procedural prowess), but we need to be citing it.  Let's have this discussion; it's important.  But to borrow a phrase from celebrity chef Emeril Lagasse: "Let's kick it up a notch!" KevinMD, you can do better.

___________________________

P.S. Thank you to my friends and colleagues who allowed me to quote them from Facebook and Twitter on their responses to this blog.  Whether I attributed your name (with your consent) or not (i.e. you just helped me build my arguments), I appreciate the discussion!

Author information



Teresa Chan

Emergency Physician. Budding Medical Educator.


