Don’t Believe Everything You Think: Two Key Reasons We Cannot Trust Our Own Experiences

By Marjorie Stiegler on November 3, 2014 in Availability Bias, Medical Decision Making, Memory Error

Are our memories true histories, or just stories? Experience is critical to the acquisition of expertise, and yet memories of our experiences are often incomplete and sometimes incorrect. Let’s consider experience, and how the same processes that lead to expertise may also lead to error.

In medicine, experience is largely based on chance: clinicians learn from the patients and ailments that happen to come into the clinic, the ER, or the OR. Each clinical encounter is added to a mental catalog of cases, and the sum of these cases (or at least, what we can recall about them) is our experience. We could expand this idea to include cases that are vicarious, that is, cases that happened to someone else but whose story we heard, whether from someone we know, via the media, or through a formal case-teaching format like Morbidity & Mortality conference or Problem-Based Learning sessions. We could expand it further to include cases that we’ve studied or read about but never experienced. It is generally assumed that substantial experience leads to expertise, although the accumulation of experience is often not comprehensive, deliberate in sequence, or otherwise organized.

Even if we could count on chance (or, increasingly in medical education, simulation) to ensure a full mental catalog of cases, it is worth taking a moment to understand how memory works. Let’s consider two phenomena: availability bias and memory reconstruction errors.


Availability Bias

The readiness with which a case from our catalog comes to mind is often unrelated to the probability of it occurring now or in the future. Indelible memories are usually linked to emotionally charged, often negative, experiences. Memorableness is increased by frequency of encounter, but also by novelty. Therefore, very common things are easy to remember, but so are uncommon “fascinomas” and “zebras”. Memorableness is not the same as probability, and yet, if we’ve ever said to a colleague “I’ve been burned”, or had a colleague say this to us, we know that memorableness drives our practices in a way that is anecdotal and not strictly evidence-based. When memorableness causes us to overestimate the likelihood of an event recurring, or allows a rare event to influence our daily decision behavior, the result is called availability bias. (Importantly, this is not always bad. Many safety measures are rooted in sentinel events that are quite rare, but those safety measures are crucial nonetheless, because their high-stakes outcomes are not acceptable in any number.)
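To make the gap between memorableness and probability concrete, here is a toy sketch (my own illustration, not from any study cited here): three invented case types, each with a true frequency in practice and a “memorableness” weight. If what comes to mind is proportional to frequency times memorableness rather than frequency alone, the rare but dramatic case dominates recall.

```python
# Toy model of availability bias. The case names, frequencies, and
# memorableness weights are invented for illustration only.
cases = {
    "routine":       {"true_prob": 0.90, "memorableness": 1.0},
    "rare_benign":   {"true_prob": 0.08, "memorableness": 2.0},
    "rare_disaster": {"true_prob": 0.02, "memorableness": 40.0},
}

# Assumption: the chance a case type comes to mind is proportional to
# (true frequency) x (memorableness), not to true frequency alone.
weights = {name: c["true_prob"] * c["memorableness"] for name, c in cases.items()}
total = sum(weights.values())
recall_prob = {name: w / total for name, w in weights.items()}

for name, c in cases.items():
    print(f"{name}: true {c['true_prob']:.0%}, recalled {recall_prob[name]:.0%}")
```

Under these made-up numbers, the “rare_disaster” case occurs 2% of the time but accounts for roughly 43% of what comes to mind, while the routine case is correspondingly underweighted, which is exactly the distortion the paragraph above describes.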


Memory Reconstruction Error

Humans do not literally record real-world knowledge, like writing words on the page of a book and later taking that book off a shelf and reading the text. Instead, we store and recollect experiences by their gist, making them shorter and more coherent than they originally were. This validated concept, which informs the bulk of cognitive psychology even today, was first scientifically demonstrated by the famous psychologist Sir Frederic Bartlett in 1932, and introduced into medicine in 1983 by Clancey. Applying this concept to medical experience, experts have large collections of schemas or “illness scripts” that guide recall of general (stereotyped) experiences and assist in predicting what will happen in the near future, thereby guiding decision behavior. Each script allows experts to almost immediately recognize patterns and the presence of all, or nearly all, of the data pieces that fit the script. With increasing experience, particularly for experts diagnosing routine cases, “reasoning through” a case plays only a small role (if any), as “Type I” thinking predominates.

Figure: Type I and Type II thinking in medicine (thinking fast and slow in medicine).

From: Stiegler, Tung. Cognitive processes in anesthesiology decision making. Anesthesiology 2014 Jan;120(1):204-17.

Unlike the words of a complete story in a book, all scripts contain empty “slots” for detail variables (because the simplified, shortened gist is all that is actually encoded in the brain). These slots can be filled with information retrieved from memory, inferred from the context, and sometimes with information that is known now but was not known at the time of the original event. The act of recalling an event activates the filling in of these detail slots, a process called script instantiation. “Memory reconstruction error” describes the process by which detail slots become filled with recall intrusions: inferred or misremembered information that was not actually part of the original sequence of events. That is, some information is “recalled” even though it was not present in the original script. This has been shown to occur in many domains, such as eyewitness reports and courtroom testimony, and also in medicine. Details that are “classic” for a particular script, but actually absent from the case presentation, have been “recalled” or falsely identified by physicians. Knowledge of a disease according to precompiled scripts has led physicians to infer clinical findings that were expected to be present in the patient. Studies documenting these phenomena have appeared in the medical literature for nearly four decades.
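The slot-filling mechanism can be sketched as a small data structure. This is a deliberately crude toy (the script name, slot names, and findings below are all invented for illustration): a script stores a gist plus stereotyped defaults, and recalling a case fills any slot that was never actually encoded with the script’s default, producing a recall intrusion.

```python
# Toy sketch of script instantiation and recall intrusion.
# All names and clinical details are invented for illustration.
ILLNESS_SCRIPT = {
    "gist": "acute appendicitis",
    "slot_defaults": {   # what the stereotyped script *expects* to see
        "fever": True,
        "rlq_pain": True,
        "nausea": True,
    },
}

def instantiate(script, encoded_details):
    """Recall an event: genuinely encoded details take precedence, but any
    empty slot is silently filled from the script's stereotyped defaults.
    Returns the reconstructed memory and the list of intruded slots."""
    recalled = dict(script["slot_defaults"])
    recalled.update(encoded_details)
    intrusions = [slot for slot in recalled if slot not in encoded_details]
    return recalled, intrusions

# In the original case, fever was never observed, so it was never encoded.
encoded = {"rlq_pain": True, "nausea": True}
memory, intrusions = instantiate(ILLNESS_SCRIPT, encoded)
print(memory)      # fever now "recalled" as present, though never encoded
print(intrusions)  # the intruded slot: ["fever"]
```

The key point of the sketch is the last paragraph of the section above: once `instantiate` returns, the reconstructed `memory` carries no marker distinguishing encoded data from defaults, just as real and intruded recollections are indistinguishable to the rememberer.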

Of course, all of this occurs below the surface of awareness.  Humans are not aware of filling in these script slots with inferred data; real data and recall intrusions are indistinguishable.

When we consider how memory works, it becomes apparent that the very same experiences that form an expert’s library of illness scripts, and often lead to immediate and accurate recognition of medical situations, can also lead those experts astray. Memory error may cause us to miss important differences between prior experiences and current situations. Availability bias may cause certain scripts to be overrepresented compared to their real-life probability of occurrence. Scripts lead to semi-automatic thinking, which leads to reduced consideration of alternative options. This “premature closure” of diagnostic consideration, selecting the first and most “obvious” choice, leads to error when the true diagnosis is uncommon or atypical. Clearly, we cannot believe everything we think; we cannot trust that what we remember is reality.


Note: As I wrote about last time, this post is intended to complement my upcoming lecture as faculty for the Stanford School of Medicine course “Medical Education in the New Millennium: Innovation and Digital Disruption.” Next week, we’ll discuss decision support options to reduce medical error based on these influences. If you have a great error-prevention tool to share, please email me or leave it in the comments!
