June 27, 2024
Opportunities for AI in Healthcare - Spencer Dorn (UNC)


Spencer is the Vice Chair of Medicine and the Lead Informatics Physician at the University of North Carolina at Chapel Hill. I am very thankful to him for sharing his insights and thoughts on healthcare with me. We talk about:
- The future of electronic health records
- Opportunities for AI in healthcare
- How to fix physician burnout
- His thoughts on purpose
- Explainability in AI
- Value-based care

Spencer Dorn: https://www.linkedin.com/in/spencerdorn/
Rishad Usmani: https://www.linkedin.com/in/rishadusm...
HealthTech Investors: https://www.healthtechinvestors.com/
1
00:00:00,000 --> 00:00:05,800
But I think we need to break free of our current note writing process, right?
2
00:00:05,800 --> 00:00:12,640
Not only has the interface and basic technology not changed
3
00:00:12,640 --> 00:00:19,080
for decades, as you mentioned, but the process, the note writing process, the process of,
4
00:00:19,080 --> 00:00:24,200
you know, subjective, objective assessment plan, that's been around for many decades.
5
00:00:24,200 --> 00:00:29,960
I think it was Dr. Weed, I think his name was, who created that in the 70s and 80s.
6
00:00:29,960 --> 00:00:38,480
And I think it served its purpose, but you know, rather than using AI
7
00:00:38,480 --> 00:00:44,400
to write the same notes we write now, just faster and with less effort, maybe it's time
8
00:00:44,400 --> 00:00:49,680
to rethink how we capture information and do we need to write notes or can we just capture
9
00:00:49,680 --> 00:00:55,840
discrete packets and then later reassemble them kind of instantly based on what we need
10
00:00:55,840 --> 00:00:57,720
at that point in time.
11
00:00:57,720 --> 00:01:04,020
So I think over time, I think we'll move in that direction of like new paradigms for
12
00:01:04,020 --> 00:01:09,880
capturing information, new paradigms for displaying that information, probably moving
13
00:01:09,880 --> 00:01:17,640
beyond the mouse, the keyboard, the screen to more multimodal types of approaches.
14
00:01:17,640 --> 00:01:21,080
Voice would be a big one.
15
00:01:21,080 --> 00:01:26,240
But yeah, who knows, who knows where we're going with all this.
16
00:01:26,240 --> 00:01:28,680
It's exciting to consider.
17
00:01:28,680 --> 00:01:30,200
Thanks so much for joining me Spencer.
18
00:01:30,200 --> 00:01:32,600
I'm excited to have you here.
19
00:01:32,600 --> 00:01:34,360
Thanks for having me, Rishad.
20
00:01:34,360 --> 00:01:38,240
To get started, tell me a bit about your undergrad career.
21
00:01:38,240 --> 00:01:43,240
You chose political science and then you decided to pivot to medicine. Take me back
22
00:01:43,240 --> 00:01:48,600
to that time and help me understand that decision a bit more.
23
00:01:48,600 --> 00:01:56,200
Well, strangely, when I was six or seven years old, my father sat my twin brother and I down
24
00:01:56,200 --> 00:01:59,320
and said you could be a doctor or a lawyer when you get older.
25
00:01:59,320 --> 00:02:01,400
And my twin brother became a lawyer, I became a doctor.
26
00:02:01,400 --> 00:02:06,320
So I think it was in some ways, destined from the start.
27
00:02:06,320 --> 00:02:09,880
When I entered college, I was only 17 years old.
28
00:02:09,880 --> 00:02:15,280
And I thought I wanted to go into medicine, but I didn't really know, and I thought maybe
29
00:02:15,280 --> 00:02:17,160
I should study other things as well.
30
00:02:17,160 --> 00:02:23,240
So I was interested in political science, government, and decided to do that in case
31
00:02:23,240 --> 00:02:29,280
either medicine didn't work out or maybe to make me a little more well rounded.
32
00:02:29,280 --> 00:02:34,520
But yeah, really, I was kind of pre-med from the get go.
33
00:02:34,520 --> 00:02:39,680
I just wasn't 100% sure and I wanted to learn other things.
34
00:02:39,680 --> 00:02:42,280
Okay.
35
00:02:42,280 --> 00:02:47,520
Let's move forward to you pursuing your Master's in Public Health and Master's in Health
36
00:02:47,520 --> 00:02:48,520
Administration.
37
00:02:48,520 --> 00:02:54,920
Why did you decide to do those degrees and how have they helped you?
38
00:02:54,920 --> 00:02:57,920
I just, I like to learn.
39
00:02:57,920 --> 00:03:00,000
I guess I've been in school.
40
00:03:00,000 --> 00:03:03,000
I'm still attached to a university, of course.
41
00:03:03,000 --> 00:03:06,520
So I guess I've been around academic institutions my whole life.
42
00:03:06,520 --> 00:03:13,200
And the first degree, the MPH, was part of a fellowship I did in digestive disease epidemiology
43
00:03:13,200 --> 00:03:19,120
and it came with the opportunity to pursue an MPH at the Carolina School of Public Health,
44
00:03:19,120 --> 00:03:20,120
which is world class.
45
00:03:20,120 --> 00:03:23,880
And it was all paid for, so I figured why not.
46
00:03:23,880 --> 00:03:30,120
I was interested in developing skills, research skills, and understanding methodology.
47
00:03:30,120 --> 00:03:34,640
The second degree came years later when I was working in healthcare already and I was
48
00:03:34,640 --> 00:03:41,400
developing interest in healthcare leadership, operations, administration, and my boss at
49
00:03:41,400 --> 00:03:46,120
the time again kindly offered to pay for me to get a degree at the same school of public
50
00:03:46,120 --> 00:03:47,120
health.
51
00:03:47,120 --> 00:03:51,320
Didn't have to travel, was able to do it on nights and weekends.
52
00:03:51,320 --> 00:03:53,760
So I jumped at that opportunity as well.
53
00:03:53,760 --> 00:03:58,120
I can't say that those degrees necessarily shape what I do on a day to day basis.
54
00:03:58,120 --> 00:04:03,160
You know, just like medical school, you go to medical school, they teach you a lot, but
55
00:04:03,160 --> 00:04:06,440
who knows what we actually use when we practice.
56
00:04:06,440 --> 00:04:10,280
But I think, at least for me, it exposed me to different ways of
57
00:04:10,280 --> 00:04:17,000
thinking, different people, relationships, communities.
58
00:04:17,000 --> 00:04:22,320
So I'm sure it's kind of influenced some of what I do, although I haven't thought of
59
00:04:22,320 --> 00:04:27,520
how it does on a day to day basis.
60
00:04:27,520 --> 00:04:28,520
Whatever the degrees were,
61
00:04:28,520 --> 00:04:33,120
they were fun degrees to get, so I enjoyed doing it.
62
00:04:33,120 --> 00:04:36,040
What are you excited about when it comes to AI and healthcare?
63
00:04:36,040 --> 00:04:37,680
I know there's a lot of talk about AI
64
00:04:37,680 --> 00:04:44,240
replacing physicians, augmenting physicians, or up-leveling advanced care providers to provide
65
00:04:44,240 --> 00:04:46,840
the same level of care as physicians.
66
00:04:46,840 --> 00:04:51,440
Yeah, this question, this answer is kind of a cop out, but I've been thinking a lot about
67
00:04:51,440 --> 00:04:57,120
AI for the past several years, especially now that everyone's excited about it, having discussions
68
00:04:57,120 --> 00:04:59,000
like this.
69
00:04:59,000 --> 00:05:05,920
And I've come to settle on this: the greatest short-term opportunity of AI is not necessarily intrinsic
70
00:05:05,920 --> 00:05:08,320
to the technology.
71
00:05:08,320 --> 00:05:19,160
It's that it gives us a chance, or an imperative, to look at who we are as physicians, as clinicians,
72
00:05:19,160 --> 00:05:23,840
and what we do and how we could do it better, whether or not doing it better actually involves
73
00:05:23,840 --> 00:05:26,520
AI or any other technology.
74
00:05:26,520 --> 00:05:35,520
So really what excites me most about AI today is the chance to reflect, the chance to consider
75
00:05:35,520 --> 00:05:43,600
how healthcare should be, where physicians and other clinicians and healthcare workers
76
00:05:43,600 --> 00:05:48,720
fit into the equation and how we can map out a better way forward.
77
00:05:48,720 --> 00:05:53,760
Of course, I believe that AI is part of the solution in some instances, but I think even
78
00:05:53,760 --> 00:06:02,800
more fundamentally, it's a chance for us to really do some deep contemplation.
79
00:06:02,800 --> 00:06:07,680
The FDA for quite some time has been okay with AI being a black box when it comes to
80
00:06:07,680 --> 00:06:13,320
radiology specifically, but now it's moving more towards clinical decision-making in terms
81
00:06:13,320 --> 00:06:16,240
of clinical decision-making support tools.
82
00:06:16,240 --> 00:06:21,280
Just curious to hear your thoughts on having a black box in that instance, and then where
83
00:06:21,280 --> 00:06:26,120
do you think liability should lie?
84
00:06:26,120 --> 00:06:30,720
And our state medical boards and our colleges in Canada have been very clear that liability
85
00:06:30,720 --> 00:06:34,200
lies with the physician using the AI.
86
00:06:34,200 --> 00:06:37,560
And I know you talked about this recently on LinkedIn as well, so I'd love to get your
87
00:06:37,560 --> 00:06:40,400
thoughts on that.
88
00:06:40,400 --> 00:06:45,360
Explainability is a key area to consider.
89
00:06:45,360 --> 00:06:49,400
We of course have to preface this by saying we don't understand how we come to many of
90
00:06:49,400 --> 00:06:55,360
the decisions we make when we're working completely independently.
91
00:06:55,360 --> 00:06:58,560
We don't know how many of the medications we prescribe work.
92
00:06:58,560 --> 00:07:04,240
We don't know how we arrive at some of our diagnoses.
93
00:07:04,240 --> 00:07:10,720
So on the one hand, you can argue that the human mind is a bit of a black box as well.
94
00:07:10,720 --> 00:07:17,400
What's different with AI, of course, is that computers are hard to hold accountable.
95
00:07:17,400 --> 00:07:25,520
If you made a diagnosis that was completely wrong and negligent, your board or your hospital
96
00:07:25,520 --> 00:07:29,960
or even the patient could have some recourse.
97
00:07:29,960 --> 00:07:33,280
And even aside from that, I think malpractice concerns are way overblown.
98
00:07:33,280 --> 00:07:39,680
I think we intrinsically feel we want to do what's right for people as physicians, and
99
00:07:39,680 --> 00:07:46,960
we feel that responsibility, that oath we take: first, do no harm.
100
00:07:46,960 --> 00:07:53,120
So while we cannot explain many of the decisions we personally make and we can be a bit of
101
00:07:53,120 --> 00:07:59,320
a black box, on the flip side, at least there's a level of accountability and hopefully alignment
102
00:07:59,320 --> 00:08:03,040
with the best interests of those we're taking care of.
103
00:08:03,040 --> 00:08:09,480
So when we move to AI, some of it, of course, is explainable.
104
00:08:09,480 --> 00:08:12,640
We conflate AI as being everything, right?
105
00:08:12,640 --> 00:08:17,320
But there is a lot of AI that is explainable that's very rules-based, but I think what
106
00:08:17,320 --> 00:08:21,520
you're alluding to are more of the machine learning models, and especially the generative
107
00:08:21,520 --> 00:08:28,520
AI, where it's detecting some patterns, creating some patterns, and we don't know how it got there.
108
00:08:28,520 --> 00:08:34,040
And I think that is a key challenge for practicing medicine.
109
00:08:34,040 --> 00:08:37,120
Can we just accept the black box or not?
110
00:08:37,120 --> 00:08:42,520
What degree of confidence do we have in the output?
111
00:08:42,520 --> 00:08:48,200
And I do think there needs to be regulation because individual clinicians lack the expertise
112
00:08:48,200 --> 00:08:50,640
to evaluate it, or the sample size, right?
113
00:08:50,640 --> 00:08:56,480
Maybe you're personally using it in your practice, working in a nursing home, and maybe it
114
00:08:56,480 --> 00:09:00,920
spit out three correct differential diagnoses in a row, but that's only three, right?
115
00:09:00,920 --> 00:09:03,200
You're bound to butt up against some limits.
116
00:09:03,200 --> 00:09:07,440
So I think explainability is a key challenge.
117
00:09:07,440 --> 00:09:08,800
I'm not sure what the answer is.
118
00:09:08,800 --> 00:09:14,560
I'm not a regulatory expert, but I think there needs to be some degree of regulation to ensure
119
00:09:14,560 --> 00:09:22,040
that the tools that we're using, we can trust, especially because we can't necessarily figure
120
00:09:22,040 --> 00:09:26,120
out how they arrive at their answers.
121
00:09:26,120 --> 00:09:32,880
Our human nature pulls us towards wanting a deterministic model of healthcare, specifically
122
00:09:32,880 --> 00:09:39,000
when we're faced with diagnoses which have a bigger impact on our health, like cancer,
123
00:09:39,000 --> 00:09:41,160
we want more surety.
124
00:09:41,160 --> 00:09:42,640
Medicine just doesn't work like that.
125
00:09:42,640 --> 00:09:47,880
It works on probabilities, and AI, especially generative AI, is in line with the probabilistic
126
00:09:47,880 --> 00:09:49,840
models of medicine.
127
00:09:49,840 --> 00:09:54,520
The big gap here is patients don't know this, and we don't share this data with patients
128
00:09:54,520 --> 00:10:01,720
because we think it will increase patient anxiety, and in practice, anecdotally, it does.
129
00:10:01,720 --> 00:10:06,940
Do you think a better model of medicine is with the probabilities and the confidence
130
00:10:06,940 --> 00:10:10,240
intervals to an extent, or maybe just the probabilities because it's easier for patients
131
00:10:10,240 --> 00:10:12,520
to understand, are shared openly?
132
00:10:12,520 --> 00:10:16,680
For example, if you come in with a sore throat and we do a swab, if it's positive, there's
133
00:10:16,680 --> 00:10:19,320
a 95% chance you have strep; one in 20,
134
00:10:19,320 --> 00:10:20,320
you don't.
135
00:10:20,320 --> 00:10:23,840
If we don't treat it, there's a one in 10,000 chance it'll progress to an abscess; one in
136
00:10:23,840 --> 00:10:25,760
a million, you'll die if we treat it.
137
00:10:25,760 --> 00:10:30,080
Three in a million chance you'll get rheumatic heart disease.
138
00:10:30,080 --> 00:10:38,000
Do you think this completely transparent way of medicine is just too much, or do you think
139
00:10:38,000 --> 00:10:42,320
this is where AI can really come in and spit out these probabilities?
140
00:10:42,320 --> 00:10:47,800
And really we can move towards patient-centered medicine where the eventual goal would be,
141
00:10:47,800 --> 00:10:51,440
you know, in 500 years the patient is their own doctor.
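For the technically curious, here is a minimal Python sketch of what "spitting out these probabilities" might look like, rephrasing the illustrative numbers from the question above as natural frequencies. The figures and labels come from the conversation, not clinical data, and the function name is invented for illustration.

def as_natural_frequency(p: float, per: int = 1_000_000) -> str:
    """Express a probability as 'about N in 1,000,000'."""
    return f"about {round(p * per):,} in {per:,}"

# Illustrative numbers quoted in the question above, not clinical guidance.
risks = {
    "strep throat, given a positive swab": 0.95,
    "progression to abscess if untreated": 1 / 10_000,
    "death if treated": 1 / 1_000_000,
    "rheumatic heart disease": 3 / 1_000_000,
}

for outcome, p in risks.items():
    print(f"{outcome}: {as_natural_frequency(p)}")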
142
00:10:51,440 --> 00:10:54,480
It's hard to think about 500 years from now.
143
00:10:54,480 --> 00:11:01,160
I think that medicine is much less certain than people think it is.
144
00:11:01,160 --> 00:11:07,800
And we think, or at least the general public often thinks there's one right answer for
145
00:11:07,800 --> 00:11:10,360
each problem.
146
00:11:10,360 --> 00:11:14,640
And the strep throat example is a good one because for strep throat, actually, we kind of
147
00:11:14,640 --> 00:11:17,960
have pretty good information, right?
148
00:11:17,960 --> 00:11:19,120
Swab the patient.
149
00:11:19,120 --> 00:11:21,000
Is it positive or is it negative?
150
00:11:21,000 --> 00:11:24,960
If it's positive, maybe you have to worry about colonization in young children, but for the
151
00:11:24,960 --> 00:11:27,680
most part you can say, okay, this is strep throat.
152
00:11:27,680 --> 00:11:33,000
You have these symptoms, you have a positive test, and then there's loads of evidence in
153
00:11:33,000 --> 00:11:36,640
terms of the best treatments for patients.
154
00:11:36,640 --> 00:11:42,360
And as you mentioned, the purpose of treatment, both to resolve symptoms and to prevent complications.
155
00:11:42,360 --> 00:11:46,720
But most things in medicine are not as straightforward as strep throat.
156
00:11:46,720 --> 00:11:54,360
You think of things, you know, you mentioned cancer, cancer is not one thing.
157
00:11:54,360 --> 00:12:02,600
You know, even people with the same type of cancer, the same stage of cancer, there may
158
00:12:02,600 --> 00:12:09,280
be different diseases, different biology, certainly different preferences, different
159
00:12:09,280 --> 00:12:12,200
goals of care.
160
00:12:12,200 --> 00:12:18,400
So it's hard to, on the one hand, I think we must admit that medicine is much less certain
161
00:12:18,400 --> 00:12:20,440
than people realize.
162
00:12:20,440 --> 00:12:27,200
On the flip side, I think it's hard to have those conversations because it assumes that
163
00:12:27,200 --> 00:12:30,840
A, the patient we're working with wants to hear that.
164
00:12:30,840 --> 00:12:39,000
B, they have the interest, time, and capacity to make sense of the information that we present.
165
00:12:39,000 --> 00:12:45,000
So I think it's interesting what you suggest, that perhaps we may intersperse that and
166
00:12:45,000 --> 00:12:51,000
help people understand how to listen, how to weigh risks, and how to use that to help
167
00:12:51,000 --> 00:12:52,000
people make decisions.
168
00:12:52,000 --> 00:12:57,000
And that's something that I've long thought of as a problem in the US.
169
00:12:57,000 --> 00:13:00,000
So, have you ever been to a financial planner?
170
00:13:00,000 --> 00:13:08,000
So, in the old days, you went to a financial planner, they picked stocks for you, right?
171
00:13:08,000 --> 00:13:13,000
Like, it wasn't really that scientific, they picked stocks, they told you how much you
172
00:13:13,000 --> 00:13:17,000
should save, maybe somehow diversify your portfolio.
173
00:13:17,000 --> 00:13:24,000
Now, if you go to many of the larger financial planners, they'll run probabilistic models.
174
00:13:24,000 --> 00:13:29,000
They'll figure out what your risk tolerance is, what your goals are, they'll
175
00:13:29,000 --> 00:13:34,000
try to quantify that using questionnaires, and then they can run models that show if
176
00:13:34,000 --> 00:13:42,000
you went with this level of aggressiveness, these are the potential outcomes in 10, 20, 30 years.
177
00:13:42,000 --> 00:13:46,000
If you went with this strategy, these are the potential outcomes.
178
00:13:46,000 --> 00:13:53,000
So I think that's a potential way that we can use AI to better explain decisions, or the
179
00:13:53,000 --> 00:13:56,000
implications of decisions and help guide people.
180
00:13:56,000 --> 00:14:01,000
But as you know, some people are just, I trust you, Doc, do what you think.
181
00:14:01,000 --> 00:14:03,000
And other people would want that.
182
00:14:03,000 --> 00:14:06,000
So, yeah, I do think there's an opportunity there.
183
00:14:06,000 --> 00:14:13,000
And I think it'll take time to figure out how we harness these tools, of course, but I do think there's some possibility.
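As an aside for technical listeners, the probabilistic planning described above can be sketched as a toy Monte Carlo simulation in Python. This is a minimal sketch, assuming normally distributed annual returns; the return and volatility figures are invented for illustration, and the same pattern could be used to show patients distributions of outcomes under different treatment strategies rather than investment strategies.

import random

def simulate_growth(mean_return: float, volatility: float,
                    years: int, trials: int = 10_000) -> list[float]:
    """Monte Carlo growth of $1, assuming normally distributed annual returns."""
    outcomes = []
    for _ in range(trials):
        value = 1.0
        for _ in range(years):
            value *= 1.0 + random.gauss(mean_return, volatility)
        outcomes.append(value)
    return sorted(outcomes)

# Two hypothetical strategies; the return/volatility figures are invented.
for name, mu, sigma in [("aggressive", 0.08, 0.18), ("conservative", 0.04, 0.07)]:
    for years in (10, 20, 30):
        results = simulate_growth(mu, sigma, years)
        median = results[len(results) // 2]
        tenth = results[len(results) // 10]
        print(f"{name}, {years} years: median x{median:.1f}, 10th percentile x{tenth:.1f}")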
184
00:14:13,000 --> 00:14:16,000
Let's move on to technology.
185
00:14:16,000 --> 00:14:20,000
The way we interact with technology really hasn't changed in about 30 years.
186
00:14:20,000 --> 00:14:25,000
We still use a screen, we still use a mouse, we still use a keyboard.
187
00:14:25,000 --> 00:14:34,000
I think the future of the EHR is more like in Minority Report, where our interaction with the patient is preserved.
188
00:14:34,000 --> 00:14:38,000
And we're not distracted by a mouse or a screen or a keyboard.
189
00:14:38,000 --> 00:14:40,000
It's something more natural.
190
00:14:40,000 --> 00:14:42,000
How do you think of the future of EHRs?
191
00:14:42,000 --> 00:14:45,000
How do you think of the UI/UX in that case?
192
00:14:45,000 --> 00:14:53,000
And what are two things you would change about the EHR if you could?
193
00:14:53,000 --> 00:15:00,000
You know, all those questions need a follow-up question, which is: what is the time horizon, right?
194
00:15:00,000 --> 00:15:16,000
I think at least in the US, health systems have invested billions; individual health systems have invested hundreds of millions, but collectively many billions of dollars, in adopting enterprise-wide electronic health records.
195
00:15:16,000 --> 00:15:19,000
And physician practices have as well.
196
00:15:19,000 --> 00:15:31,000
And if you've been part of that change, you realize how difficult it is and how unlikely it is that they'll want to go through that exercise anytime soon.
197
00:15:31,000 --> 00:15:39,000
Especially because so much of how health systems and practices run now is really centered around that technology.
198
00:15:39,000 --> 00:15:48,000
So I think over the short term, we're very unlikely to see moves towards newer electronic health records.
199
00:15:48,000 --> 00:15:58,000
I think we're much more likely to see layering tools on top of the EHR that can help with some of the challenges you mentioned.
200
00:15:58,000 --> 00:16:14,000
For instance, I'm sure your listeners know this: probably the widest clinical use of AI in healthcare right now, at least newer AI, not kind of rules-based expert systems, etc.,
201
00:16:14,000 --> 00:16:19,000
is AI scribing, ambient intelligence for clinical documentation, right?
202
00:16:19,000 --> 00:16:35,000
So liberating physicians from the keyboard, from the mouse, from the screen, so they can just speak to patients, have a conversation, and voila, the note appears in their inbox for them to edit and finalize.
203
00:16:35,000 --> 00:16:47,000
So that's an example, I think, of how we will layer technology on top of the EHR rather than kind of scrap the existing EHRs over the short run.
204
00:16:47,000 --> 00:16:52,000
Another example, I think a good one, would be summarization tools, right?
205
00:16:52,000 --> 00:17:14,000
Not only are we overloaded with information in healthcare and in the EHRs, but the information is scattered across many different locations. So trying to figure out what went on when is often very difficult, takes a lot of time, and is sometimes riddled with errors and limitations.
206
00:17:14,000 --> 00:17:29,000
So again, layering on a summarization tool, an AI tool that can summarize the existing EHR data on a specific patient or on populations of patients, can help kind of overcome some of those challenges.
207
00:17:29,000 --> 00:17:38,000
So I think over the short term, that's what we're more likely to see. We're more likely to see some incremental benefits by layering AI on top.
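To make the idea of layering a summarization tool on top of the EHR concrete, here is a minimal sketch. Everything in it is hypothetical: the Note structure and the summarize_chart function are stand-ins, and the chronological-ordering heuristic is a toy placeholder for the language model call a real product would make.

from dataclasses import dataclass
from datetime import date

@dataclass
class Note:
    written: date
    author: str
    text: str

def summarize_chart(notes: list[Note], last_n: int = 3) -> str:
    """Toy summarizer: order scattered notes chronologically and keep the most
    recent entries. A real product would call a language model here instead."""
    recent = sorted(notes, key=lambda n: n.written)[-last_n:]
    return "\n".join(f"{n.written.isoformat()} ({n.author}): {n.text}" for n in recent)

chart = [
    Note(date(2024, 1, 5), "GI clinic", "Abdominal pain improving on PPI."),
    Note(date(2023, 11, 2), "ED", "Presented with epigastric pain."),
    Note(date(2024, 3, 9), "Primary care", "Stable; continue current plan."),
]
print(summarize_chart(chart))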
208
00:17:38,000 --> 00:17:43,000
I think the longer term question is interesting and obviously purely speculation.
209
00:17:43,000 --> 00:18:07,000
But I think we need to break free of our current note writing process, right? Not only has the interface and basic technology not changed for decades, as you mentioned, but the note writing process, the process of, you know, subjective, objective, assessment, plan, that's been around for many decades.
210
00:18:07,000 --> 00:18:15,000
I think it was Dr. Weed, I think his name was, who created that in the 70s and 80s.
211
00:18:15,000 --> 00:18:27,000
And I think it served its purpose. But you know, rather than using AI to write the same notes we write now, just faster and with less effort,
212
00:18:27,000 --> 00:18:41,000
maybe it's time to rethink how we capture information: do we need to write notes, or can we just capture discrete packets and then later reassemble them kind of instantly, based on what we need at that point in time?
213
00:18:41,000 --> 00:19:00,000
So over time, I think we'll move in that direction of like new paradigms for capturing information, new paradigms for displaying that information, probably moving beyond the mouse, the keyboard, the screen to more multimodal types of approaches.
214
00:19:00,000 --> 00:19:04,000
Voice would be a big one.
215
00:19:04,000 --> 00:19:12,000
But yeah, who knows where we're going with all this. It's exciting to consider.
216
00:19:12,000 --> 00:19:25,000
I cannot agree more with you. Even in my almost 10 years of clinical practice, I've seen a shift from where notes used to outline our clinical thinking, for colleagues to follow along.
217
00:19:25,000 --> 00:19:36,000
They have shifted to be more of a tool for increasing billing and reducing liability, where we see these notes which are pages and pages long with just repeated information.
218
00:19:36,000 --> 00:19:54,000
Do you think our clinical encounters and notes need to sit somewhere that is removed from billing and liability, to move forward into a recording process that actually captures our clinical decision making?
219
00:19:54,000 --> 00:20:12,000
Well, a few reflections. One is the note writing process. Yes, it serves a purpose for capturing information so your colleagues can read what you did, and for billing purposes. But, in my opinion, and this is something I often talk about:
220
00:20:12,000 --> 00:20:26,000
The note writing process is very valuable for the physician. It's an opportunity for us to synthesize information, clarify our thoughts and explicitly state what we think is going on.
221
00:20:26,000 --> 00:20:37,000
I was in medical school in the late 90s, and did residency and fellowship in the early 2000s. So I practiced for a little while on paper.
222
00:20:37,000 --> 00:20:51,000
And we shouldn't pretend like those days were perfect, because there were challenges, most of all getting information, right? Finding information meant walking to medical records or getting faxes; now everything is instantly available.
223
00:20:51,000 --> 00:21:02,000
But physicians were much more selective with what they put in their note because they had to write it by hand. You couldn't just pull everything in with, you know, one keystroke.
224
00:21:02,000 --> 00:21:18,000
So the notes were harder to read and harder to access, but I think they probably were higher in quality. It's like that apocryphal Mark Twain quote, that he would have written a shorter letter if he had more time. I think people put more energy and effort into the notes.
225
00:21:18,000 --> 00:21:21,000
And it was really, you know, it reflected clear thinking.
226
00:21:21,000 --> 00:21:30,000
So I think that's one thing we should keep in mind with notes: the note actually serves a purpose beyond billing.
227
00:21:30,000 --> 00:21:37,000
Clearly communicating with your colleagues and with your future self, but also, it's part of the act of thinking.
228
00:21:37,000 --> 00:21:45,000
So I would love for note writing to focus more on that and to be freed from some of the extraneous uses that we put it to.
229
00:21:45,000 --> 00:22:02,000
In the US, the E&M billing guidelines actually changed a few years ago. In the old days, there was a menu of requirements in order to bill different levels of service.
230
00:22:02,000 --> 00:22:18,000
Okay, so if you saw a patient in clinic, you had to, you know, state their history of present illness, maybe their past medical history, their family history, a 10-point review of systems, document a certain level of physical examination, a certain
231
00:22:18,000 --> 00:22:26,000
level of medical decision making. You added all that up, and that determined what you were able to bill, whether it was a level five consult, a level four, etc.
232
00:22:26,000 --> 00:22:44,000
Well, a few years ago, the government changed those regulations and started allowing physicians to bill based on time, based on the time they spent with the patient, including the time before and after the visit on the same day, so pre-charting,
233
00:22:44,000 --> 00:22:48,000
reviewing images, etc, etc.
234
00:22:48,000 --> 00:23:06,000
So we actually now have the opportunity to break away from that old system. But what's ironic is that, if you look at the data Epic published last summer on this, our notes are actually longer since those new regulations went into place
235
00:23:06,000 --> 00:23:19,000
than they were before. So I think it's easy for us to complain, oh, insurers, oh, the government, they want us to do all this, when in fact often the challenge is that we need to change our behavior.
236
00:23:19,000 --> 00:23:35,000
Right. So I would love to see note writing go back more towards its purpose as an exercise in thinking and clear communication, limiting the amount that we're writing but, you know,
237
00:23:35,000 --> 00:23:39,000
explaining what we think is going on. And then, in the future,
238
00:23:39,000 --> 00:23:51,000
I think there's a great opportunity for software to summarize different packets of information to meet, you know, the need at that specific time.
239
00:23:51,000 --> 00:23:56,000
I don't know if I answered your question or went off on too long of a tangent but let me know.
240
00:23:56,000 --> 00:24:06,000
I'm going to share a phrase from Charlie Munger: show me the incentives and I'll show you the behavior. If you had a magic wand, and you could change the billing, you could change the regulations.
241
00:24:06,000 --> 00:24:21,000
What would you change so note writing moves towards the type of notes you imagine, which are, you know, not compromising brevity for clarity?
242
00:24:21,000 --> 00:24:40,000
I think it's already happened, as I mentioned. I mean, there still are other aspects, you know, risk scoring; I won't pretend like everything is fixed. But time-based billing, that lets everyone off the hook: I spent 12 minutes with the patient, I spent four minutes before and after the visit.
243
00:24:40,000 --> 00:24:52,000
And that translates into this level of service, and then I just write what I want to write. So that addresses some of it, in part. Clearly, you know, you mentioned incentives, the incentive problem.
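As a rough illustration of the time-based billing Spencer describes, the sketch below maps total same-day minutes to an established-outpatient visit level. The minute ranges approximate the 2021-era CMS E&M time ranges and may not be current; treat them as placeholder assumptions, not billing guidance.

# Illustrative only: approximate 2021-era time ranges for established
# outpatient E&M levels. Verify against current CMS tables before any real use.
LEVEL_BY_MINUTES = [
    (10, 19, "99212"),
    (20, 29, "99213"),
    (30, 39, "99214"),
    (40, 54, "99215"),
]

def em_level(total_minutes: int) -> str | None:
    """Map total same-day time (visit plus pre/post work) to a visit level."""
    for low, high, code in LEVEL_BY_MINUTES:
        if low <= total_minutes <= high:
            return code
    return None

# Spencer's example: 12 minutes with the patient plus 4 minutes before/after.
print(em_level(12 + 4))  # 16 minutes falls in the 99212 range here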
244
00:24:52,000 --> 00:25:02,000
In my mind, the main challenges in healthcare are behavioral, around behavior change and why people change behaviors, as you mentioned with Charlie Munger's quote.
245
00:25:02,000 --> 00:25:14,000
So, you know, there's a lot of difficulty; there needs to be some sort of incentive to change behavior. And we're mostly stuck in a system where the incentives are misaligned between the different parties.
246
00:25:14,000 --> 00:25:31,000
It's cliché of course, but we're still mostly paid based on the volume of things we do more than the value of the services we provide. Of course, it's hard to actually figure out value; that's one key challenge to that movement.
247
00:25:31,000 --> 00:25:43,000
I think trusting or finding other ways to assess the amount of work done, besides the length of the note, is a major positive.
248
00:25:43,000 --> 00:25:54,000
But physicians still need to find ways to change their behavior, get out of their old way of doing things, in order for it to make a difference.
249
00:25:54,000 --> 00:26:02,000
So, your thoughts on the payvider model of healthcare, where the payer also delivers the services, like the Kaiser Permanente model.
250
00:26:02,000 --> 00:26:09,000
Do you think that's a better model, versus where the payer and service provider are separate?
251
00:26:09,000 --> 00:26:13,000
I think it depends is the answer.
252
00:26:13,000 --> 00:26:21,000
I think the Kaiser model is wonderful. I think that the incentives are aligned.
253
00:26:21,000 --> 00:26:27,000
The physicians who work there generally are pretty happy, though they're a self-selected group.
254
00:26:27,000 --> 00:26:32,000
I think the patients in general are happy, not always.
255
00:26:32,000 --> 00:26:41,000
I think it's a nicely aligned system that frees physicians from having to do things like document extraneous information.
256
00:26:41,000 --> 00:26:47,000
Or worry about how they're providing care: I don't want to provide care over a patient portal because I'm not getting paid to do that.
257
00:26:47,000 --> 00:26:52,000
So I think there are many benefits to that sort of model.
258
00:26:52,000 --> 00:26:57,000
Now the flip side is Kaiser has struggled expanding to other areas.
259
00:26:57,000 --> 00:27:12,000
They're largely in California; there are little pockets in Hawaii, Colorado, the Pacific Northwest. They're embarking on this new program, as you probably are aware, to expand to the East Coast through partnerships, with Geisinger
260
00:27:12,000 --> 00:27:17,000
and, I think it was announced just yesterday, Moses Cone Health here in North Carolina.
261
00:27:17,000 --> 00:27:20,000
So we'll see if it can scale and spread.
262
00:27:20,000 --> 00:27:31,000
But there's something unique about Kaiser, in where it developed and when it developed, that's allowed it to stick, and that hasn't been replicated at scale in other places.
263
00:27:31,000 --> 00:27:38,000
But I do think that's a nice model. Personally, I'm a salaried physician; I work for a university.
264
00:27:38,000 --> 00:27:46,000
I tell my patients all the time, I want to do right for you; I'm not getting paid an extra dollar whether you do this test or not.
265
00:27:46,000 --> 00:27:57,000
I have zero incentive to do more than someone needs; I just want to do the right thing. And for me personally,
266
00:27:57,000 --> 00:27:59,000
it's a wonderful way to practice.
267
00:27:59,000 --> 00:28:13,000
But I realize it's a privilege to be able to practice in this type of environment. So yeah, I think ideally there are aligned incentives, where there's alignment between those paying for the care and those providing the care.
268
00:28:13,000 --> 00:28:18,000
I don't know if it needs to be as integrated as Kaiser.
269
00:28:18,000 --> 00:28:24,000
Or the VA is another example here in the US; that goes even farther than Kaiser.
270
00:28:24,000 --> 00:28:28,000
But yeah, I do think aligning incentives is critical.
271
00:28:28,000 --> 00:28:43,000
That's not to say it's the only option; there are plenty of amazing practices nationwide that follow different models. But I do think there's something very attractive to it, at least personally.
272
00:28:43,000 --> 00:28:48,000
If a startup was working on an AI scribe or a summarization tool,
273
00:28:48,000 --> 00:28:52,000
what would your diligence process be for onboarding it?
274
00:28:52,000 --> 00:28:57,000
What would make you pick one startup versus the other?
275
00:28:57,000 --> 00:29:03,000
I think the scribe tools, that's a busy market right now.
276
00:29:03,000 --> 00:29:08,000
I think with those tools they're becoming largely commoditized.
277
00:29:08,000 --> 00:29:22,000
And it's mostly a competition based on cost. That's not to say that there are no features of the products that could differentiate, but it's such a crowded marketplace that
278
00:29:22,000 --> 00:29:27,000
at the bare minimum they'll need to be able to integrate with the large EHRs.
279
00:29:27,000 --> 00:29:34,000
They'll need to have a decent interface, you know, it must be usable.
280
00:29:34,000 --> 00:29:42,000
They'll need to have decent performance on, you know, assessments in terms of the fidelity of the note that's being generated.
281
00:29:42,000 --> 00:29:51,000
But I think ultimately it's going to come down to price, largely, until those scribe companies can move into other areas. You know, right now they're point solutions; they're doing one thing.
282
00:29:51,000 --> 00:29:55,000
And there are a lot of companies that can do that one thing fairly well.
283
00:29:55,000 --> 00:30:04,000
So I think in that case it becomes largely a cost question.
284
00:30:04,000 --> 00:30:15,000
In terms of summarization, there are some companies in that area as well; there are not as many, not as many good ones that I'm familiar with, and I think I have a pretty good understanding of what's happening.
285
00:30:15,000 --> 00:30:27,000
Again, similarly, I think it's important to be able to do more than just one thing. Summarization, I look at as almost more of a core competency than a product.
286
00:30:27,000 --> 00:30:43,000
I'm not necessarily sure it's a product; it's a capability that I think links to so many different things that we do, including note writing, including billing, including risk scoring, including discharge summaries.
287
00:30:43,000 --> 00:30:49,000
I mean, so I look at summarization almost a little bit more as a fundamental technology.
288
00:30:49,000 --> 00:30:57,000
And to me the question is, what are the products that emerge from that? And what's the business case around that?
289
00:30:57,000 --> 00:31:05,000
If you can do a great chart summary, well, okay: who's going to pay for that? And why? And how do you prove the value of that?
290
00:31:05,000 --> 00:31:17,000
So those are some of my thoughts. I'm very excited about summarization; I think it's necessary, based on some of the things we were discussing about EHRs, with information overload and scatter.
291
00:31:17,000 --> 00:31:28,000
But I think there's a little bit more fleshing out to do, to determine what are the actual products here that we're using this technology for.
292
00:31:28,000 --> 00:31:38,000
If one EHR had a summarization tool built in and another didn't, would that move the needle for you to switch EHRs?
293
00:31:38,000 --> 00:31:44,000
Yeah, I mean, as I mentioned earlier, I think few large organizations are switching EHRs anytime soon.
294
00:31:44,000 --> 00:32:00,000
There is tremendous investment that's been put into these EHRs. And there are so many different functions outside of what a physician does that they're used for, that I don't think a summarization tool is enough to get all the physicians showing up with pitchforks
295
00:32:00,000 --> 00:32:10,000
to their IT departments and their CFO and saying we must switch now. And even if they did, I'm not sure that those decision makers would listen.
296
00:32:10,000 --> 00:32:33,000
I do think it's a valuable feature that we increasingly need in EHRs, and I think many EHR companies are working to build those tools themselves into their EHRs. And, you know, the flip side is other companies are working to build applications that can be layered and integrated with the
297
00:32:33,000 --> 00:32:52,000
EHR. So I think we're moving towards a world where either the EHR vendors provide that, and/or there are startups or, you know, growth-stage mature companies that are providing that as a service.
298
00:32:52,000 --> 00:33:08,000
The main issue I have with value-based care is that value is measured by outcomes, and as we know, outcomes are determined by socioeconomic status and race and gender. They're not determined by pharmacological or even behavioral intervention at all times.
299
00:33:08,000 --> 00:33:25,000
Do you think value-based care should move towards a model where value is measured by the process and not the outcome? Adam Grant talks about this: you know, if you incentivize outcomes, people will cheat the system to get those outcomes. If you incentivize someone to lose
300
00:33:25,000 --> 00:33:32,000
weight or hit a strict BMI, they will starve themselves, as opposed to incentivizing them to eat healthy and exercise.
301
00:33:32,000 --> 00:33:47,000
Do you think there's some way we can move towards measuring the process? Because it's easier to measure the outcomes; that's the lazy way to measure value, I feel. I would argue it's easier to measure process than outcomes. If you look at the, you know, at least the
302
00:33:47,000 --> 00:34:07,000
earlier versions of the quality movement, it was all about process measures, right? Patient has this condition: are you vaccinating them? Are you checking these labs periodically? Are you screening them for osteoporosis? Simple yes/no checks. And I think we're still largely stuck with those types of process
303
00:34:07,000 --> 00:34:27,000
measures in medicine. Outcomes are harder to assess, right? It takes time to see an outcome; some outcomes maybe are clearly visible within 30 days, but many are, you know, months to more likely years away.
304
00:34:27,000 --> 00:34:42,000
And that assumes someone's staying in the same health system, that you have longitudinal records, that they're staying with the same health insurer. So I think process measures, at least in my experience here in the US, tend to be relied on.
305
00:34:42,000 --> 00:34:58,000
And then the question is, what does that mean, right? What's the value of all these process measures? Hard to say, hard to say. I used to be a zealot, a quality measurement zealot.
306
00:34:58,000 --> 00:35:12,000
I participated in a lot of measure development rounds with my professional society, and I was just really into it. And then one day it dawned on me that it's really, really hard to measure quality.
307
00:35:12,000 --> 00:35:15,000
You know, it's one of those. I know it when I see it.
308
00:35:15,000 --> 00:35:37,000
It's just hard to quantify. That's not saying we shouldn't try, but it's really hard. So we wind up moving towards select conditions that we can quantify quality for, and then we select measures, usually process measures, and it makes me wonder how good of a job we are actually doing.
309
00:35:37,000 --> 00:35:41,000
So, those are some thoughts.
310
00:35:41,000 --> 00:35:51,000
In your own department, with the physicians you lead, how do you measure their quality? How much do you rely on intuition, and how much do you rely on structure?
311
00:35:51,000 --> 00:35:55,000
Largely intuition, largely professionalism.
312
00:35:55,000 --> 00:36:00,000
Peer review.
313
00:36:00,000 --> 00:36:04,000
Simple process measures like board certification.
314
00:36:04,000 --> 00:36:14,000
So it's more process-measure based, and intuition. There are some kind of more outcome-based ones; in my field, in gastroenterology, we have one.
315
00:36:14,000 --> 00:36:30,000
In my opinion, we have one excellent quality measure: the adenoma detection rate. An adenoma is a precancerous polyp, and the purpose of doing a screening colonoscopy is to find these precancerous polyps and remove them before they could become anything harmful.
316
00:36:30,000 --> 00:36:36,000
And to risk-stratify people, so that if they're likely to grow these things, we're going to see them more often.
317
00:36:36,000 --> 00:36:45,000
And so there's a measure called the adenoma detection rate, which is: of all the screening colonoscopies I do,
318
00:36:45,000 --> 00:36:51,000
what percentage of the time am I finding one of these precancerous polyps?
319
00:36:51,000 --> 00:36:58,000
So that's something we pay a lot of attention to in my practice.
320
00:36:58,000 --> 00:37:07,000
Because we believe it's valid, we believe it's something that we can affect, and we can show the variation across, you know, our 35-plus gastroenterologists.
321
00:37:07,000 --> 00:37:14,000
But measures like that are few and far between, you know, few and far between.
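For readers who want the arithmetic spelled out: the adenoma detection rate is simply the fraction of screening colonoscopies that find at least one adenoma. A minimal sketch follows; the counts are hypothetical, and the 25% benchmark is a commonly cited target included here as an assumption to verify against current guidelines.

def adenoma_detection_rate(screening_exams: int, exams_with_adenoma: int) -> float:
    """ADR: share of screening colonoscopies finding at least one adenoma."""
    if screening_exams == 0:
        raise ValueError("no screening exams recorded")
    return exams_with_adenoma / screening_exams

# Hypothetical quarter for one endoscopist; counts are invented.
adr = adenoma_detection_rate(screening_exams=200, exams_with_adenoma=62)
BENCHMARK = 0.25  # commonly cited minimum target; verify against current guidelines
print(f"ADR = {adr:.0%}; benchmark met: {adr >= BENCHMARK}")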
322
00:37:14,000 --> 00:37:17,000
What are your thoughts on capsule endoscopy?
323
00:37:17,000 --> 00:37:20,000
It's been around for quite some time now.
324
00:37:20,000 --> 00:37:28,000
But now we're seeing somewhat of a resurgence, with startups getting into the space as well.
325
00:37:28,000 --> 00:37:49,000
I'm sorry, my dog's barking in the background. Capsule endoscopy is a useful tool if you use it for the right purpose. You know, the small bowel, despite its name, is quite long, and a small proportion of patients have GI issues that are not related to their upper GI tract or to their colon.
326
00:37:49,000 --> 00:38:10,000
It's possible the lesion is in the small bowel, so capsule endoscopy is a wonderful way of evaluating the small bowel. But it's not necessarily new, and I'm not familiar with some of these innovations that you're describing.
327
00:38:10,000 --> 00:38:15,000
Why do you think physicians are burnt out?
328
00:38:15,000 --> 00:38:29,000
Yeah, that's a great question, and there's not just one answer, right? I think one problem, or one challenge, is that people think all physicians are the same.
329
00:38:29,000 --> 00:38:44,000
You know, that we all act the same. We're so different, not just in how we practice, but in who we are as people, what excites us and what wears us down, what energizes us, what's most important.
330
00:38:44,000 --> 00:38:50,000
So it's hard to come up with blanket statements in terms of this is the specific cause.
331
00:38:50,000 --> 00:38:58,000
I think there are some common themes that do appear if you speak to enough physicians who are experiencing burnout.
332
00:38:58,000 --> 00:39:16,000
One would be just the amount of work. I think clinical medicine is increasingly difficult; our patients are suffering from more complex illnesses, they have more complex lives, social isolation, coexisting mood disorders.
333
00:39:16,000 --> 00:39:39,000
Just various challenges. And we're helping people stay healthier longer, and the consequence of that success is that people are presenting to us with later-stage illnesses, or conditions that previously may have been treated surgically are now being treated medically. And so we have patients who are sicker.
334
00:39:39,000 --> 00:40:02,000
So I would say one core reason is that the work is harder, largely because people have more complex needs. Related to that, I think we've done a poor job of adapting our systems of care and our care teams to support that additional incremental work that physicians must do.
335
00:40:02,000 --> 00:40:10,000
So I think that's a key factor: this kind of more demands without more support.
336
00:40:10,000 --> 00:40:17,000
So I think that would be my leading reason I think there are plenty of others.
337
00:40:17,000 --> 00:40:26,000
The administrative headaches, as everyone's familiar with; the moral injury we talk a lot about.
338
00:40:26,000 --> 00:40:43,000
I think that there's a feeling that perhaps some of our patients don't value us and what we do. And not only that, some are at times openly hostile, even confrontational, even violent.
339
00:40:43,000 --> 00:40:58,000
There's a whole slew of different things, but probably most of all it's the mismatch between the work demands and the support given to do the work.
340
00:40:58,000 --> 00:41:08,000
So from what I'm hearing, there's essentially too much to know and too much work to do, and then we're not valued enough on top of that.
341
00:41:08,000 --> 00:41:14,000
We're not supported enough, right? Too much to do for one physician alone.
342
00:41:14,000 --> 00:41:25,000
Yet that physician is maybe working in an old model where it's just them, or just them and a nurse who rooms the patient, and they're left to do everything else.
343
00:41:25,000 --> 00:41:30,000
What do you think the solution is here?
344
00:41:30,000 --> 00:41:45,000
I don't know what the solution is. I think that we're very fortunate and privileged to do what we do, and how do we return that sense of joy and meaning and purpose to the work?
345
00:41:45,000 --> 00:41:56,000
I think that's ultimately the challenge. There will always be headaches; every job has headaches and frustrations.
346
00:41:56,000 --> 00:42:13,000
But not every job or profession has the same meaning and the same stimulation and the same collegiality that careers in medicine do. So I think
347
00:42:13,000 --> 00:42:27,000
we should work to mitigate some of the daily headaches.
348
00:42:27,000 --> 00:42:41,000
But more than that, I think emphasizing the benefits of medicine, emphasizing the joyful aspects of medicine, connecting physicians with purpose.
349
00:42:41,000 --> 00:42:47,000
I think that's where I would go. Now, how do you do that?
350
00:42:47,000 --> 00:42:49,000
That's a bigger question.
351
00:42:49,000 --> 00:43:00,000
I think one thing, and this relates to burnout as well, one challenge, at least that I've seen over my career, is that physicians feel more disconnected from each other and from the system.
352
00:43:00,000 --> 00:43:13,000
We don't see each other as much, right? We're working in larger practices, larger systems, so we're more dispersed geographically, and our heads are down in our computers or phones.
353
00:43:13,000 --> 00:43:22,000
So we have less time to look up and actually speak to each other, either on the phone, or on a Zoom, or even better, in person, right, in the physicians' lounge.
354
00:43:22,000 --> 00:43:31,000
So, I think community building is really important as a possible antidote to burnout.
355
00:43:31,000 --> 00:43:36,000
Right, we all want to be connected to aspects of life larger than ourselves.
356
00:43:36,000 --> 00:43:42,000
For some it's from the work itself; I think a lot of it is from the organizations we belong to, from our colleagues.
357
00:43:42,000 --> 00:43:48,000
So those would be areas that I would prioritize.
358
00:43:48,000 --> 00:43:55,000
If you could go back in time 10 years and give one piece of advice to your past self, what would you tell him?
359
00:43:55,000 --> 00:44:00,000
Oh, let me think.
360
00:44:00,000 --> 00:44:07,000
I think just self compassion and kindness.
361
00:44:07,000 --> 00:44:18,000
The path, as you know, as anyone knows, is not as linear and straightforward as maybe we imagine.
362
00:44:18,000 --> 00:44:34,000
So I think giving yourself, or I would advise my younger self, just to practice self-compassion and kindness along the way.
363
00:44:34,000 --> 00:44:37,000
I'm better at that as I've gotten older.
364
00:44:37,000 --> 00:44:39,000
That's good to hear Spencer.
365
00:44:39,000 --> 00:44:47,000
Where did yourself of 10 years ago imagine you now, and where do you imagine yourself 10 years from now?
366
00:44:47,000 --> 00:44:51,000
I don't know where I imagined myself 10 years ago.
367
00:44:51,000 --> 00:45:07,000
About 10 years ago I started to take on leadership opportunities. I was still pretty involved in research and really fascinated with kind of the science of medicine.
368
00:45:07,000 --> 00:45:17,000
I'm not so sure I plan things out; I just kind of follow the interests, and I have had good mentorship and opportunities to do different things.
369
00:45:17,000 --> 00:45:30,000
I don't necessarily think I saw myself doing anything dramatically different. I thought I'd be a doctor and help lead our practice and hopefully contribute to the broader conversation.
370
00:45:30,000 --> 00:45:34,000
And likewise, I don't know about 10 years from now.
371
00:45:34,000 --> 00:45:39,000
I don't know if I have a specific destination I'm aiming for.
372
00:45:39,000 --> 00:45:45,000
I'm pretty content with what I'm doing; of course, I hope to continue to grow.
373
00:45:45,000 --> 00:45:50,000
You know, for me, at this point and stage in my career, I really like meeting people.
374
00:45:50,000 --> 00:45:53,000
I like collaborating.
375
00:45:53,000 --> 00:45:56,000
I like learning continually.
376
00:45:56,000 --> 00:45:59,000
I like serving the greater good.
377
00:45:59,000 --> 00:46:07,000
So hopefully doing all that, just in some capacity probably a little different from now, but hopefully not too different.
378
00:46:07,000 --> 00:46:21,000
So many physicians I talk to struggle with purpose. We're put on this treadmill very early on, and we are told: you pass this exam and this exam and this evaluation, and you get into med school, and you get into residency, and maybe you get into fellowship, passing board exams.
379
00:46:21,000 --> 00:46:27,000
Our purpose is almost defined by these hard metrics we're given.
380
00:46:27,000 --> 00:46:37,000
This is a deep question, Spencer. So how do you define purpose now? What is your purpose in life?
381
00:46:37,000 --> 00:46:52,000
My purpose in life is to be the best person I can be, to positively influence others: my family, friends, community, those I work with, and the patients I serve.
382
00:46:52,000 --> 00:47:10,000
So that's the fundamental purpose: making positive contributions, not just on a large scale but hopefully on a small scale, hopefully on an everyday basis, making things a little bit better.
383
00:47:10,000 --> 00:47:21,000
Also, you know, growth is an interest of mine: personal growth, learning, learning new things.
384
00:47:21,000 --> 00:47:23,000
Yeah, I think.
385
00:47:23,000 --> 00:47:33,000
Most of all, I think, meaningful positive contributions, and I guess second would be growth.
386
00:47:33,000 --> 00:47:42,000
You know, experiencing new things, learning new things. I think those would be core areas that I emphasize.
387
00:47:42,000 --> 00:47:48,000
Last question. What's your advice to med students right now?
388
00:47:48,000 --> 00:48:05,000
Yeah, my advice to med students is just to do their best and to pace themselves, because it's a long road. Right, it takes a long time, which is in some ways a challenge but also in some ways an opportunity, because you don't need to have it all figured
389
00:48:05,000 --> 00:48:08,000
out when you're in medical school.
390
00:48:08,000 --> 00:48:18,000
We train for so long that we mature as people as we're training and our ideas of who we are and what we want to do evolve.
391
00:48:18,000 --> 00:48:26,000
So I think surrounding yourself with good people, doing your best, pacing yourself, being open-minded.
392
00:48:26,000 --> 00:48:36,000
I think careers in medicine are wonderful. There are so many paths that physicians can take. And I think just being optimistic about the opportunities and, you know,
393
00:48:36,000 --> 00:48:46,000
making the most of the opportunities because most people don't have the opportunities that we're fortunate to have.
394
00:48:46,000 --> 00:49:01,000
Thank you, Spencer. I hope we can all move towards more compassion for ourselves. And I agree, building communities to connect physicians is probably the most realistic solution to the burnout we're facing.
395
00:49:01,000 --> 00:49:08,000
Yeah.
65
00:04:44,240 --> 00:04:46,840
the same level of care as physicians.
66
00:04:46,840 --> 00:04:51,440
Yeah, this question, this answer is kind of a cop out, but I've been thinking a lot about
67
00:04:51,440 --> 00:04:57,120
AI for the past several years, especially now that everyone's excited about it, having discussions
68
00:04:57,120 --> 00:04:59,000
like this.
69
00:04:59,000 --> 00:05:05,920
And I've come to settle on the greatest short-term opportunity of AI is not necessarily intrinsic
70
00:05:05,920 --> 00:05:08,320
to the technology.
71
00:05:08,320 --> 00:05:19,160
It's that it gives us a chance or a imperative to look at who we are as physicians, as clinicians,
72
00:05:19,160 --> 00:05:23,840
and what we do and how we could do it better, whether or not doing it better actually involves
73
00:05:23,840 --> 00:05:26,520
AI or any other technology.
74
00:05:26,520 --> 00:05:35,520
So really what excites me most about AI today is the chance to reflect, the chance to consider
75
00:05:35,520 --> 00:05:43,600
how healthcare should be where physicians and other clinicians and healthcare workers
76
00:05:43,600 --> 00:05:48,720
fit into the equation and how we can map out a better way forward.
77
00:05:48,720 --> 00:05:53,760
Of course, I believe that AI is part of the solution in some instances, but I think even
78
00:05:53,760 --> 00:06:02,800
more fundamentally, it's a chance for us to really do some deep contemplation.
79
00:06:02,800 --> 00:06:07,680
The FDA for quite some time has been okay with AI being a black box when it comes to
80
00:06:07,680 --> 00:06:13,320
radiology specifically, but now it's moving more towards clinical decision-making in terms
81
00:06:13,320 --> 00:06:16,240
of clinical decision-making support tools.
82
00:06:16,240 --> 00:06:21,280
Just curious to hear your thoughts on having a black box in that instance, and then where
83
00:06:21,280 --> 00:06:26,120
do you think liability should lie?
84
00:06:26,120 --> 00:06:30,720
And our state medical boards and our colleges in Canada have been very clear that liability
85
00:06:30,720 --> 00:06:34,200
lies with the physician using the AI.
86
00:06:34,200 --> 00:06:37,560
And I know you talked about this recently on LinkedIn as well, so I'd love to get your
87
00:06:37,560 --> 00:06:40,400
thoughts on that.
88
00:06:40,400 --> 00:06:45,360
Explainability is a key area to consider.
89
00:06:45,360 --> 00:06:49,400
We of course have to preface this by saying we don't understand how we come to many of
90
00:06:49,400 --> 00:06:55,360
the decisions we make when we're working completely independently.
91
00:06:55,360 --> 00:06:58,560
We don't know how many of the medications we prescribe work.
92
00:06:58,560 --> 00:07:04,240
We don't know how some of them we arrive at some of our diagnoses.
93
00:07:04,240 --> 00:07:10,720
So on the one hand, you can argue that the human mind is a bit of a black box as well.
94
00:07:10,720 --> 00:07:17,400
What's different with AI, of course, is computers are hard to hold accountable.
95
00:07:17,400 --> 00:07:25,520
If you made a diagnosis that was completely wrong and negligent, your board or your hospital
96
00:07:25,520 --> 00:07:29,960
or even the patient could have some recourse.
97
00:07:29,960 --> 00:07:33,280
And even aside from that, I think malpractice concerns are way overblown.
98
00:07:33,280 --> 00:07:39,680
I think we intrinsically feel we want to do what's right for people as physicians, and
99
00:07:39,680 --> 00:07:46,960
we feel that responsibility, that oath we take, do no harm first.
100
00:07:46,960 --> 00:07:53,120
So while we cannot explain many of the decisions we personally make and we can be a bit of
101
00:07:53,120 --> 00:07:59,320
a black box on the flip side, at least there's a level of accountability and hopefully alignment
102
00:07:59,320 --> 00:08:03,040
with the best interests of those we're taken care of.
103
00:08:03,040 --> 00:08:09,480
So when we move to AI, some of it, of course, is explainable.
104
00:08:09,480 --> 00:08:12,640
We conflate AI as being everything, right?
105
00:08:12,640 --> 00:08:17,320
But there is a lot of AI that is explainable that's very rules-based, but I think what
106
00:08:17,320 --> 00:08:21,520
you're alluding to are more of the machine learning models, and especially the generative
107
00:08:21,520 --> 00:08:28,520
AI, where detecting some patterns, creating some patterns, we don't know how it got there.
108
00:08:28,520 --> 00:08:34,040
And I think that is a key challenge for practicing medicine.
109
00:08:34,040 --> 00:08:37,120
Can we just accept the black box or not?
110
00:08:37,120 --> 00:08:42,520
What degree of confidence do we have in the output?
111
00:08:42,520 --> 00:08:48,200
And I do think there needs to be regulation because individual clinicians lack the expertise
112
00:08:48,200 --> 00:08:50,640
to evaluate or the sample size, right?
113
00:08:50,640 --> 00:08:56,480
Maybe if you're personally using it in your practice, working in a nursing home, maybe
114
00:08:56,480 --> 00:09:00,920
spit out three correct differential diagnoses in a row, but that's only three, right?
115
00:09:00,920 --> 00:09:03,200
You're bound to butt up against some limits.
116
00:09:03,200 --> 00:09:07,440
So I think explainability is a key challenge.
117
00:09:07,440 --> 00:09:08,800
I'm not sure what the answer is.
118
00:09:08,800 --> 00:09:14,560
I'm not a regulatory expert, but I think there needs to be some degree of regulation to ensure
119
00:09:14,560 --> 00:09:22,040
that the tools that we're using, we can trust, especially because we can't necessarily figure
120
00:09:22,040 --> 00:09:26,120
out how they arrive at their answers.
121
00:09:26,120 --> 00:09:32,880
Our human nature pulls us towards wanting a deterministic model of healthcare, specifically
122
00:09:32,880 --> 00:09:39,000
when we're faced with diagnoses which have a bigger impact on our health, like cancer,
123
00:09:39,000 --> 00:09:41,160
we want more surety.
124
00:09:41,160 --> 00:09:42,640
Medicine just doesn't work like that.
125
00:09:42,640 --> 00:09:47,880
It works on probabilities and AI, especially charity of AI, is in line with the probabilistic
126
00:09:47,880 --> 00:09:49,840
models of medicine.
127
00:09:49,840 --> 00:09:54,520
The big gap here is patients don't know this, and we don't share this data with patients
128
00:09:54,520 --> 00:10:01,720
because we think it will increase patient anxiety and in practice anecdotally it does.
129
00:10:01,720 --> 00:10:06,940
Do you think a better model of medicine is with the probabilities and the confidence
130
00:10:06,940 --> 00:10:10,240
intervals to an extent, or maybe just the probabilities because it's easier for patients
131
00:10:10,240 --> 00:10:12,520
to understand, are shared openly?
132
00:10:12,520 --> 00:10:16,680
For example, if you come in with a strep swab and we do a swab, if it's positive, there's
133
00:10:16,680 --> 00:10:19,320
95% chance you have strep one in 20.
134
00:10:19,320 --> 00:10:20,320
You don't.
135
00:10:20,320 --> 00:10:23,840
If we don't treat it, there's a one in 10,000 chance we'll progress to an abscess, one in
136
00:10:23,840 --> 00:10:25,760
a million, you'll die if we treat it.
137
00:10:25,760 --> 00:10:30,080
Three in a million chance you'll get rheumatic heart disease.
138
00:10:30,080 --> 00:10:38,000
Do you think this complete transparent way of medicine is just too much, or do you think
139
00:10:38,000 --> 00:10:42,320
this is where AI can really come in and spit out these probabilities?
140
00:10:42,320 --> 00:10:47,800
And really we can move towards patient-centered medicine where the eventual goal would be,
141
00:10:47,800 --> 00:10:51,440
you know, in 500 years the patient is their own doctor.
142
00:10:51,440 --> 00:10:54,480
It's hard to think about 500 years from now.
143
00:10:54,480 --> 00:11:01,160
I think that medicine is much less certain than people think it is.
144
00:11:01,160 --> 00:11:07,800
And we think, or at least the general public often thinks there's one right answer for
145
00:11:07,800 --> 00:11:10,360
each problem.
146
00:11:10,360 --> 00:11:14,640
And the strep throat example is a good one because strep throat actually, we kind of
147
00:11:14,640 --> 00:11:17,960
have pretty good information, right?
148
00:11:17,960 --> 00:11:19,120
Swab the patient.
149
00:11:19,120 --> 00:11:21,000
Is it positive or is it negative?
150
00:11:21,000 --> 00:11:24,960
If it's positive maybe you have to worry about colonization and young children, but for the
151
00:11:24,960 --> 00:11:27,680
most part you can say, okay, this is strep throat.
152
00:11:27,680 --> 00:11:33,000
You have these symptoms, you have a positive test, and then there's loads of evidence in
153
00:11:33,000 --> 00:11:36,640
terms of the best treatments for patients.
154
00:11:36,640 --> 00:11:42,360
And as you mentioned, the purpose of treatment, both to resolve symptoms and to prevent complications.
155
00:11:42,360 --> 00:11:46,720
But most things in medicine are not as straightforward as strep throat.
156
00:11:46,720 --> 00:11:54,360
You think of things, you know, you mentioned cancer, cancer is not one thing.
157
00:11:54,360 --> 00:12:02,600
You know, even people with the same type of cancer, the same stage of cancer, there may
158
00:12:02,600 --> 00:12:09,280
be different diseases, different biology, certainly different preferences, different
159
00:12:09,280 --> 00:12:12,200
goals of care.
160
00:12:12,200 --> 00:12:18,400
So it's hard to, on the one hand, I think we must admit that medicine is much less certain
161
00:12:18,400 --> 00:12:20,440
than people realize.
162
00:12:20,440 --> 00:12:27,200
On the flip side, I think it's hard to have those conversations because it assumes that
163
00:12:27,200 --> 00:12:30,840
A, the patient we're working with wants to hear that.
164
00:12:30,840 --> 00:12:39,000
B, they have the interest, time, and capacity to make sense of the information that we present.
165
00:12:39,000 --> 00:12:45,000
So I think it's interesting what you suggest that perhaps you may either interspersed and
166
00:12:45,000 --> 00:12:51,000
help people understand how to listen, how to take risks, and how to use that to help
167
00:12:51,000 --> 00:12:52,000
people take drugs.
168
00:12:52,000 --> 00:12:57,000
And that's something that I've long thought of as a problem in the US.
169
00:12:57,000 --> 00:13:00,000
So, you've also been to a financial planner.
170
00:13:00,000 --> 00:13:08,000
So, in the old days, you went to a financial planner, they picked stocks for you, right?
171
00:13:08,000 --> 00:13:13,000
Like, it wasn't really that scientific, they picked stocks, they told you how much you
172
00:13:13,000 --> 00:13:17,000
should save, maybe somehow diversify your portfolio.
173
00:13:17,000 --> 00:13:24,000
Now, if you go to many of the larger financial planners, they'll run probabilistic models.
174
00:13:24,000 --> 00:13:29,000
If you, Alec, they'll figure out what your risk tolerance is, what your goals are, they'll
175
00:13:29,000 --> 00:13:34,000
try to quantify that using questionnaires, and then they can run models that show if
176
00:13:34,000 --> 00:13:42,000
you went this level of aggressiveness, these are the potential outcomes in 10, 20, 30 years.
177
00:13:42,000 --> 00:13:46,000
If you went with this strategy, these are the potential outcomes.
178
00:13:46,000 --> 00:13:53,000
So I think that's a potential way that we can use AI to better explain decision or the
179
00:13:53,000 --> 00:13:56,000
implications of decisions and help guide people.
180
00:13:56,000 --> 00:14:01,000
But as you know, some people just, I trust you, Doc, do what you think.
181
00:14:01,000 --> 00:14:03,000
And other people would want that.
182
00:14:03,000 --> 00:14:06,000
So, yeah, I do think there's an opportunity there.
183
00:14:06,000 --> 00:14:13,000
And I think it'll take time to figure out how we harness these tools, of course, but I do think there's some possibility.
184
00:14:13,000 --> 00:14:16,000
Let's move on to technology.
185
00:14:16,000 --> 00:14:20,000
The way we interact with technology really hasn't changed in about 30 years.
186
00:14:20,000 --> 00:14:25,000
We still use a screen, we still use a mouse, we still use a keyboard.
187
00:14:25,000 --> 00:14:34,000
I think the future of EHR is more like in minority report because where our interaction with the patient is preserved.
188
00:14:34,000 --> 00:14:38,000
And we're not distracted by a mouse or a screen or a keyboard.
189
00:14:38,000 --> 00:14:40,000
It's something more natural.
190
00:14:40,000 --> 00:14:42,000
How do you think of the future of EHRs?
191
00:14:42,000 --> 00:14:45,000
How do you think of the UI UX in that case?
192
00:14:45,000 --> 00:14:53,000
And what are some, what's two things you would change about the EHR if you could?
193
00:14:53,000 --> 00:15:00,000
You know, all those questions need a follow up question in which is what is the time horizon, right?
194
00:15:00,000 --> 00:15:16,000
I think at least in the US health systems have invested billions, individual health systems have invested hundreds of millions, but collectively many billions of dollars in adopting enterprise-wide electronic health records.
195
00:15:16,000 --> 00:15:19,000
And physician practices have as well.
196
00:15:19,000 --> 00:15:31,000
And if you've been part of that change, you realize how difficult it is and how unlikely it is that they want to go through that exercise anytime soon.
197
00:15:31,000 --> 00:15:39,000
Especially because there's so much of how health systems and practices run now are really centered around that technology.
198
00:15:39,000 --> 00:15:48,000
So I think over the short term, we're very unlikely to see moving towards newer electronic health records.
199
00:15:48,000 --> 00:15:58,000
I think we're more, much more likely to see layering tools on top of the EHR that can help with some of the challenges you mentioned.
200
00:15:58,000 --> 00:16:14,000
For instance, I'm sure your listeners know this most probably the widest clinical use of AI in healthcare right now, at least newer AI, not kind of rules based expert systems, etc.
201
00:16:14,000 --> 00:16:19,000
Is AI scribing ambient intelligence for clinical documentation, right?
202
00:16:19,000 --> 00:16:35,000
So liberating physicians from the keyboard from the mouse from the screen so they can just speak to patients, have a conversation and voila, the note appears in their inbox for them to edit and finalize.
203
00:16:35,000 --> 00:16:47,000
So that's an example, I think of how we will layer technology on top of the EHR rather than kind of scrap the existing EHRs over the short run.
204
00:16:47,000 --> 00:16:52,000
Another example, I think a good one would be a summarization tools, right?
205
00:16:52,000 --> 00:17:14,000
All this not only are we overloaded with information in healthcare and in the EHRs, but the information scattered in many different locations. So trying to figure out what went on when is often very difficult, takes a lot of time, sometimes, sometimes riddled with errors and limitations.
206
00:17:14,000 --> 00:17:29,000
So again, so layering on a summarization tool and AI tool that can summarize the existing EHR data on a specific patient or on populations of patients can help kind of overcome some of those challenges.
207
00:17:29,000 --> 00:17:38,000
So I think over the short term, that's what we're more likely to see. We're more likely to see some incremental benefits by layering AI on top.
208
00:17:38,000 --> 00:17:43,000
I think the longer term question is interesting and obviously purely speculation.
209
00:17:43,000 --> 00:18:07,000
But I think we need to break free of our current note writing process, right? We've all for not only have, not only has the interface and basic technology not changed for decades, as you mentioned, but the note writing process, the process of, you know, subjective, objective assessment plan, that's been around for many decades.
210
00:18:07,000 --> 00:18:15,000
I think it was Dr. I think his name was Dr. Weed created that in the 70s and 80s.
211
00:18:15,000 --> 00:18:27,000
And I think that's a, I think it served its purpose, but you know, rather than using AI to write the same notes we write now just faster and with less ease.
212
00:18:27,000 --> 00:18:41,000
Maybe it's time to rethink how we capture information and do we need to write notes or can we just capture discrete packets and then later reassemble them kind of instantly based on what we need at that point in time.
213
00:18:41,000 --> 00:19:00,000
So over time, I think we'll move in that direction of like new paradigms for capturing information, new paradigms for displaying that information, probably moving beyond the mouse, the keyboard, the screen to more multimodal types of approaches.
214
00:19:00,000 --> 00:19:04,000
Voice would be a big one.
215
00:19:04,000 --> 00:19:12,000
And, but yeah, who knows who knows where we're going with all this. It's, it's exciting to consider.
216
00:19:12,000 --> 00:19:25,000
I cannot agree more with you, even in my almost 10 years of clinical practice, I've seen a shift from where notes used to be to outline our clinical thinking and for colleagues to follow us on.
217
00:19:25,000 --> 00:19:36,000
And they have shifted to be more of a tool of increasing billing and reducing liability where we see these notes, which are pages and pages long with just repeated information.
218
00:19:36,000 --> 00:19:54,000
Do you think our clinical encounters and notes need to sit somewhere that is removed from billing and liability to move forward into a recording process that actually captures our clinical decision making.
219
00:19:54,000 --> 00:20:12,000
Well, well, a few reflections. One is the note writing process. Yes, it serves a purpose for capturing information so your colleagues can read what you did and for billing purposes but one, in my opinion, this is something I often talk about.
220
00:20:12,000 --> 00:20:26,000
The note writing process is very valuable for the physician. It's an opportunity for us to synthesize information, clarify our thoughts and explicitly state what we think is going on.
221
00:20:26,000 --> 00:20:37,000
And I practiced, I was in medical school in the late 90s, early 2000s residency fellowship and the early 2000s. So I practiced for a little while on paper.
222
00:20:37,000 --> 00:20:51,000
And I wouldn't, we shouldn't pretend like those days were perfect because there were challenges, most of all getting information, right finding information had to walk to medical records or get faxes now everything is instantly available.
223
00:20:51,000 --> 00:21:02,000
But people were much physicians are much more selective with what they put in their note because they had to write it by hand. You couldn't just pull everything in with, you know, one keystroke.
224
00:21:02,000 --> 00:21:18,000
So the notes were harder to read but I think they're actually and harder to access, but I think they probably were higher in quality it's like that apocryphal Mark Twain quote that he would have written a shorter letter if he had more time I think people put more energy and effort into the notes.
225
00:21:18,000 --> 00:21:21,000
And it was really, you know, it reflected clear thinking.
226
00:21:21,000 --> 00:21:30,000
So I think that's one thing we should keep in mind with notes is the note should actually serves a purpose beyond billing.
227
00:21:30,000 --> 00:21:37,000
Clearly communicating with your colleagues with your future self but also it's, it's part of the act of thinking.
228
00:21:37,000 --> 00:21:45,000
So I would love for note writing to focus more on that and to be free more of some of the extraneous uses that we use it for.
229
00:21:45,000 --> 00:22:02,000
In the US, the, the E&M billing guidelines actually changed a few years ago where in the old days, there was a menu of requirements in order to bill different levels of service.
230
00:22:02,000 --> 00:22:18,000
Okay, so if you saw a patient in clinic, you had to, you know, state what their history their present illnesses, maybe their past medical history their family history, a 10 point review a systems document of certain level physical examination, certain
231
00:22:18,000 --> 00:22:26,000
level of medical decision making you added all that up and that determined what you were able to bill was at a level five console to level four, etc.
232
00:22:26,000 --> 00:22:44,000
Well, a few years ago, the government changed those regulations and started allowing physicians to bill based on time, based on the time they spent with the patient, including the time before and after the visit on the same day so pre charting,
233
00:22:44,000 --> 00:22:48,000
reviewing images, etc, etc.
234
00:22:48,000 --> 00:23:06,000
So, we actually now have the opportunity to break away from that old system, but what's ironic is that you physicians have and if you look at least epic last summer published data on this, our notes are actually longer, since those new regulations went into place.
235
00:23:06,000 --> 00:23:19,000
And then they were before. So I think it's easy for us to complain oh insurers oh the government they want us to do all this. When in fact often the challenges we need to change our behavior.
236
00:23:19,000 --> 00:23:35,000
Right. So, I would love to see note writing, go back more towards its purpose of an exercise and thinking and clear communication limited, limiting the amount that we're writing but, you know,
237
00:23:35,000 --> 00:23:39,000
explaining what we think is going on and then in the future.
238
00:23:39,000 --> 00:23:51,000
I think there's a great opportunity for software to summarize different packets of information to meet, you know, the need at that specific time.
239
00:23:51,000 --> 00:23:56,000
I don't know if I answered your question or went off on too long of a tangent but let me know.
240
00:23:56,000 --> 00:24:06,000
I'm going to share a phrase Charlie Munger show me the incentives I'll show you the behavior. If you had a magic wand, and you could change the billing you could change the regulations.
241
00:24:06,000 --> 00:24:21,000
What would you change so note writing moves towards the type of notes where you imagine which are, you know, we are not compromising brevity for clarity.
242
00:24:21,000 --> 00:24:40,000
I think it's already happened as I mentioned I mean there still are other, you know, risk scoring there still are other aspects I won't pretend like everything is fixed but time based billing that's that lets everyone off the hook I spent 12 minutes with the patient I spent four minutes before and after the visit.
243
00:24:40,000 --> 00:24:52,000
And that relates into this level of service and then just write what I want to write so. So that in part, I think it's some of it clearly you know when you mentioned incentives, the incentive problem.
244
00:24:52,000 --> 00:25:02,000
In my mind, the main challenges in healthcare are behavioral and behavior change and why do people be change behaviors as you mentioned Charlie Munger's quote.
245
00:25:02,000 --> 00:25:14,000
So, you know, there's a lot of difficulty there needs to be some sort of incentive to change behavior. And we're mostly stuck in a system where the incentives are misaligned between the different parties.
246
00:25:14,000 --> 00:25:31,000
It's cliche of course but we're still mostly paid based on the amount the volume of things we do more than the value of the services we provide. Of course it's hard to actually figure out value that's one thing that one key challenge to that movement.
247
00:25:31,000 --> 00:25:43,000
I think, I think trusting or finding other ways to assess the amount of work done besides the length of the note is a major positive.
248
00:25:43,000 --> 00:25:54,000
But physicians still need to find ways to change their behavior get out of their old way of doing things in order for it to make a difference.
249
00:25:54,000 --> 00:26:02,000
So, the thoughts in the pay wider model of healthcare with the pair also delivers the services, like the case or permanent model.
250
00:26:02,000 --> 00:26:09,000
Do you think that's a better model versus where the payer and service provider are separate.
251
00:26:09,000 --> 00:26:13,000
I think it depends is the answer.
252
00:26:13,000 --> 00:26:21,000
I think the Kaiser model is is wonderful. I think that the incentives are aligned.
253
00:26:21,000 --> 00:26:27,000
The physicians who work there generally are pretty happy now they're self selected group.
254
00:26:27,000 --> 00:26:32,000
I think the patients in general are happy not always.
255
00:26:32,000 --> 00:26:41,000
I think it's a nicely aligned system that frees physicians from having to do things like document extraneous information.
256
00:26:41,000 --> 00:26:47,000
Worry about how am I providing care I don't want to provide care over a patient portal because I'm not getting paid to do that.
257
00:26:47,000 --> 00:26:52,000
So I think there are many benefits to that sort of model.
258
00:26:52,000 --> 00:26:57,000
Now the flip side is Kaiser has had struggle expanding to other areas.
259
00:26:57,000 --> 00:27:12,000
They're, you know, they're largely in California, they're little pockets and Hawaii, Colorado, Colorado, Pacific Northwest, they're embarking on this new program as you probably are aware to expand to the East Coast through partnerships with guys in Go.
260
00:27:12,000 --> 00:27:17,000
And I think that's been announced when yesterday Moses Cohen health here in North Carolina.
261
00:27:17,000 --> 00:27:20,000
So we'll see if it if it can scale and spread.
262
00:27:20,000 --> 00:27:31,000
But there's something unique about Kaiser and that where it developed when it developed that's allowed it to stick that hasn't been replicated at scale and other places.
263
00:27:31,000 --> 00:27:38,000
But I do think that's a nice model I think I personally I'm a salary physician I work for a university.
264
00:27:38,000 --> 00:27:46,000
I tell my patients all the time I want to do it right for you I'm not getting paid an extra dollar whether I'm whether you do this test or not.
265
00:27:46,000 --> 00:27:57,000
I have zero incentive to do more than someone needs I just want to do the right thing and for me personally it's so it's a.
266
00:27:57,000 --> 00:27:59,000
It's a wonderful way to practice.
267
00:27:59,000 --> 00:28:13,000
And but I realize it's a it's a privilege to be able to practice in this type of environment so yeah I think ideally there is aligned incentives where there's alignment between those paying for the care and those providing the care.
268
00:28:13,000 --> 00:28:18,000
I don't know if it needs to be as integrated as a Kaiser.
269
00:28:18,000 --> 00:28:24,000
Or the VA is another example here in the US that's even goes farther than Kaiser.
270
00:28:24,000 --> 00:28:28,000
But yeah, I do think aligning incentives are critical.
271
00:28:28,000 --> 00:28:43,000
That's not to say that there are there are plenty of amazing practices nationwide that are followed different models so it's not to say it's the only option but I do think there's something very attractive to it, at least personally.
272
00:28:43,000 --> 00:28:48,000
If a startup was working on an AI scribe or a summarization tool.
273
00:28:48,000 --> 00:28:52,000
What would your diligence process be for onboarding it.
274
00:28:52,000 --> 00:28:57,000
What would make you pick one startup versus the other.
275
00:28:57,000 --> 00:29:03,000
I think the, the scribe tool and that's that's a busy market right now.
276
00:29:03,000 --> 00:29:08,000
I think with those tools they're becoming largely commoditized.
277
00:29:08,000 --> 00:29:22,000
And it's mostly a competition based on cost that's not to say that there are no features of the products that could differentiate but I think there's it's such a crowded marketplace that
278
00:29:22,000 --> 00:29:27,000
at the fair minimum they'll need to be able to integrate with the large HR's.
279
00:29:27,000 --> 00:29:34,000
They'll need to have a decent interface, you know, it must be usable.
280
00:29:34,000 --> 00:29:42,000
They'll need to have decent performance on, you know, assessments in terms of the fidelity of the note that's being generated.
281
00:29:42,000 --> 00:29:51,000
But I think ultimately it's going to come down to price largely until those scribe companies can move into other areas, you know, right now they're point solutions they're doing one thing.
282
00:29:51,000 --> 00:29:55,000
And there are a lot of companies that can do that one thing fairly well.
283
00:29:55,000 --> 00:30:04,000
So I think in that case it becomes largely a largely a cost question.
284
00:30:04,000 --> 00:30:15,000
In terms of summarization there are some companies in that area as well they're not as many, not as many good ones that I'm familiar with and I think I have a pretty good understanding of what's happening.
285
00:30:15,000 --> 00:30:27,000
I think, again, similarly, I think it's important to be able to do more than just one thing summarization. I look at summarization is almost more of a core competency than a.
286
00:30:27,000 --> 00:30:43,000
I'm not sure I'm not necessarily sure it's a product it's, it's a it's a it's a capability that I think links to so many different things that we do including note writing, including billing, including risk scoring, including discharge summary.
287
00:30:43,000 --> 00:30:49,000
I mean, so I look at summarization almost a little bit more as a more fundamental technology.
288
00:30:49,000 --> 00:30:57,000
And to me the question is what are the products that emerge from that. And what's the business case around around that.
289
00:30:57,000 --> 00:31:05,000
If you can do a great chart summary well. Okay, well, who's who's going to pay for that. And why, and how do you prove the value of that.
290
00:31:05,000 --> 00:31:17,000
So those are some of my thoughts I'm very, I'm very excited about summarization I think it's, it's necessary based on some of the things we were discussing about EHR with information overload and scatter.
291
00:31:17,000 --> 00:31:28,000
But I think there's a little bit more flushing out to do and to determine what are the actual products here that we're using this technology for.
292
00:31:28,000 --> 00:31:38,000
If EHR had a summarization tool built in and one dent, would that move the needle for you to switch EHRs.
293
00:31:38,000 --> 00:31:44,000
Yeah, I mean, as I mentioned earlier, I think few large organizations are switching EHRs anytime soon.
294
00:31:44,000 --> 00:32:00,000
These EHRs are tremendous investment that's been put into them. And there are so many different functions outside of what a physician does that they're used for that I don't think a summarization tool is enough to get all the physicians showing up with pitchforks to their
295
00:32:00,000 --> 00:32:10,000
to their IT departments and their CFO and saying we must switch now. And even if they did I don't know I'm not sure that that those decision makers would listen.
296
00:32:10,000 --> 00:32:33,000
I do think it's a valuable, a valuable feature that we increasingly need in EHRs and I, I think many EHR companies are working to build those that those tools themselves into their EHRs and you know the flip side is other companies are working to build applications that can be layered and integrated with the
297
00:32:33,000 --> 00:32:52,000
EHR. So, I think we're moving towards a world where either the EHR providers give that or the EHR vendors provide that or, and or there are startups or, you know, growth stage mature companies that are providing that as a service.
298
00:32:52,000 --> 00:33:08,000
The main issue I have with Valley Base Care is values measured by outcomes as we know the outcomes are determined by SC, status and race and gender. They're not determined by pharmacological or even behavioral intervention at all times.
299
00:33:08,000 --> 00:33:25,000
Do you think Valley Base Care should move towards the process where values measured by the process and all the outcome then Adam Granz talks about this, you know incentivize outcomes you people will cheat the system to for to get those outcomes if you incentivize someone to lose
300
00:33:25,000 --> 00:33:32,000
weight or a strict BMI they will starve themselves, as opposed to incentivizing them to eat healthy and exercise.
301
00:33:32,000 --> 00:33:47,000
Do you think there's some way we can move towards measuring the process because it's easier to measure the outcomes. That's the lazy way to measure value. I feel I would argue it's easier to measure process than outcomes if you look at the, you know, there are at least the
302
00:33:47,000 --> 00:34:07,000
earlier versions of the quality movement. You know, it was all about process measures. Right. Patient has this condition. Are you vaccinating them. Are you checking these labs periodically are you doing screening them for osteoporosis simple yes note checks and I think we're still largely stuck with those types of process
303
00:34:07,000 --> 00:34:27,000
measures in medicine outcomes are harder to assess. They don't write it takes time to see an outcome some outcomes maybe are clearly visible within 30 days but many are, you know, months to more likely years away.
304
00:34:27,000 --> 00:34:42,000
And that assumes someone staying in the same health system you have longitudinal records, they're staying with the same health insurer. So I think process measures at least in my experience here in the US tend to be relied on.
305
00:34:42,000 --> 00:34:58,000
And then the question is, what does that mean, right what's the, what's the, I don't say, what's the value of all these process measures. Hard to say, hard to say. I used to be a used to say I used to be a zealot equality measurement zealot.
306
00:34:58,000 --> 00:35:12,000
I participated in a lot of measure development rounds with my professional society and I was just really into it and then one day it dawned on me that it's really, really hard to measure quality.
307
00:35:12,000 --> 00:35:15,000
You know, it's one of those. I know it when I see it.
308
00:35:15,000 --> 00:35:37,000
It's just, it's hard to quantify this not saying we shouldn't try but it's just, it's really hard so we wind up moving towards select conditions that we can quantify value for quality for and then we select measures usually process measures and it makes me wonder how much or how good of a job are we actually doing.
309
00:35:37,000 --> 00:35:41,000
So, there's some thoughts.
310
00:35:41,000 --> 00:35:51,000
In your own department with the physicians you lead. How do you measure their quality how much do you rely on intuition and how much do you rely on structure.
311
00:35:51,000 --> 00:35:55,000
Largely intuition, largely professionalism.
312
00:35:55,000 --> 00:36:00,000
Peer review.
313
00:36:00,000 --> 00:36:04,000
Simple process measures like board certification.
314
00:36:04,000 --> 00:36:14,000
So it's more process measure based and intuition there are some kind of more outcome so in my field in gastroenterology we have one.
315
00:36:14,000 --> 00:36:30,000
In my opinion we have one excellent quality measure it's add an add anoma detection rate and add anoma is a precancerous polyp the purpose of doing a screening colonoscopy is to find these precancerous polyps to remove them before they could become anything harmful.
316
00:36:30,000 --> 00:36:36,000
And to risk stratify people that if they're likely to grow these things we're going to see them more often.
317
00:36:36,000 --> 00:36:45,000
And so, there's a measure called an add anoma detection rate which is of all the screening colonoscopies I do.
318
00:36:45,000 --> 00:36:51,000
What percentage of the time am I finding one of these precancerous polyps.
319
00:36:51,000 --> 00:36:58,000
So, that's something we pay a lot of attention to and in my practice.
320
00:36:58,000 --> 00:37:07,000
Because we believe it's valid we believe it's it's something that we can affect we we can show the variation across you know our 35 plus gastroenterologists.
321
00:37:07,000 --> 00:37:14,000
So, but that's they're few and far between, you know they're few and far between.
322
00:37:14,000 --> 00:37:17,000
What are your thoughts on capsule endoscopy.
323
00:37:17,000 --> 00:37:20,000
It's been around for quite some time now.
324
00:37:20,000 --> 00:37:28,000
But now we're seeing somewhat of a resurgence but startups getting into space as well.
325
00:37:28,000 --> 00:37:49,000
I'm sorry my dog's barking in the background. Capsul endoscopy is a useful tool if you use it for the right purpose. You know usually the small bowel despite its name is quite long and a small proportion of patients have GI issues that are not related to their upper GI tract or to their colon.
326
00:37:49,000 --> 00:38:10,000
It's possible lesion is in the small bowel so capsule endoscopy is a wonderful way of evaluating the small bowel, but it's not necessarily new and I'm not familiar with some of these innovations that you're describing.
327
00:38:10,000 --> 00:38:15,000
Why do you think physicians are burnt out.
328
00:38:15,000 --> 00:38:29,000
Yeah, that's a great question. And there's not just one answer right I think one problem or one challenge is that people think all physicians are the same.
329
00:38:29,000 --> 00:38:44,000
You know we all act the same we're so you know we're so different in and not just how we practice but who we are as people what excites us and what wears us down what energizes us what's most important.
330
00:38:44,000 --> 00:38:50,000
So it's hard to come up with blanket statements in terms of this is the specific cause.
331
00:38:50,000 --> 00:38:58,000
I think there are some common themes if you speak to enough physicians who are experiencing burnout that do appear.
332
00:38:58,000 --> 00:39:16,000
One would be just the amount of work, I think clinical medicine is increasingly difficult our patients are suffering from more complex illnesses they more complex lives and social isolation coexisting mood disorders.
333
00:39:16,000 --> 00:39:39,000
Just various challenges and we're helping people stay healthier longer and the consequence of that success is that people are presenting to us with later stage illnesses or conditions that previously may have been treated surgically are now being treated medically and so we have patients who are sicker so I think.
334
00:39:39,000 --> 00:40:02,000
I would say one core reason is that the work is harder largely because people have more complex needs related to that I think we've done a poor job of adapting our systems of care and our care teams to support that additional incremental work that physicians must do.
335
00:40:02,000 --> 00:40:10,000
So I think that's a key factor is this kind of more demands without more support.
336
00:40:10,000 --> 00:40:17,000
So I think that would be my leading reason I think there are plenty of others.
337
00:40:17,000 --> 00:40:26,000
The administrative headaches as everyone's familiar with moral injury we talk a lot about.
338
00:40:26,000 --> 00:40:43,000
I think that there's a feeling that perhaps some of our patients don't value us and what we do. And not only that some are at times openly hostile even confrontational, even violent.
339
00:40:43,000 --> 00:40:58,000
There's a whole slew of different things but I would probably most of all it's the it's the mismatch between the work demands and the support given to do the work.
340
00:40:58,000 --> 00:41:08,000
And what do you think so from what I'm hearing is there's essentially too much to know and too much work to do and then we're not valid enough on top of that.
341
00:41:08,000 --> 00:41:14,000
We're supported enough right too much to do for one physician alone.
342
00:41:14,000 --> 00:41:25,000
Yet that physician is maybe working in an old model where it's just them or just them and a nurse who rooms a patient in there, left to do everything else.
343
00:41:25,000 --> 00:41:30,000
What do you think the solution is here.
344
00:41:30,000 --> 00:41:45,000
I don't know what the solution is. I think that we're very fortunate and privileged to do what we do and how do we return that sense of joy and meaning and purpose to the work.
345
00:41:45,000 --> 00:41:56,000
I think that's ultimately the challenge is there will always be head every job has headaches and frustrations.
346
00:41:56,000 --> 00:42:13,000
Every job or profession has the same meaning and the same stimulation and the same collegiality that careers in medicine do so I think we.
347
00:42:13,000 --> 00:42:27,000
We should work to mitigate some of the some of the daily headaches.
348
00:42:27,000 --> 00:42:41,000
Etc. Etc. But I think more than that I think emphasizing the benefits of medicine emphasizing the joyful aspects of medicine connecting physicians with purpose.
349
00:42:41,000 --> 00:42:47,000
I think that's where I would go now how do you do that.
350
00:42:47,000 --> 00:42:49,000
That's that's a bigger question.
351
00:42:49,000 --> 00:43:00,000
I think one thing and this relates to burnout as well I think one challenge, at least that I've seen over my career is that physicians feel more disconnected from each other and from the system.
352
00:43:00,000 --> 00:43:13,000
We don't see each other as much right where are we're working in larger practices larger systems. So we're more dispersed geographically where our heads are down in our computers or phones.
353
00:43:13,000 --> 00:43:22,000
So we've less time to look up and actually speak to each other either on the phone or a zoom or even better in person right in the physicians lounge.
354
00:43:22,000 --> 00:43:31,000
So, I think community building is really important as a possible antidote to burnout.
355
00:43:31,000 --> 00:43:36,000
Right, we all want to be connected to aspects of life larger than ourselves.
356
00:43:36,000 --> 00:43:42,000
In some it's from the work itself. I think a lot of it is from the organizations we belong to from our colleagues.
357
00:43:42,000 --> 00:43:48,000
So those would be areas that I would prioritize.
358
00:43:48,000 --> 00:43:55,000
You could go back in time 10 years ago and give one piece of advice to your past self. What would you tell him.
359
00:43:55,000 --> 00:44:00,000
Oh, I'll.
360
00:44:00,000 --> 00:44:07,000
I think just self compassion and kindness.
361
00:44:07,000 --> 00:44:18,000
The path as you know anyone who I've knows it's not as as linear and straightforward as maybe we imagine.
362
00:44:18,000 --> 00:44:34,000
So I think giving yourself or I would advise my younger self just to practice self compassion and kindness along the way.
363
00:44:34,000 --> 00:44:37,000
I'm better at that as I've gotten older.
364
00:44:37,000 --> 00:44:39,000
That's good to hear Spencer.
365
00:44:39,000 --> 00:44:47,000
Where did yourself 10 years ago imagine you now and where do you imagine yourself 10 years from now.
366
00:44:47,000 --> 00:44:51,000
I don't know where I imagine myself 10 years ago.
367
00:44:51,000 --> 00:45:07,000
About 10 years ago I started to take on leadership opportunities. I was still pretty involved in research and really fascinated with kind of the science of medicine.
368
00:45:07,000 --> 00:45:17,000
I'm not so sure I plan things out I just kind of follow the interests and have had good mentorship and opportunities to do different things but I.
369
00:45:17,000 --> 00:45:30,000
I don't necessarily think I saw myself doing anything dramatically different. I thought I'd be a doctor and help to help help lead our practice and hopefully contribute to the broader conversation.
370
00:45:30,000 --> 00:45:34,000
And likewise it's, I don't know 10 years from now.
371
00:45:34,000 --> 00:45:39,000
I don't know if I have a specific destination I'm aiming for.
372
00:45:39,000 --> 00:45:45,000
I'm pretty content with what I'm doing of course help to hope to continue to grow.
373
00:45:45,000 --> 00:45:50,000
Meet you know for me at this point stage in my career I really like meeting people.
374
00:45:50,000 --> 00:45:53,000
I like collaborating.
375
00:45:53,000 --> 00:45:56,000
I like learning continually.
376
00:45:56,000 --> 00:45:59,000
I like serving the greater good.
377
00:45:59,000 --> 00:46:07,000
So hopefully doing all that just in some capacity probably a little different from now but hopefully not too different.
378
00:46:07,000 --> 00:46:21,000
So many physicians I talked to struggle with purpose we're putting this treadmill very early on, and we are said you pass this exam and this exam and this evaluation and you get into residency and you get into med school and maybe you get into fellowship passing board exams.
379
00:46:21,000 --> 00:46:27,000
Our purpose is almost defined by these hard metrics were given.
380
00:46:27,000 --> 00:46:37,000
This is a deep question Spencer. So what is how do you define purpose now what is your purpose in life.
381
00:46:37,000 --> 00:46:52,000
In my purpose in life is to be the best person I could be to positively influence others my family, friends, community those I work with and patients I serve.
382
00:46:52,000 --> 00:47:10,000
So that's the fundamental purpose is making positive contributions, not just on a large scale hopefully on a small scale, hopefully in on an everyday basis, making things a little bit better.
383
00:47:10,000 --> 00:47:21,000
I'm also, you know, growth is a is an interest of mine personal growth, learning, learning new things.
384
00:47:21,000 --> 00:47:23,000
Yeah, I think.
385
00:47:23,000 --> 00:47:33,000
Most of all I think meaningful positive contributions and I guess second would be growth.
386
00:47:33,000 --> 00:47:42,000
You know, experiencing new things learning new things. I think those would be core core areas that I emphasize.
387
00:47:42,000 --> 00:47:48,000
Last question. What's your advice to med students right now.
388
00:47:48,000 --> 00:48:05,000
Yeah, my advice to med students is just to do their best and to pace themselves, because it's a long road. Right, it takes a, it takes a long time which is in some ways a challenge but also in some ways in opportunity because you don't need to have it all figured
389
00:48:05,000 --> 00:48:08,000
out when you're in medical school.
390
00:48:08,000 --> 00:48:18,000
We train for so long that we mature as people as we're training and our ideas of who we are and what we want to do evolve.
391
00:48:18,000 --> 00:48:26,000
So I think surrounding yourself with good people, doing your best pacing yourself, being open minded.
392
00:48:26,000 --> 00:48:36,000
I think careers of medicine are wonderful. There's so many paths that physicians can take. And I think just being optimistic about the opportunities and, you know,
393
00:48:36,000 --> 00:48:46,000
making the most of the opportunities because most people don't have the opportunities that we're fortunate to have.
394
00:48:46,000 --> 00:49:01,000
Thank you Spencer. I hope we can all move towards a more compassion for ourselves. And I agree building communities to connect physicians is probably the most realistic solution to the burnout we're facing.
395
00:49:01,000 --> 00:49:08,000
Yeah.