Dec. 29, 2025

The Future of AI in Healthcare with Hugh Barragan, Chris Page, and Akash Soraya

In this eye-opening episode, I sit down with Hugh Barragan, a technologist and CTO based out of Boston, Dr. Chris Page, an anesthesiologist and pain medicine physician, and Dr. Akash Soraya, an emergency medicine physician and angel investor, to tackle one of the most pressing questions in medicine today: has technology made healthcare better or worse? We dive deep into the promise and peril of AI, examining whether electronic medical records have been a net positive despite their frustrations, and what the future holds when AI can diagnose faster and more accurately than any physician. The conversation gets real as we discuss whether AI will replace doctors, who takes accountability when algorithms make mistakes, and whether patients will even care if they are talking to a human or a machine.

We explore the uncomfortable truths about healthcare costs, the gamification of billing, and why technology has not moved the needle on life expectancy despite massive investments. Hugh, Chris, and Akash do not hold back as they debate the ethics of AI-powered care, the challenge of trust in an adversarial system, and the very real possibility that AI could displace millions of healthcare workers overnight. From malpractice insurance for algorithms to the human connection patients crave, this episode wrestles with the biggest questions facing medicine as we enter the age of artificial intelligence. If you care about the future of healthcare, this is a must-listen conversation that will challenge everything you thought you knew.

Key Highlights

🏥 Technology's Mixed Impact: While EMRs have made information more accessible and eliminated handwriting issues, they have also created overwhelming data overload and shifted optimization toward billing rather than patient care.

💰 The Cost Paradox: Healthcare costs have skyrocketed since the introduction of EMRs, yet life expectancy has remained flat at roughly 80 years for 15 years - suggesting technology has not fundamentally improved population health outcomes.

🤖 AI Will Replace Physicians: Dr. Akash boldly predicts that AI will eventually outperform emergency physicians at diagnosis and hopes it happens soon, while acknowledging the massive workforce displacement this would cause across 15-20% of the U.S. economy.

⚖️ The Accountability Problem: Hugh emphasizes that computers cannot be held accountable for decisions, creating a fundamental challenge as AI becomes capable of autonomous clinical decision-making - someone must remain responsible when things go wrong.

🩺 Trust and Human Connection: Dr. Chris argues that patients will continue wanting human care for a long time, as the rapport and comfort provided by a real physician cannot easily be replicated - though AI may eventually convince people otherwise.

📊 The Revenue Cycle Machine: Only 20% of healthcare costs are actual human labor - the rest is driven by increasingly sophisticated billing optimization, which AI will likely accelerate rather than reduce, making healthcare even more expensive.

🛡️ Malpractice Insurance for AI: The panel predicts that by 2028-2029, insurance companies will begin underwriting malpractice coverage for fully autonomous AI clinicians, though the actuarial curves and liability frameworks remain completely undefined.

🥊 The Coming AI War: Hugh predicts a shadow AI war in healthcare where patient AIs fight billing AIs, insurer AIs battle provider AIs, and costs spiral as a result.





[ 00:00:00,000 ]There's technology in what it enables you to do cheaper and more efficiently, and there's what it enables you to do differently. I think the downstream effects of what we've done with technology is we've gamified billing to the point where the costs have skyrocketed, with no real effect on morbidity and mortality. Really, the only way for us to get costs down at this point is for doctors to do fewer things. Where I think the bigger risk now is that if you were to implement AI to do everything across the healthcare chain, you would be unemploying a significant amount of the population. Computers should never make a management decision because they can't be held accountable for it. And we're getting into a new era where it's going to look very tempting to let the computer make decisions.

 

[ 00:00:46,400 ]I think most people would argue there has to be a human fundamentally accountable for the things that happen. In the year 2028, we will have an insurance company that will... Thanks so much for joining us, everyone. Hi, everyone. My name's Hugh Barragan. I'm a technologist and CTO based out of Boston. Chris? Hi, my name is Chris Page. I'm an anesthesiologist and pain medicine physician. I was in academics for a long time, now in private practice, and I also do some medical device work, tinker with my own devices, advise some startups, and do some investor advising. Oh, gosh. Hi everyone, my name is Akash Soraya. I'm an emergency medicine physician. I'm also an angel investor. I've worked with a few venture funds and as a large institutional investor, and I build healthcare communities. Yeah.

 

[ 00:01:41,460 ]And I think most of you listening probably know me, but I'm a family medicine doc, investor, founder, and what I'd like to be is a painter. I'll pause there. Let's get into the questions. I think a lot of physicians would say, or would be unclear, whether technology, and maybe specifically electronic medical records, has helped us or made our lives easier versus worse. What is your sense of how technology has played into how healthcare and medicine have evolved? Has it been a net positive or a net negative? I mean, for me, it's clearly been a net positive when you look at the total impact, which is not to say, I mean, the key word there is net.

 

[ 00:02:24,150 ]Obviously, there have been problems, especially, I think, the way that, in a large number of cases, physicians have ended up beholden to their EMRs rather than feeling they're the masters of them. But that is because of the promise of the technology. There are a lot of problems with the implementation of it and how you can end up in bad places. But I think overall, if you look at what we've been able to achieve with technology, it's been a large net positive. And I'd encourage everyone to think about this in two aspects. There's technology in what it enables you to do cheaper and more efficiently, and there's what it enables you to do differently. And I think in both cases, there's been large positives on the healthcare front. And Chris, do you have any thoughts here from

 

[ 00:03:10,620 ]a physician standpoint? And maybe let's go a little bit deeper into: is efficiency the key here when we talk about the human experience of healthcare? I mean, efficiency certainly makes a big difference, in terms of the more time that I can free up, the more time that I have to spend with my patients. I mean, ideally. That's what you would want it to be, rather than what sometimes happens, of just, I can then take care of more patients. But like I said, ideally, you'd want to be able to spend more time with the patients that are actually sitting in front of you. But production pressures are what they are. I don't think that we've ever realized that promise in the way that we could have.

 

[ 00:03:55,840 ]I mean, some of the things within EMRs are simple. I mean, there are clear things it makes better. Like, I don't have to sit there and try and figure out somebody else's handwriting. I don't have to be running around, you know, half the unit trying to find somebody's chart. So certainly, being able to access the information is a lot easier. I think that, on a day-to-day basis, one of the problems is that it also makes it much easier for people that want to get more information. So administration, billing, insurance: you know, with a couple of button clicks, they can give me five more fields that they want me to fill out, to collect information that may be things that they're interested in, but don't really help me take care of patients.

 

[ 00:04:35,090 ]And so, you know, you go from maybe not being able to find information to really just not being able to interpret it, because you're just overwhelmed by it. I mean, honestly, I've thought about this a lot: my ideal EMR would be one where there's a button that just says 'assume normal.' Just show me all the abnormal stuff, and strip out all this stuff that's been cut and pasted over a hundred times. Don't even show me the stuff that's relevant to me; just show me the stuff that, you know, isn't normal. And if there's something else I want to find, I can go and look it up. So again, kind of plus-minus, sort of like you were saying.
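To make Chris's 'assume normal' button concrete, here is a minimal sketch in Python. The Finding schema, the reference-range fields, and both helper functions are illustrative assumptions, not any real EMR's data model:

```python
# A toy sketch of an "assume normal" view: keep only abnormal results
# and drop note paragraphs duplicated by copy-paste. Hypothetical
# schema; real EMRs store results and reference ranges differently.

from dataclasses import dataclass

@dataclass
class Finding:
    name: str      # e.g. "potassium"
    value: float
    low: float     # lower bound of the normal reference range
    high: float    # upper bound of the normal reference range

def assume_normal(findings: list[Finding]) -> list[Finding]:
    """Return only the findings outside their reference range."""
    return [f for f in findings if not (f.low <= f.value <= f.high)]

def strip_copy_paste(notes: list[str]) -> list[str]:
    """Drop note paragraphs that repeat an earlier paragraph verbatim."""
    seen: set[str] = set()
    unique = []
    for para in notes:
        key = para.strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(para)
    return unique

labs = [
    Finding("potassium", 5.9, 3.5, 5.0),    # abnormal: shown
    Finding("sodium", 140.0, 135.0, 145.0), # normal: hidden
]
print([f.name for f in assume_normal(labs)])  # ['potassium']

notes = ["Stable overnight.", "Stable overnight.", "New murmur noted."]
print(strip_copy_paste(notes))  # ['Stable overnight.', 'New murmur noted.']
```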

 

[ 00:05:11,740 ]I'll just add that, in a sense, a lot of the problems with the use of technology come not from the technology itself, but from what it enables you to do. Healthcare in this country is constrained by the business model, and technology is going to enable you to do different things that previously weren't viable, which affects the underlying economics. And there's a strong incentive to do that. So it's unclear to me if you should be blaming the technology for that, or whether you're blaming the underlying system and the technology is enabling it. Both are very valid questions. I agree with you, but I think sometimes what I blame, or sometimes what I feel when I'm using some of these systems, is that they feel like they've been optimized for insurance companies, billers, and attorneys more than they have for me.

 

[ 00:05:57,750 ]Well, I mean, that's always a very curious one, in the sense that I've done a lot of health tech implementations, and finding out when you should fight the system and when you should just do what the thing wants is actually, I think, one of the key skills to be able to use technology effectively. Right, everyone: before you had an EMR, there were ways of doing things you'd built up over the years of practice. If you try and take that and just do it with an EMR, you're going to end up in trouble, because the EMR doesn't want you to do that. As software engineers, we've made simplifying decisions on how to make this work, and some of them are good, some of them are bad.

 

[ 00:06:31,350 ]A lot of the time it's working with people and working out, 'Yes, you should keep that part of your workflow, but really, if you just did A before B rather than B before A, it would be a lot easier for you.' None of which negates the point that, yes, it feels like you're getting a lot more data for insurance companies, because that's what technology lets them get, and that's what they're going to want. I mean, I'm just saying from our perspective. And I agree with you, but I do kind of wonder how early in the development process the clinicians are actually involved. And you see this a lot with a lot of technology:

 

[ 00:07:09,450 ]you see solutions that are very clever engineering solutions that don't work really well for clinicians, and vice versa: you see things that clinicians want that aren't really good ways of doing it from an engineering perspective. I suspect that probably the same thing happens here. I'm in the middle of implementing an EMR at the facility that I'm working at today, and I was meeting with the gentleman from the company, asking him to do things that help with my workflow, and the response that you often get, and this isn't the first time I've been through this, is, well, the computer can't do that. I'm like, it's a computer. You can make it do whatever. So yeah, I think there's fault to be laid on both sides, certainly. I think we'll probably get into it a little later.

 

[ 00:07:53,860 ]One of the great promises of AI, and we'll have to see whether or not the promise is going to be fulfilled, is that it reduces the cost of letting it do things like you would like them done. It becomes less about, I've coded it, it's going to do it this one way, and more about, oh, sure, I'll let you do that. Now, early days on that. We'll see if it works. I'll take it if you can do it. So I think it's an interesting question, though, that you guys are debating: what's the end result you're measuring? Are you measuring human life expectancy? Are you measuring chronic disease management? Are you measuring, you know, throughput of a patient through the hospital system?

 

[ 00:08:31,100 ]And I think without, you know, contextualizing your end goal, it's kind of a moot point. Like, if you look at all-cause mortality, it has not changed in 15 years, right? Since EMRs have come around and become mainstream, American life expectancy is still exactly the same, roughly 80 years. There has been no drastic jump where you're like, 'Oh, wow, we live 50 more years.' Yeah, I'll click through any button you want me to on Epic, because I doubled my life expectancy. Expectation, excuse me. But if you're like, 'Hey, I can now optimize billing a little bit better, and I am less, you know, burdened with whatever note you've written that I can't decipher,' yeah, in that kind of aspect of it, sure.

 

[ 00:09:15,890 ]It's made my life better, but has it really helped patients at scale? That's the piece I would question. Do you think that's more an indication that healthcare in general has such a small impact on our healthspan, lifespan, morbidity, and mortality, and so much of that is determined by everything that happens outside the hospital and outside the physician's office? Or do you think it's actually just not made a difference in the past 20 years? Do you think everything else has gotten so much worse, right, like with fast food and income inequality, that maybe technology within healthcare, or healthcare itself, has helped a little bit, but it's not enough to offset everything else that's going on?

 

[ 00:09:57,720 ]I think that was my larger point, where I think it's really kind of blasé to say that, oh, it's just the technology that's the core impact driver, and in this case, for the larger conversation, just AI, that's the larger impact driver of patient health, morbidity, mortality, long-term effects of medication, long-term effects of the healthcare system. I think it's too naive to say it's just this one aspect. And I think that should be the end goal, right? If we really think about it, if we're all within the health ecosphere, the goal should be: hey, I want to live to X number of years, the population should live to X number of years, with the least number of healthcare interactions. You know, that's not what the current system is designed for, slash, you know, optimized for.

 

[ 00:10:43,850 ]And so, you know, even from a larger AI perspective, I'm hesitant to say that AI will really change this dynamic for the long term. Because if you don't change anything else, it's a moot point that you've diagnosed this condition three weeks earlier than it would have been, when the rest of the system isn't designed to optimize for that early diagnosis. I mean, and the other variable is cost, right? Obviously, life expectancy has not gone up, but costs have gone up substantially, in spite of the technology. And I think we'd all feel a lot more comfortable if healthcare was the same cost it was in 2000 and we had the same outcomes. It's the fact we're paying multiple times more for the same outcomes, which I think people view as the problem.

 

[ 00:11:27,080 ]And do you think that's because we're essentially billing smarter? Because now it's like I have all these RCM drivers, revenue cycle management, right? It's like, oh, you blinked twice, so we can, you know, code for the fact that you have chronic blinking, or whatever nonsense RCM code there is. And so when you get a bill at the end of the day, it's like you have 50 diagnostics, and you're like, why is this a specific diagnostic code? Why is chronic hypertension a different code than hypertension, which is a different code than, oh, my blood pressure was a little bit high today when I came to the emergency department, so now it's hypertensive urgency? Like, this is a bunch of gaming the system so we can optimize billing.
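As an illustration of the coding granularity Akash describes, here is a toy sketch: the ICD-10 hypertension codes are real, but the reimbursement weights and the "optimizer" are made-up assumptions for illustration, not any vendor's product:

```python
# A toy illustration of how one presentation maps to several ICD-10
# codes, and how software that always surfaces the richest supportable
# code inflates the bill. Weights are invented for illustration only.

HYPERTENSION_CODES = {
    "I10":   ("Essential (primary) hypertension", 1.0),
    "I16.0": ("Hypertensive urgency",             1.4),
    "I16.9": ("Hypertensive crisis, unspecified", 1.6),
}

def most_billable(candidates: dict[str, tuple[str, float]]) -> str:
    # A hypothetical revenue-cycle optimizer: pick the highest-weighted code.
    return max(candidates, key=lambda code: candidates[code][1])

code = most_billable(HYPERTENSION_CODES)
desc, weight = HYPERTENSION_CODES[code]
print(f"{code}: {desc} (weight {weight})")  # I16.9, the priciest variant
```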

 

[ 00:12:08,120 ]At the end of the day, the patient pays for it, but it's the EMR and the technology that kind of got us there. Like, 30 years ago, and again, I wasn't practicing, but if I talked to a colleague from 30 years ago, they wouldn't say, 'Oh, we billed optimally.' They'd say, 'No, we cared for the patient. Hypertension is their issue. Let's bill for hypertension.' I think the downstream effects of what we've done with technology is we've gamified billing to the point where the costs have skyrocketed, with no real effect on morbidity and mortality. Yes, sorry, I was just wondering, because I don't know if Chris was practicing 30 years ago. Maybe you were. 2021, I was building chips. How has billing changed since you started?

 

[ 00:12:49,160 ]I mean, I'll be honest on that one. Like, I've never submitted a bill personally in my life. As an anesthesiologist, it's just something that you're sort of shielded from. I, you know, agree with what Akash is saying, but I also think that, maybe because I'm a little older than you guys, I'm a little bit more cynical at this point. We've probably gotten better at the financial optimization of healthcare. Drugs are more expensive. I mean, drugs that I used to pay 30 cents a vial for are $200 all of a sudden. I mean, I think in the same way, there's more ways for clinicians to game the system. And my favorite, you know, moniker for that is always the wallet biopsy, which I'm sure Akash has probably heard in the past.

 

[ 00:13:36,570 ]You know, we do it. The pharmaceutical companies do it. You know, everybody is sort of trying to get their piece. To simplify this, because it's so complicated, and this is just my opinion: there's two major problems with the American healthcare system. One is that nobody trusts anyone anymore. Like, my patients don't trust me. I don't trust my patients all the way. Nobody trusts the insurance companies. Nobody knows what the device companies, you know, and the pharmaceutical companies are doing. So that's one thing. And I think the other problem is there's too many people making money off of healthcare that don't provide any actual care. And a lot of the issues that we're talking about, or are going to talk about, I think can kind of go into one of those two buckets.

 

[ 00:14:20,400 ]But I mean, that might be an oversimplification, but it helps me kind of classify the things that I see that at least I think are wrong with the system. I mean, healthcare is one of the classic examples of Goodhart's law, which, if you're not familiar with it, is that when a measure becomes a target, it ceases to be a good measure. And we've spent so much time trying to measure outcomes, and then pay for outcomes, and that causes you to do weird things to get those outcomes. And unfortunately, it's very difficult to find a way truly around that. We want to be able to have a network that delivers good quality care, but we can't even agree on what good quality care means,

 

[ 00:14:57,070 ]what the definition is. And anything we do, given there's so much money on the line at this point, people are going to find ways to worm around it and find the extra dollars. Yeah, the version of that I like, it's similar but different, is the McNamara fallacy: we measure what's easy rather than what's important, you know? And I think we do that on a clinician level quite a lot. Like, it's easier to grab numbers than, you know... I think everything that happened with 'pain as the fifth vital sign,' at least in the U.S., was a really good example of that. Like, pain is a complex human experience.

 

[ 00:15:32,670 ]We wanted to make it easy to evaluate, and we wanted to, you know, make it easy to document and track over time. And we chose something that was an atrocious, you know, unidimensional measure of that. And, you know, we've all seen the consequences of that over the last couple of decades. I mean, one of the things I am hopeful for on the AI side, and I think one of the other reasons that healthcare in the US has had an increasing cost to it, is Baumol's cost disease, which is that everything else in the economy is getting more productive, but a lot of medical work is still the same thing it used to be, right? It's your patient time. It's the patient's contact time with you and your rapport with the patient. And that has just been going up in cost, because, you know, all the other sectors of the economy are getting more expensive and you have to match that.

 

[ 00:16:19,700 ]And I think a lot of that, with AI, really the only way for us to get costs down at this point is for doctors to do fewer things, because contact time is the really expensive bit, and unfortunately, it's what people really value too. There's going to have to be a trade-off between getting what you want from an AI cheaply and effectively and talking to a doctor, which is going to be expensive. And I don't know, it's going to be an adjustment for people, I suspect. Do you think there's a difference, though? Because there's two sides to things. There's the interactional part of it, if you want to call it that, and then there's the procedural part. And what we spend a lot of money on in this country is doing procedures.

 

[ 00:17:00,010 ]I mean, that's the specialists that make a lot of money, or the people that are just doing stuff to people rather than, you know, spending time with them, diagnosing, talking to them. So, you know, I think that you need to maybe look and think that the solutions to those things are a little bit different, because, especially on the procedural side, a lot of the cost ends up being the equipment, you know, the charges, and the things that are being sold to the physicians to do the procedures. So it's two different worlds, I think. I mean, in theory, if you believe in the total market economy for this, then people like Kaiser and Optum, doing a lot of their in-network stuff, are going to be taking an increasing part of that, because they take that all away and do the full capitation model.

 

[ 00:17:47,250 ]I mean, Optum is making large strides, but still, most people are under a fee-for-service arrangement. It'll be interesting to see how new technologies continue to push that. But obviously, a lot of people were not happy with Kaiser either. That last piece of, you know, is it the physician time or the nursing time or the human touch time that's the most expensive? If you look at the cost of labor in healthcare, I think you'd be surprised. The last time I checked, it was roughly 20 percent of all healthcare costs that is actual human labor. And this includes nursing, physicians, physical therapists, everyone in the system. So the rest, the other 80 percent, is this cost inflation that I believe happens because we are, in general, tracking more.

 

[ 00:18:34,680 ]And AI is the optimal tracker of anything. Like, for a revenue cycle company, this is the golden goose; this prints money forever. You can use a company like Open Evidence or Vera Health or any of these diagnostic clinical decision support companies, plug in your chief complaint, and you get 40-whatever, you know, diagnostic probabilities of what this could be, and then you also get these, like, 100,000 ICD-10 codes that you can easily bill for. And so, even if you factor out that 20 percent of clinician, nursing, whomever cost, I'd be surprised if AI truly decreases the cost of care. It just makes us better revenue cycle machines as a healthcare entity. Could you expand on that, Akash? You said so much money is spent on tracking different numbers in healthcare.

 

[ 00:19:27,630 ]I'm assuming some of those are things like admission rates. So why isn't AI taking those, quote-unquote, jobs? Is it because the people making the decisions on AI adoption are the ones doing those jobs? Or is it a different driver? And do you see room, then, for startups to focus on those jobs instead? Because I'm assuming there is some healthcare system that's innovative enough, maybe it's Summa Health with General Catalyst, or a VC-led one. Go deeper there as to why isn't AI taking those jobs? I think the way the system is designed now, healthcare is such a large portion of the U.S. economy, where I think the bigger risk now is that if you were to implement AI to do everything across the healthcare chain, you are unemploying a significant amount of the population.

 

[ 00:20:23,690 ]Right. And I think the public fallout of just, like, 15 to 20 percent of the US population now being unemployed overnight is staggering. How do you retool all these individuals to be, you know, valuable individuals in society? It's a large drain of cognitive brainpower as well. These people are extremely intelligent, across the spectrum from physicians, nurses, physical therapy, you know, name the profession in healthcare; your average individual is probably going to be smarter than most people in the world. And so what do these people do? And I think right now it's easy, you know, brownie points to say, hey, we'll diagnose quicker, we'll get you care quicker, just because it sounds sexy. It allows the population to get this fantastical view of this optimal healthcare system.

 

[ 00:21:14,630 ]Practically speaking, you are handicapping a huge portion of the country overnight if that becomes the reality. And to your point about that 80 percent, what's happening there: at the end of the day, those are the decision makers who end up making the decision of, do I want to make myself obsolete? Right? It's the administrator in the back office who controls spending for procedures, controls spending for departments, controls, you know, what we bill for, what gurneys we buy, all those decisions. If you're overnight going to say, 'Hey, the AI can do it better than you. Have a nice day. You know, here's your two weeks,' I feel like they'd say, 'No, thanks.' Why are we making copilots and not replacements? I think that's why.

 

[ 00:21:57,800 ]Copilots are easier to digest. Replacements, you're saying, 'I can't feed my children. I can't pay my mortgage. I can't live in this world.' And I think that's the same kind of conversation every industry is having, where I would prefer a copilot to a replacement because I have no other way of making an income. I mean, I'll just add, too, the issue is wider than that, in that if you look at, you know, how good AI can be at looking at things, I would expect the top doctors to be better, or at least have a different perspective on that, but it's quickly going to become not really cost-feasible to have a junior doctor look at it, right? Who's going to pay the cost for them to do that?

 

[ 00:22:40,630 ]And what I'm worried will happen is that there's going to be no apprentice ladder to get to the point where you're better than the AI. What's going to happen is no one will ever invest in a junior doctor. They will instead just farm it off to the AI, and that is going to stop the people who would be pushing these algorithms forward from getting there. I don't know how we solve that. Someone has to invest in the future in a way that doesn't make rational sense, which is really tough. Because, yeah, when, you know, the average user is only on an insurance plan for three years, the incentives to actually invest long-term and do these things become really tough. There's also a conversation of cost to compute, right?

 

[ 00:23:21,070 ]Right now, cost to compute: I mean, I don't have data on this, but if I were to ask a Gemini or a Google or an OpenAI, are you taking an L, or are you essentially paying for most of this cost of compute to get user traction? I would argue they probably are. But once you hit scale and frequency of AI utilization, I feel like every token will be more expensive than it currently is. Like, unless energy costs become exponentially cheaper, which I hope they do. But as of right now, I imagine it's very, very expensive to host most of these LLMs to even allow you to do any of these analytics. I mean, it depends on what quality you're going for.
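A back-of-envelope sketch of that compute-cost question follows; every number here is an assumption for illustration, not a real vendor price or a real hospital's volume:

```python
# Illustrative token-cost arithmetic: a frontier API model versus an
# amortized local LLM. All figures below are assumptions.

frontier_price_per_1k_tokens = 0.03   # assumed frontier-model API price, USD
local_price_per_1k_tokens = 0.002     # assumed amortized local-LLM cost, USD

tokens_per_encounter = 4_000          # assumed note + reasoning per visit
encounters_per_year = 50_000          # assumed volume for one hospital

def annual_cost(price_per_1k: float) -> float:
    return price_per_1k * (tokens_per_encounter / 1_000) * encounters_per_year

print(f"frontier: ${annual_cost(frontier_price_per_1k_tokens):,.0f}/yr")  # $6,000
print(f"local:    ${annual_cost(local_price_per_1k_tokens):,.0f}/yr")     # $400
```

The point of the arithmetic is Hugh's trade-off: the gap scales linearly with volume, so the quality tier you choose, not the raw feasibility, drives the bill.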

 

[ 00:24:00,710 ]I think that's going to be a question that becomes more front and center. It's already possible to run local LLMs, which are pretty good, very cheaply on your local machine, which is very different from getting GPT-5 running, which is expensive. And healthcare is not a domain where we are quite happy to accept a half-assed job and say, like, oh, we'll see what the cheap LLM says. And that's just going to push the costs up further in healthcare, probably. So then the conversation becomes: is the quick reality two branches of healthcare? One that is more expensive, AI driven, where you're paying top dollar for the greatest AI system to make sure your diagnostics are accurate. And the second, the human version, which is just as good,

 

[ 00:24:49,590 ]not as up to date, maybe, you know, a little bit less diagnostically fantastical, but still gets the job done, and, you know, a standard deviation cheaper. Is that a reality? Well, I think what most people would probably want, at least now, and maybe this will change in a generation, is a hybrid of those two things. Like, I don't want to walk into a doctor's office and just, like, talk to a computer. People want to be cared for; like, that's the care part of healthcare. And so, you know, to circle back to what you were saying about getting rid of a bunch of people's jobs overnight, I think, if for no other reason, we're a long way from that, because, you know, it would be an uproar just from the people that are being cared for.

 

[ 00:25:33,070 ]I mean, they're going to want that, and there's a lot of those jobs, you know, that can't be done by an AI now. I mean, my job certainly can't be done by an AI. You know, maybe sooner or later people end up, you know, getting the kinks out of the intubating robots and the ultrasound-doing robots, and it's going to be a while, certainly, before that happens. But I think it's going to be a long time before a robot rolls up to a patient in a holding area and says, 'I'm going to be doing your anesthesia,' and the patient's actually going to feel comfortable with that, versus me walking up to them. And a big part of what I do, yeah.

 

[ 00:26:08,780 ]I can give people drugs, knock them out, they don't know what's going on. But, you know, people think anesthesiologists don't develop rapport with patients, but actually, it's really challenging for me to do that. I have five minutes to develop rapport with somebody, so I have to go in, read them really quickly, figure out what's going on with them, and tailor what I'm going to say to them. And that's important, because that's the part that people remember, you know; they'll remember before and after. And if you want to look at it not just in terms of, like, straight-up outcomes, but in terms of, like, people's experience, you know, which they're going to value,

 

[ 00:26:43,850 ]that's going to be part of it for at least a long time, at least until, you know, we pass on to people that, you know, haven't had that experience. But for a long time, you've got a lot of people walking the earth that are going to want to be taken care of by humans to a certain extent. So I don't think it's quite as quick a transition as Akash was, I mean, I understand you were doing it for sort of dramatic effect, but it's going to at least take some time for that to happen. Rashad, what do you think? You know, I kind of like what you said, Akash, and I'll kind of read it out for our audience.

 

[ 00:27:19,520 ]'As much as I love my role as an emergency physician, the harsh reality is that I'm never going to be as good as an AI-based intelligence. I believe that AI will replace my role, and I hope it happens soon.' I think we are over-indexing on this concept of human connection or human experience, in terms of we think it's undefinable. We think that it cannot be broken down by AI into, you know, a billion data points. And this kind of goes back to what it means to be human. Like, AI can fool me that it's human even now, right? In my car, the Tesla's Grok AI is there, and, you know, sometimes I forget I'm talking to an AI. Right.

 

[ 00:28:07,500 ]So I think even in its current state, it has the human element. And maybe that's because it's trained on human data, and it's trained on human experience to an extent. I don't know if AI knows what it means to be human, but I think AI can pull off being human even now. And it's just a matter of putting it into physical form, right? To replace an anesthesiologist, you know, Chris, I'm assuming what you mean is the manual dexterity required, right? Just the technical parts of it. I mean, there still have to be things. I mean, Akash, I'd sort of ask you, to a certain extent, I don't know exactly what, but from the ER docs that I know, I have to assume that to a certain extent, you're a proceduralist, right?

 

[ 00:28:53,090 ]You do a lot of procedures. You know, you do a lot of things, a lot of interventions. I think, you know, Rashad, we had kind of gone back and forth about this a little bit a couple of days ago. This kind of, you know, centaur model is maybe at best, you know, the computer is helping advise me on what to do in a situation, and, you know, even telling me what procedure is best to do. But, you know, for better or worse, a lot of healthcare at this point is doing procedures on people. And that's certainly where, on the clinician side, a lot of the money is made. And so... I think that's just a matter of time, Chris.

 

[ 00:29:32,960 ]I think it's just as technology evolves. Do you think AI has passed that? Lots of things are a matter of time, but that one's further off, so I think we have a little bit more time to adapt to it if we start answering those questions now. You know, we're kind of in the midst of the AI, you know, the knowledge conflict going on. The other part of this is the regulatory question, too, in that, you know, there are plenty of other industries which are going to go through this transition where someone can just break the rules a little bit, or maybe there aren't even any rules and AI comes and does new things. Healthcare isn't that industry.

 

[ 00:30:08,760 ]There are going to be weird things where you're going to be on the right or the wrong side of the FDA device legislation. That means you can do certain things and you can't do other things. And that's going to cause where we use it to be not fully market-driven, in a strange way. The extra legislation is going to make a difference on where it's used. And I think, before we talk about regulation, maybe let's talk about who makes more mistakes, right? And I think AI scribes are a good example here, where we know AI scribes hallucinate. And this may be a surprise to you, Hugh, and the public: physician notes before AI scribes were not accurate. What? When I worked as a hospitalist, the amount of times I would see...

 

[ 00:30:56,580 ]So, one part of our exams is we look at extraocular muscles and pupil reflexes. The acronym is EOMI, PERRLA. I would see that in every note. I know for a fact that most of my colleagues did not do that, right? And for lack of a better word, there was information in physician notes that just was, quote-unquote, a lie. So I guess a question for Akash and Chris, and I don't think you use AI scribes as much: who do you think is more accurate in depicting what actually happened in the patient encounter? Is it current AI scribes, or previous notes by physicians? But both systems have the physician still doing the exam, right? So then it's just a conversation about how lazy versus

 

[ 00:31:46,580 ]how, you know, how much impetus do I have to correct this one thing? And I'll use your example of, you know, 'extraocular muscles intact.' It's one of those things where, and I think medical training has changed away from this, and I think billing has also changed away from this, a pertinent physical exam is more valuable than a comprehensive physical exam. And I think when you go through training, you're trained to do a comprehensive exam, because you want to go through the rote basics of, this is how it works, this is the job you will do. But, like, for example, now I do a relevant physical exam. So if you, you know, broke your arm, I'm not doing an ocular exam.

 

[ 00:32:29,670 ]For one, why take the liability of saying something that I didn't check, when it's irrelevant to your case? And then two, it wastes time. I have templates for body parts or injuries, where I'm like, oh, this makes sense; why am I doing this test? And so I think, you know, AI scribing in general has probably changed it for more accuracy, in terms of less likelihood of it being wrong. But I think the overall zeitgeist of healthcare has changed to: why are we doing comprehensive exams on everyone? It's a waste of time. Unless you're there for a wellness check, you know, an annual visit, sure. But if you're not there for that complaint, I mean, this is my job, where you're in the emergency department.

 

[ 00:33:12,000 ]I can't speak to every specialty. But if you're not there for that complaint, it seems kind of a waste of time. Yeah. I mean, I don't use a scribe, I think, in the way that you're describing, per se, but, you know, I have kind of my own version of it, which is the, you know, automated anesthesia record keeping in some of the EMR systems. And I mean, that was certainly a big change for us when we switched over to it, because it certainly makes the gross recording of, like, vital signs and the ventilator settings and a lot of the stuff that we were just kind of writing in by hand more accurate. But, you know, it doesn't contextualize it at all.

 

[ 00:33:54,140 ]It's just a bunch of numbers. And so if you see things that look very bad on it, you also kind of need to know why. Like, is it an artifact? Like, is it, you know, something that was very transient? You're only recording vital signs every minute, so you miss something in there. So you're getting a lot more information, but you still need, you know, that context of what's going on with the patient. You know, if you talk about a system that kind of feeds back in to fix those numbers, you start running into the old problem of treating patients based on numbers. And, like, sometimes that has real consequences. And, like, there may be things that

 

[ 00:34:36,139 ]I know from experience that I do or don't want to do, and unless you have a very carefully trained system, it's going to make a mistake, because it starts to over-treat something. So, yeah, I mean, that's my scribe. But I have no doubt somebody's done this, although I've never seen it: if you compare anesthesia records from, you know, the pre-electronic, on-paper days to anesthesia records that are electronic, there's a lot fewer examples of, like, train tracks, where it's just the same vital signs. Right, so yeah, I mean, those numbers are more accurate, but they still have to be contextualized. Accuracy is important, but I don't think we should over-index on accuracy, in the sense that there are always going to be errors.
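A minimal sketch of the contextualization problem Chris describes: a once-a-minute vitals feed where a lone spike is flagged for human review rather than auto-treated. The function, the 40-point jump threshold, and the sample values are illustrative assumptions, not clinical guidance:

```python
# Toy artifact detector for an automated anesthesia record: a reading
# that spikes away from BOTH neighbors is more likely sensor noise
# (e.g. a squeezed cuff) than a real event, so flag it, don't treat it.

def flag_artifacts(values: list[float], jump: float = 40.0) -> list[int]:
    """Return indices of readings that jump away from both neighbors."""
    suspects = []
    for i in range(1, len(values) - 1):
        if (abs(values[i] - values[i - 1]) > jump
                and abs(values[i] - values[i + 1]) > jump):
            suspects.append(i)
    return suspects

systolic = [118.0, 121.0, 220.0, 119.0, 117.0]  # one-minute samples
print(flag_artifacts(systolic))  # [2]: a human still confirms the context
```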

 

[ 00:35:16,470 ]And I think what we're really dealing with, in a sense, is the accountability problem. And, you know, there's the IBM quote from the 70s, something like, 'Computers should never make a management decision because they can't be held accountable for it.' And we're getting into a new era where it's going to look very tempting to let the computer make decisions. Fundamentally, I think most people would argue there has to be a human accountable for the things that happen. That's going to cause some weirdness. I mean, if you look at a lot of the self-driving car stats, it's very clear that self-driving cars are much safer on average, especially in the kind of places where they've been testing them, than humans.

 

[ 00:35:58,940 ]But it goes back to who's accountable for it and how we solve the accountability problem. When it comes down to the two of you using a scribe or something, ultimately it's going to have to be your name on it, vouching for the information. How do you feel about that? Well, it is now. I mean, it is now. It is now, yeah. I'm the one that's responsible for annotating it, correcting it if I think there's an error; but all that information is still captured somewhere, and I still have to go back and, you know, explain why I changed something, explain what I was thinking. And it gets very laborious to try and fight the computer to make sure that it, and the record,

 

[ 00:36:40,970 ]understands the subtleties and the nuance of what it is that I was trying to do. But I mean, I think what you're sort of getting at is the litigation part, which gets, you know, very sticky in terms of who's responsible. So, I mean litigation, but also, just taking an example from the software engineering world, what we're seeing is: you start using AI to code a little bit, and it makes things a bit easier, but fundamentally you're still doing the code. And then you find yourself three weeks later just saying, 'Make this feature for me,' and pushing it out without fully checking it. And it's got a problem with it. And so we're having, on the software side, to deal with the, well, is it your code?

 

[ 00:37:19,860 ]And you're like, well, I didn't really write it, but I guess I'm going to have to own it. And, you know, it's all well owning your own code when you write all of it; the temptation comes to push more and more of it onto an AI, or a subordinate, or whoever's dealing with that. But I guess there's a whole structure for this already in the medical world, with you providing supervision to people of lesser license levels. So maybe it just fits into that. It does, but it's also, again, it's going to be a generational change. I grew up in an era when we didn't have this stuff. I'm functioning in an era when I have access to it, but I don't totally rely on it, because I have my own expertise.

 

[ 00:37:59,970 ]And so I'm at least able to question it, you know, in a way that somebody that is natively growing up being trained on systems like this is not necessarily going to be able to. So, you know, we'd better hope that these systems get better a lot faster, so that they can beat that curve of us maybe not doing as good a job of training people as we used to. Or at least, if you had a company come to you and say, you know, we have a solution for, whatever, anesthesia dosing or something, and we'll dose it for you, and we will take accountability, would you use it today? They'd have to do something to give me a chance to learn to trust it enough.

 

[ 00:38:41,080 ]You know, I mean, it's not just the liability piece. For me, it would be kind of like training a resident. You know, when I get a new resident walking through the door, I don't trust them at all. They could be a very nice person; I don't care what their reputation is. It's my responsibility; ultimately, anything they're doing is ultimately my responsibility. And in the beginning, you know, we watch them very closely. As they age and they go through their training, you get a sense of how good they are. You get a sense of what they're capable of. And you start to give them a little bit more rope, and a little bit more rope.

 

[ 00:39:19,150 ]And eventually they, you know, kind of fly free. So, I mean, I think if you're offering me that kind of system, I'd almost have to do the same kind of thing. It would take a long time for me to, you know, compare my judgment about what the right thing to do is in the situation, independent of what that software is telling me to do, for me to really trust it to make decisions for me. It's not something I would be able to be comfortable with overnight. I'd have to use it for a long time, and I'd have to use it in a lot of different situations, because edge cases come up not infrequently, you know, and that's when people get hurt: because you fall off the algorithm, and suddenly, if you don't have experience, you don't know what to do.

 

[ 00:40:02,120 ]I can walk into a room with a situation I've never seen before; I've seen enough stuff that I can cobble together parts of situations I have seen before and probably figure out what's going on and what the right thing to do about it is. I don't know that an AI system now could do that. I mean, maybe. But, you know, based on some of the errors that I see it make with information that I ask it for, I'd be very skeptical that it's up to that point yet. But this is interesting, because it seems like the liability piece isn't really a barrier; it's a trust piece. And Akash, I would be curious to hear your thoughts. The residency period is anywhere from three to eight years, right?

 

[ 00:40:43,060 ]So would you want to see the AI perform at an attending level for three to eight years, depending on your specialty? Or is it a more shortened period? And if so, does that mean, like, the AI, you know, is screened with multiple physicians at the same time across multiple hospitals, so it kind of gets to see everything you would see in a whole residency training period? That's the important point. Is it four years because it probably takes that long to see enough patients, to take care of enough people, to see enough variation to build that experience? Or is it four years because hospitals and programs get free labor for four years? Yeah. Yeah. Wow. Like, for example, right now,

 

[ 00:41:32,610 ]if I'm solo coverage in any emergency department, I usually have maybe two or three PAs or NPs with me. And these are seasoned individuals; like, they've been doing this for 15, 20, sometimes 30 years. And they still run cases by me. Technically speaking, they have practiced longer than I've practiced, and they're still running by me every case that's even slightly complicated, or a disposition that's slightly difficult, or, I mean, definitely any procedure. I still get that case run by me, or I will go and see the patient, or I will at least look at the imaging, whatever the situation is. But I feel like, for me, it is a little bit about the liability. It is a little bit about, like, hey, if these individuals were by themselves, or if the AI is by itself, okay, it's on you.

 

[ 00:42:25,110 ]You can make the decision. If you are confident enough, go for it. And I think the second piece is, the perfect world is, you're at, you know, a perfect-world hospital where you have every specialty under the sun and every resource available. That's not the majority of the country. The majority of the country is, like, at a level three, maybe level four trauma center, where you have, like, three or four specialties maybe in-house or even consultable, and everything else is outpatient. Like, for example, how many ophthalmologists go to the hospital? Zero. Unless you're a training facility, no ophthalmologist goes to hospital systems, and an ENT will rarely go to a hospital unless they're doing procedures, unless they need OR time. They are not just doing consults for nothing, because they don't get paid to do it.

 

[ 00:43:13,260 ]And so now, is the AI system going to say, 'Hey, consult ENT'? And if the ENT is not available, is the AI going to say auto-admit, because it doesn't think that the risk threshold is low enough to discharge this patient, or whatever the situation is? And so I think we are optimizing AI programs, or AI software, for the massive academic institutions that have, you know, world-renowned specialists there to do every procedure that they need. The honest reality is that this is not what the country is, and that's not what most patients, you know, most ecosystems, are designed for. It's designed for: hey, can I piece this together, stabilize this patient enough so they can see someone else in the future?

 

[ 00:43:56,970 ]And I will take that liability, because of the rapport that I have built and the ability for the patient to call me, call the hospital, call any safety net available so they can get there. Yeah, sorry, Chris. I'll use where we practice as a good example: we kind of practice in the same, you know, geographic area, where there's one, maybe two massive trauma centers that you can really send a patient to. If you work at any of the hospitals around there, you've got to fly these people out. Is the AI going to fly every single person out for every injury? What's the plan here? Yeah, I mean, but that's, I think, going to be the problem with a lot of this: a lot of medicine is a gray area.

 

[ 00:44:35,140 ]And so who's going to make, yeah, okay, no, you're absolutely right, who's going to make the decision about where to draw that line, where to set that threshold? You know, I think you're exactly right. I mean, that's going to be the problem. Somebody's going to have to tell it what the cutoff is for making that decision. And who wants to take that responsibility? It's not going to be 100 percent. And so where are you comfortable? Is 98 percent good? That's going to be a tough one. I don't want to be the one in the position to make that decision. And on the mental health side, access to mental health is still extremely challenging. And it's very unclear to me, the ethics of providing

 

[ 00:45:18,980 ]what could, I mean, without even making a decision on it: say an AI chatbot therapist is getting on par with some human therapists, et cetera, et cetera, or let's just say it's worse, for the sake of argument. But it's available, and it's available now. Ethically, is it better for them to have access to something that kind of works when they can't get access to something that does work? And that also leaves aside the efficacy of therapy as a general concept, too. But, you know, leaving that aside, just from a liability point of view, it's a lot harder to be held liable for someone not being able to find access to a therapist than for providing a chatbot that doesn't do something quite right in one case out of, you know, ten thousand, a million.

 

[ 00:45:57,460 ]And so it's going to come back to liability on this, even if we're trying to do the right thing and increase access in the ways that we can. What are your thoughts on, or Akash, maybe, what are your thoughts on OpenAI's case? Well, Richard, I want to ask you: do you think that you can repurpose Medicare, Medicaid to be this large AI sovereign fund? Like, that's your, essentially, insurance fund. Instead of every individual after the age of 65 getting Medicare, or, excuse me, Medicaid or any of these systems, where you can go ahead and use it in any hospital system, clinic, whatever, this now becomes an insurance fund.

 

[ 00:46:40,030 ]And you say, 'Hey, if the AI misdiagnoses you, this is what you get paid out of.' And instead of the hospital system getting paid through these large caches of cash, it's more of a, in case things go wrong, here's the government backstop that you can cash into. And then the true way for hospital systems to make money is to say, hey, procedures are still, you know, private insurance, and there's a small Medicaid, Medicare, you know, allocation for procedure reimbursement. I think patients sue because they don't like their clinician. They don't sue for outcomes; you know, in the States it's a little bit different. But I've made mistakes where, if patients did sue me, they would win. And they haven't. Right.

 

[ 00:47:34,320 ]And I told them as such, that this was my mistake. And the reason I say that is, when we talk about insurance underwriting for AI liability, I think they will underwrite it as much higher than physician liability. So I think malpractice insurance for an AI is going to be considerably higher than malpractice insurance for a physician. And the reason being: patients are much more likely to sue, and I think, Chris, we talked about this before, a nameless entity, because they don't see the harm being done to that individual, right? And worse than even a nameless entity: a big company with really deep pockets. Yeah, or the Medicare insurance fund. Yeah, because who ends up taking the hit in a lawsuit? It's the hospital and the physician, because we have the biggest,

 

[ 00:48:28,790 ]you know, we have to have the largest, like, cap on our insurance. So, you know, take away the human element. You know, there is a certain amount of disconnect between when malpractice actually happens and when people actually get sued for it, and the third variable in that is how well people feel like they were cared for. You know, that carves out a lot of the times when people in theory could be sued because there was actual malpractice that happened. When it's suddenly, just like you said, some faceless entity, people are just going to haul off. Certainly, you know, the malpractice attorneys are just going to haul off at a big company with a lot of money, because they're,

 

[ 00:49:14,710 ]you know, unless there's some legal, regulatory, you know, stop put on it, like there is in some states, you know, some limit on liability. I mean, I expect, if you communicate mostly virtually, then the AI will be very good at sucking up to you, and you feel like there's a connection. Do you think that it will be ethically required to disclose whether it's a real person on the other side of the screen or an AI? And will that affect people's expectations of care and how they feel cared for? It will definitely, I think. Actually, I don't know. It's an interesting question. I don't think it will move the needle, right? Even if people know it's an AI. And we kind of see this already, right? When GPT updated, right?


People wanted the old GPT back, because they had a relationship with it. And we're already seeing people form relationships with AI solutions. I don't think it matters if they know whether it's an AI versus a human. From an ethical perspective, I think the underlying question there is: what are AI rights, right? Is AI discrimination okay? Or should AI have rights similar to humans in certain respects? You know, I think the obvious answer is no, because it's not a living entity. But AI is more human-like than animals, than dogs, I would argue, or than pets. Right, and that's because you can talk to an AI, it can talk back to you in English, and it's trained on our data.


[ 00:50:55,700 ]And I'm thinking less about the 'what are AI's rights' side and more about this: you're in your MyChart app and you send a message saying, 'Hey, I'm feeling a bit under the weather,' and you could get a response from the AI saying, 'Hello, my name is John, I'm an AI bot, and I'm here to help you through your problem.' Or it could just say, 'Hi, John here at the practice, how are you feeling?' And you could have an entire conversation. It could be unclear to you whether you're talking to a fully autonomous AI, a real person whose messages are drafted by AI, a message drafted by AI and signed off on by a human, or a real human. Do you think that will affect the standard of care?


Do you think there are obligations around telegraphing where the AI involvement in the process was? I think we should be upfront and transparent when we're using AI. I don't think it'll affect care. And I think the message from John, the AI bot, will not say, 'Hello, I am John the AI Bot.' It'll say the other message you described, right? It'll just say, 'Hey there, I'm John, I'm an AI bot, and I'll be helping you today,' or something. It'll send a human message. Do you think it makes a difference, not to get stuck too deep in this rabbit hole, whether or not John is a real person at the practice? Is there a difference between it saying, effectively, 'I'm a made-up person,' and it saying, 'Hello, it's Rishabh here'? Yeah, no, right.


[ 00:52:14,685 ]Because the key is the patient experience, right? The key is the patient, or the individual, receiving the care they need, both from a medical perspective and from an experience perspective. Say, sticking to mental health: say John, the AI counselor, actually improves your mental health score, your depression score, your PHQ-9, which is the metric we use to measure depression. Does it matter that it's AI? Right. Your depression is getting better. You're happier. So I think the end result is what matters, right? Now, there is a principle in quality improvement that says we should focus on the process, not the outcome, because if you incentivize only the outcome, people will shortcut the process. Right.
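Since the PHQ-9 is the outcome measure in that example, a quick reference: it is a nine-item questionnaire, each item scored 0 to 3, for a total of 0 to 27, mapped to standard published severity bands. A minimal scoring sketch in Python; the cutoffs are the standard ones, but the example answer sets are invented:

```python
def phq9_score(answers):
    """Score a PHQ-9 depression questionnaire.

    answers: nine integers, each 0-3 (0 = not at all ... 3 = nearly every day).
    Returns (total, severity band) using the standard published cutoffs.
    """
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 expects nine item scores, each 0-3")
    total = sum(answers)
    if total <= 4:
        band = "minimal"
    elif total <= 9:
        band = "mild"
    elif total <= 14:
        band = "moderate"
    elif total <= 19:
        band = "moderately severe"
    else:
        band = "severe"
    return total, band

# A patient improving over a course of counseling, AI or human:
print(phq9_score([2, 2, 1, 1, 2, 1, 1, 1, 1]))  # (12, 'moderate')
print(phq9_score([1, 1, 0, 1, 1, 0, 0, 0, 0]))  # (4, 'minimal')
```

The metric is blind to who, or what, drove the score down, which is exactly the point being argued.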


[ 00:53:05,509 ]But this is one of the scenarios where we cannot know the process, AI being a black box to an extent. Yeah, so I don't think it matters. And to kind of build on this point, and on Akash's idea of a Medicare insurance fund that says 'this is how much we pay out for this mistake': insurance should be tied to the mistake that's being made, not to the end result. This is probably a very unpopular point, because the end result can be drastically different based on a variety of factors that have nothing to do with the mistake. Let me ask you a question, Akash, and everyone, feel free to chime in.


As an investor, if a startup came to you and said, 'I will take on liability for what I'm doing,' or if Mira tomorrow started doing this, is that a competitive advantage, or are you taking on too much risk? I mean, just going back to what I said at the start here, understanding what you're taking the liability for is an important part. In the sense that everyone's going to use AI for back-office processes that you can't see. It's going to be invisible. No one's taking any liability for that. I don't think anyone's taking liability for, you know, interoperability code between healthcare systems that was written by an AI, right? That's just going to be what it is. It comes down to the parts where there's actually judgment involved.


The liability comes with the judgment, and so I don't think most of the people I'm talking to are ready for that level of working out the actual accountability chain. I'm not sure they're ready for that at all yet. I don't know. I'll pass it over to Akash. So I actually don't think it could be a startup that does it. I think the pockets have to be large enough that it has to be an incumbent. Like UnitedHealth saying, hey, we're going to build a startup in-house, on our own, but it has our cash stack, our premiums, and our backing to do it. I think there's a greater likelihood of success there.


And if I had the opportunity, I would rather invest in that than in a company out of Y Combinator, just because the amount of cash required to have on hand at all times is staggering. It would be a non-trivial number, especially for healthcare. And, I think we talked about this separately, there's this company called Lemonade that does this for the retail insurance market, saying, 'Hey, we'll do AI insurance for cars, homes, etc.' And it's funny how they have not touched healthcare. I think it's kind of what you said, where the final outcome could be someone dying from a mistake. Like, hey, we didn't give them a little bit of potassium and they needed it, or we gave them too much potassium, or whatever it may be.


[ 00:56:09,010 ]And so the downstream effect is catastrophic, but the actual mistake was arguably not that life-threatening in the moment. And then how do you separate those? I think it's really tough, versus, say, a camera not working on a car, where you can actually pinpoint it and say, this is the reason the car crashed: your right camera didn't work. And so I would be extremely, extremely bearish on a low-funded startup taking on that liability. We don't know what the liability is, right? The big question is, you can have a functional insurance and malpractice market for things where you understand the actuarial curve. We have no idea what the actuarial curve is for AI. Someone's going to have to make some real big mistakes before we can adequately price any of that in.
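To make the actuarial-curve point concrete, here is a minimal sketch of textbook expected-loss pricing in Python. Every number below is invented for illustration; the point is that for an autonomous AI clinician there is no loss history from which to estimate claim frequency or severity, so the uncertainty loading dominates:

```python
def annual_premium(claim_freq, avg_severity, loading):
    """Expected-loss pricing: premium = frequency x severity, plus a
    loading for expenses, profit, and uncertainty.

    claim_freq   -- expected claims per insured per year
    avg_severity -- expected payout per claim, in dollars
    loading      -- markup over the pure premium; grows with uncertainty
    """
    pure_premium = claim_freq * avg_severity
    return pure_premium * (1 + loading)

# Hypothetical physician book: decades of loss data, modest loading.
print(annual_premium(claim_freq=0.02, avg_severity=350_000, loading=0.35))  # 9450.0

# Hypothetical autonomous AI clinician: both inputs are guesses, so the
# loading balloons -- the "we have no idea what the actuarial curve is" case.
print(annual_premium(claim_freq=0.05, avg_severity=350_000, loading=1.50))  # 43750.0
```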


And Akash, you know this. I mean, errors in healthcare, it's rarely one error. It's a series of things that all lump together and eventually lead to a bad outcome, because there are multiple points of failure within the system. I don't know. If a company came to me with something like that, they'd have to go a long way to prove it. It sounds like a big fib, like you're saying, unless they have the cash and a system in place to actually show me that they're going to indemnify me. I think it's a high hurdle for somebody to convince me they're actually capable of doing that, especially as a startup, like you were saying.


I think what could be interesting, though, is a malpractice company that says, 'Hey, I can give you lower malpractice rates if you use this AI co-pilot in your clinical care,' almost like the self-driving model we talked about, where when you're using the co-pilot, your malpractice rate is this, and when you stop using the co-pilot, your malpractice rate increases. I think that would be an interesting company, and I think it would go toward proving this concept of 'can an AI drive by itself?' Because now you're essentially taking on the co-risk with me, for lack of a better word. Yeah, but then there's still also, you know, as the provider, you have to trust the system.
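A toy sketch of the usage-tiered pricing being described, in Python; the base rate, the 30% maximum discount, the linear scaling, and the `copilot_usage` fraction are all hypothetical:

```python
def malpractice_premium(base_rate, copilot_usage, max_discount=0.30):
    """Hypothetical co-pilot-tiered malpractice pricing.

    base_rate     -- annual premium with no AI co-pilot, in dollars
    copilot_usage -- fraction of encounters where the co-pilot was used (0.0-1.0)
    max_discount  -- discount at 100% usage (assumed 30% here)

    The discount scales linearly with documented co-pilot usage; stop using
    the co-pilot and the premium climbs back toward the base rate, mirroring
    the self-driving analogy.
    """
    usage = min(max(copilot_usage, 0.0), 1.0)
    return base_rate * (1 - max_discount * usage)

print(malpractice_premium(50_000, copilot_usage=1.0))  # 35000.0
print(malpractice_premium(50_000, copilot_usage=0.5))  # 42500.0
print(malpractice_premium(50_000, copilot_usage=0.0))  # 50000.0
```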


You know, in my practice I don't really think in terms of 'am I going to get sued?' and how much money that's going to cost me. I think in terms of wanting to do the best I can to not cause harm to a patient. So there's still the non-dollars-and-cents part of it: how much do I trust this system to actually make me better at my job? Yeah. But I think it's interesting, because a lot of physicians are hired by contracting groups, and the contracting group pays for your malpractice. So if I were an Envision Healthcare or a TeamHealth, one of these large groups that employs thousands of physicians at scale, I would look into a company that shared my malpractice, because they pay a large sum of money every single year in just malpractice premiums.


And so if you're saying, hey, you can halve my premium, and all I have to do is have my physicians use this co-pilot, mandatorily or with some level of enforcement, I think there could be an opportunity there. No, I'm not saying there's not an opportunity. I'm just saying you have to look at that as the insurance liability part of it. And then there's the other side of it, which is: can I actually trust this to help me take better care of patients? Or, in an extreme situation, can I really trust this thing to take care of my patients instead of me? Yeah. So, I mean, there are two parts to it.


I mean, there's the medico-legal part of it, and there's the sort of frontline care part of it. They're related but different. But if I were OpenEvidence or one of these companies, that would kind of be my next frontier, right? That is the true holy grail of healthcare. To prove that out, to your point, you need multi-year utilization metrics to say, 'oh, this is good enough.' OpenEvidence is almost there, maybe a year and a half, maybe two years in now. Give it another two, and even though it's trained on more data than any resident ever could be.


[ 01:00:32,590 ]It will still have hit this kind of amorphous four years of training that we talk about for residency. It's been on the market for four years. I think if they say, 'hey, we raised a series, whatever, E of a billion dollars to be this bankroll if we need it to be,' then we now do co-malpractice. I could see a future where that can happen. Yeah, I definitely can see a future where it can happen. Although, in the course of this discussion, I want to retract a little bit of what I previously said about the four years of training thing, because I think the key element of that is.


The number of cases that a trainee is exposed to, in terms of just the training part of it. People who graduate from residency still make plenty of mistakes, but the real equivalency would be between the patients and the situations the system is exposed to, rather than the gross amount of time. It may take time for the human to be comfortable that the system knows what it's talking about, but in terms of gathering the information and getting the exposure it needs to learn from those situations, obviously that can be done a lot faster than it can with a single person going through a residency program. Let's end on a prediction, guys.


I think in the year 2028, we will have an insurance company that will underwrite malpractice insurance for a fully autonomous AI clinician. When do you think that will happen, Hugh, Akash, and Chris? You said 2028. I said 2028. I'll buy 2028. What about you, Akash and Chris? I think, if I were the next presidential candidate, I would run with this as my healthcare strategy: I will decrease healthcare costs by having an AI entity be your insurance agency, to decrease your premiums slash how much you get paid out. I think 2028 is the next cycle, so I guess 2029 is when it would come to fruition. And what about you, Chris?


And I have a mea culpa and have to ask you to repeat that, because I had somebody at the door and I missed half of it. When will insurers underwrite the liability and provide malpractice insurance for a fully autonomous AI clinician? I think by 2028 it's feasible. The question is, what are the unintended consequences of doing that? I think somebody certainly will do it by then. The question is, are healthcare providers, and the institutions and the hospitals and the other people involved, enough out of the line of fire that they're going to find it acceptable and not get swept up in litigation anyway? I mean, what happens when somebody sues?


The attorney comes in and sues absolutely anyone whose name is anywhere near that chart. So the question will be: yes, you can implement it, but does it work out in terms of shielding the providers and the other entities involved from their own liability? Does it actually change anything, or does it just give another target, an additional target, to sort of extract compensation for the patient and fees for the attorneys? I will make a shorter-term prediction. Before this happens, I suspect everyone's agents are just going to be fighting each other. The patient's going to have an AI to make sure all the billing's fine, and it's going to send bills back.


Then the healthcare company, the insurer, will have to have an AI to try and bat it around. The provider will have to have an AI to gather the documentation. There's going to be an entire shadow AI war in healthcare. It will happen completely oblivious to the patient, but racking up costs left, right, and center. And maybe in the middle, Akash and Rishabh and I can still take care of some patients. No, Hugh, I strongly agree with that. Look at the number of companies that have come out in the last year on both ends of that spectrum: one side trying to bill more, one trying to bill less. It's 100% 'hey, our model is the best model that's come out recently, so we will be able to bill better,' and then vice versa.


I agree with that. I mean, just to add, one of the questions that I think is going to become more pointed: a lot of healthcare startups try to be everyone's friend. The era of being everyone's friend is over. You're going to have to pick a side: are you the insurance company's company or the patient's company? In a world where you can have an AI chase down every single line item, the aspirin on the bill, and say, 'That's outside of the range for this,' I don't think you can play to both of those parties at once. I think a fight is coming. You've got to choose your side. Yeah, because this goes back to what I was saying: the system is adversarial enough to begin with. This is the trust problem, that nobody really trusts anyone anymore. So I would agree with you on that. Let's end here, guys. I do have to jump off in a minute.
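To ground that line-item example, a toy sketch in Python of a patient-side billing agent that flags charges outside a reference range; every item, price, and range below is invented for illustration:

```python
# Hypothetical reference price ranges, in dollars per unit.
REASONABLE_RANGE = {
    "aspirin 325 mg tab": (0.05, 2.00),
    "CBC blood panel": (10.00, 60.00),
    "ER facility fee, level 3": (400.00, 1_500.00),
}

def audit_bill(line_items):
    """Return line items whose unit price falls outside the reference range.

    line_items: list of (description, unit_price) tuples.
    Items with no reference range are flagged as unreviewable.
    """
    flags = []
    for desc, price in line_items:
        rng = REASONABLE_RANGE.get(desc)
        if rng is None:
            flags.append((desc, price, "no reference range"))
        elif not (rng[0] <= price <= rng[1]):
            flags.append((desc, price, f"outside {rng[0]}-{rng[1]}"))
    return flags

bill = [("aspirin 325 mg tab", 28.00), ("CBC blood panel", 35.00)]
for flag in audit_bill(bill):
    print(flag)  # the $28 aspirin is flagged; the CBC is not
```

Multiply that audit across a patient-side, an insurer-side, and a provider-side agent, and you get the shadow war Hugh describes.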