I chose yes because I think a human will still be involved, but I don't think it will be a doctor in many cases. Other medical professionals can monitor the AI's recommendations and alert the supervising doctor if something seems off. I think this will work well for treatment plans that are well established and lower risk (check BMI, family hx, and maybe blood pressure, then send a weight-loss Rx).
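A rough sketch of what that kind of low-risk, rule-based flow with an escalation path might look like (purely illustrative; the thresholds, field names, and recommend_weight_loss_rx helper are hypothetical assumptions, not clinical guidance):

```python
# Illustrative only: an AI recommendation for a well-established, lower-risk treatment
# plan, with a flag that lets other medical professionals alert the supervising doctor.
# All thresholds and field names are hypothetical, not clinical guidance.
from dataclasses import dataclass


@dataclass
class Intake:
    bmi: float
    family_hx: bool      # relevant family history
    systolic_bp: int
    diastolic_bp: int


def recommend_weight_loss_rx(intake: Intake) -> dict:
    """Return the AI's recommendation plus an escalation flag for human review."""
    # Assumed cutoffs: flag elevated blood pressure for the supervising physician.
    escalate = intake.systolic_bp >= 140 or intake.diastolic_bp >= 90
    # Assumed eligibility rule combining BMI and family history.
    eligible = intake.bmi >= 30 or (intake.bmi >= 27 and intake.family_hx)
    return {
        "recommendation": "send_weight_loss_rx" if eligible else "no_rx",
        "escalate_to_supervising_doctor": escalate,
    }


# A nurse or pharmacist could review the output and only involve the doctor
# when the escalation flag is set.
print(recommend_weight_loss_rx(Intake(bmi=31.2, family_hx=True, systolic_bp=128, diastolic_bp=82)))
```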
For more complex treatments, I think doctors will use AI for recommendations, but they will still be the ones making the decisions in real time.
So I would say the Yeses have it; it's simply a question of when, not if. Medical schools are looking at how they can better prepare graduates to use AI and generative AI tools by presenting information for review and then interpreting the results. Times are changing: either adapt or be left behind. Healthcare organizations have a window of time to plan and prepare, but in my experience the majority of them will wait, and then it will become a crisis.
I agree with the comments related to liability, but should the expectation really be that the tech reaches 100% accuracy? Overall, medical providers misdiagnose diseases about 11% of the time, as reported in USA News, which translates to roughly 12 million cases. So if AI improves accuracy to 95%, that's 720,000 additional cases diagnosed correctly. Food for thought.
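Working the numbers, as a rough back-of-the-envelope sketch; the two readings below are one possible interpretation of how the 720,000 figure is derived, not anything stated in the comment itself:

```python
# Back-of-the-envelope check of the figures above. Inputs come from the comment
# (an 11% misdiagnosis rate and ~12 million misdiagnosed cases); the two readings
# of "improving to 95% accuracy" are assumptions for illustration.
misdiagnosis_rate = 0.11
misdiagnosed_cases = 12_000_000
target_error_rate = 0.05  # i.e., 95% accuracy

# Reading 1: apply the 6-point improvement to the 12M misdiagnosed cases.
# This reproduces the 720,000 figure cited above.
print(misdiagnosed_cases * (misdiagnosis_rate - target_error_rate))   # ~720,000

# Reading 2: scale to the implied total number of diagnoses instead.
total_diagnoses = misdiagnosed_cases / misdiagnosis_rate               # ~109 million
print(total_diagnoses * (misdiagnosis_rate - target_error_rate))       # ~6.5 million
```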
Already Happening... - 1 year ago
We're already seeing this happen with the growing number of direct-to-patient apps and "telehealth" services. The app developer will employ one physician on staff so that they can claim some amount of human review; in practice, the AI app does all of the work. I think we'll see more and more of that very soon, with apps pushing the boundaries of what is "Treatment". The problem ultimately comes down to HIPAA and what legally counts as Treatment (i.e., whether a human makes the final decision). I think we will very soon see this more widely debated. But until then, apps are already popping up to do this, without people even realizing or understanding what's happening.
Former EMT - 1 year ago
While I agree with Dn Greenberg, I would offer a few counterpoints. Clinical treatment has long been delegated under the auspices of a Medical Director. EMTs, in particular, are trained in a variety of diagnostic skills, treatments, and interventions, and are allowed to act autonomously within guardrails set by a physician Medical Director. Effectively, that physician is the one carrying the liability, and this model has worked successfully for decades. The same model is used in nursing and other fields. So one way to keep a "human in the loop" is for the AI to function under Medical Direction, making autonomous decisions that are audited after the fact.
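A minimal sketch of that "AI under Medical Direction" pattern, assuming physician-set guardrails and an after-the-fact audit log; the guardrail values, names, and ai_treat function are hypothetical:

```python
# Illustrative sketch only: the model acts autonomously inside physician-defined
# guardrails, and every decision is logged for after-the-fact audit by the
# Medical Director. Names and guardrail values are hypothetical assumptions.
from datetime import datetime, timezone

GUARDRAILS = {
    "allowed_interventions": {"oral_rehydration", "otc_antihistamine"},  # set by the Medical Director
    "max_patient_risk_score": 0.2,                                       # low-risk cases only
}
audit_log = []  # reviewed periodically by the supervising physician


def ai_treat(patient_id: str, risk_score: float, proposed_intervention: str) -> str:
    """Act autonomously within the guardrails; otherwise refer to a physician. Log everything."""
    within_guardrails = (
        proposed_intervention in GUARDRAILS["allowed_interventions"]
        and risk_score <= GUARDRAILS["max_patient_risk_score"]
    )
    decision = "autonomous_treatment" if within_guardrails else "refer_to_physician"
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "patient_id": patient_id,
        "risk_score": risk_score,
        "intervention": proposed_intervention,
        "decision": decision,
    })
    return decision


print(ai_treat("pt-001", risk_score=0.1, proposed_intervention="oral_rehydration"))  # autonomous_treatment
print(ai_treat("pt-002", risk_score=0.6, proposed_intervention="oral_rehydration"))  # refer_to_physician
```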
Additionally, there are lots of low-risk treatments for low-risk conditions with low barriers to entry. Consider HIMS, which sells men's erectile dysfunction medications based on an online "assessment". The aggregate risk of a bad approval is pretty low, especially compared to the revenue opportunity. In fact, those decisions could already be 100% AI-automated today and most people would never know.
We're in the ML/NLP space... and *we* insist on a human in the loop. There are lots of reasons why, but one simple one is product liability. AI will not be autonomous until someone takes on the liability when it's wrong, until AI malpractice is tried in the courts, and until there's AI malpractice insurance. Until then, any software company that takes on that liability is foolish... and any hospital administrator who takes it on is going to lose insurance coverage.
By the way - we're a coding company. We stay far from the clinic despite proven results in sepsis prediction.