I recently asked a "good" model something that had been vexing me, and it gave me such a confident and plausibly detailed answer that I really did think I must have failed my searches. But then I asked for sources, and it literally confessed it had made the whole thing up. And I KNOW I'm an unusually skeptical grump, and not rushed like either clinicians or students!
Between that and the skeptics in my personal feeds, going into work and seeing AI all over health IT is somewhat dizzying.
Curious - 3 months ago
For those who say that none of these are problems, can you explain your reasoning? If there had been an "All of the above" option, I would have chosen that one. The risks differ by application. Ambient listening applications, for example, carry a risk of error, but it's mitigated if clinicians proofread the notes carefully before signing. (Though they likely won't do so.) Applications that synthesize other content clinicians are unfamiliar with are much riskier if they introduce errors and biases that are accepted as fact. Loss of skills occurred when automated EKG readings were introduced decades ago, so it won't be surprising if loss of skills occurs with overreliance on AI. Many things that would be helpful will be limited by added cost or character limits rather than by other factors. Like all new technologies, this one is likely to have good as well as adverse unintended consequences. To think otherwise seems naive at best.
Sheena - 3 months ago
The biggest risk is still doing all the work to implement an AI tool and then realizing you're not solving a real problem lol. We still don't take the time to investigate and deeply understand a problem before trying to slap a technical solution on it to "solve" it.
Sam - 3 months ago
Becoming heavily reliant on a technology that's not yet profitable, meaning the cost model is likely to change, is also a big risk, even if the optimistic capability projections (sales pitches) pan out.