JAMA follow-up: Benefits and Risks of Machine Learning Decision Support Systems
JAMA just published a series of responses to the “Unintended Consequences of Machine Learning in Medicine” editorial that had me pretty disappointed a few weeks ago. My rapid-fire thoughts on each of the (very short!) articles:
Berner (EdD) and Ozaydin (PhD)
Although the cautions raised in the Viewpoint by Cabitza and colleagues should not be ignored, they also should not be used to inhibit the development of potentially innovative systems that can improve clinical care.
A “yes, and” approach to ML in healthcare, not a “no, but” one! Hurray!
Licitra (MD) et al
… the unintended consequences can be viewed as opportunities to drive methodological changes in ML-DSS to improve health care.
We agree with Dr Cabitza and colleagues that much needs to be done before ML-DSS can be safely used; however, we do not consider the methodological difficulties as reasons to shy away from the theoretical challenges created by this technology.
Yes, use your knowledge and foresight to improve ML, not to fight against it! That said, this rebuttal focuses a lot on interpreting the black box of machine learning algorithms, which I’m not totally convinced is the best place to put our efforts. Interpretability is important, but I’d rather also focus on ensuring that we have equitable data, ways to identify bias, and so on. If I trust that my algorithm is unbiased and/or accurate, I’m less worried about understanding the “why” (especially given that human decision-making is also a relatively uninterpretable black box…).
Lasko (MD, PhD) et al
We argue that the negative consequences described by the authors are more often a product of misuse of machine learning, rather than anything intrinsic to its methods.
To us, the salient message of this conversation is the need for highly trained medical data scientists who have a deep understanding of both clinical medicine and computational methods.
Yes! We need clinicians and computationalists to work together and learn to speak each other’s languages!
Huesch (MBBS, PhD)
The authors’ conclusion raises the bar for artificial intelligence higher than that for new pharmaceuticals, medical devices, or changes in care delivery.
It is also true that good artificial intelligence is often a black box, but so is the physician gestalt that drives much human decision making.
Meh, I actually think it’s pretty fair to hold AI to a higher standard than humans, especially where AI provides only marginal benefit over the human equivalent. Questions of responsibility are more easily answered when humans are making decisions, so to me that justifies a lower burden of proof for humans than for machines.
Fogel (MBA) and Kvedar (MD)
The unintended consequence of artificial intelligence in medicine may mean that physicians will be able to focus on the tasks that are uniquely human: building trust-based relationships and applying reason and judgment to complex problems to help individual patients.
Yes! Use machines to do the things machines are better at so humans can focus on doing the things humans are better at!
Cabitza (PhD) et al, the original authors
Solutions to open the black box of ML-DSS can be considered technical, and we expect they will be refined in the future. More important are sociotechnical solutions, which must be endorsed and promoted at the management and policy level. These solutions are partly methodological, as pointed out by Licitra and colleagues, and partly related to training physicians better, as suggested by Berner and Ozaydin.
To me this reads like “Yeah, they all bring up good forward-looking points, but we were still right and we’re still grumpy.” tbh I’m not sure what to make of this response, but I don’t care that much because the rest of the conversation is really good! And in JAMA! Yay!