TY - JOUR
T1 - The need for uncertainty quantification in machine-assisted medical decision making
AU - Begoli, Edmon
AU - Bhattacharya, Tanmoy
AU - Kusnezov, Dimitri
N1 - Publisher Copyright:
© 2019. This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply.
PY - 2019/1/1
AB - Medicine, even from the earliest days of artificial intelligence (AI) research, has been one of the most inspiring and promising domains for the application of AI-based approaches. Equally, it has been one of the more challenging areas in which to achieve effective adoption. There are many reasons for this, chief among them the reluctance to delegate decision making to machine intelligence where patient safety is at stake. To address some of these challenges, medical AI, especially in its modern data-rich deep learning guise, needs to develop a principled and formal uncertainty quantification (UQ) discipline, just as we have seen in fields such as nuclear stockpile stewardship and risk management. The data-rich world of AI-based learning and the frequent absence of a well-understood underlying theory pose their own unique challenges to the straightforward adoption of UQ. These challenges, while not trivial, also present significant new research opportunities, both for the development of new theoretical approaches and for the practical application of UQ to machine-assisted medical decision making. Understanding prediction system structure and defensibly quantifying uncertainty are possible and, if done, can significantly benefit both research and practical applications of AI in this critical domain.
UR - http://www.scopus.com/inward/record.url?scp=85063370431&partnerID=8YFLogxK
DO - 10.1038/s42256-018-0004-1
M3 - Review article
AN - SCOPUS:85063370431
SN - 2522-5839
VL - 1
SP - 20
EP - 23
JO - Nature Machine Intelligence
JF - Nature Machine Intelligence
IS - 1
ER -