May 19, 2021
Artificial Intelligence in Medicine
Joseph Bae

WILL DOCTORS BE REPLACED BY ROBOTS?


The beginnings of modern computational artificial intelligence can be traced back to the mid-to-late 20th century, as basic machine learning algorithms and deep learning frameworks began to be developed. Even as early as the 1970s, an article in the New England Journal of Medicine titled "Medicine and the computer — the promise and problems of change" [1] was already predicting the impact that advanced computation might have on the field of medicine.

"Computing science will probably exert its major effects by augmenting and, in some cases, largely replacing the intellectual functions of the physician."
- New England Journal of Medicine, 1970.

Now, as a budding researcher and medical student, I find it astounding that machine learning has not been more widely integrated into clinical practice, particularly given the high rates of medical mistakes that occur in hospitals nationwide. A widely cited 2016 study performed by researchers at Johns Hopkins University [2] found that medical errors might be the third leading cause of death in the US. While there has been some pushback against this figure due to the article's perhaps overly simplistic methodology, it is clear that too many lives are lost each year due to preventable medical mistakes.

The breadth and severity of these mistakes are sometimes shocking (wrong-site surgical procedures and retained surgical instruments come to mind), but even the rate at which incorrect diagnoses and treatments are delivered by physicians (estimated to be around 15% [3]) would seem an obvious problem for artificial intelligence to address. For the remainder of this post I will investigate two reasons that might explain why it has not yet done so.

Lack of Data and Data Privacy

The most obvious challenge to widespread use and adoption of artificial intelligence in medicine is the general dearth of high-quality data.

There are numerous reasons for this, but among the most important is the threat of privacy breaches associated with studying private patient information. You are probably aware of the Health Insurance Portability and Accountability Act (HIPAA), under which protocols and standards have been set to safeguard personal health information. These protections are imperative for a functioning healthcare system, but they also make it difficult for researchers to obtain access to patient data to perform studies. This is particularly true for those wishing to conduct research on data from multiple hospitals or institutions, something that is vital to proving the generalizability and robustness of AI techniques in medicine. Multi-institutional datasets are also critical when studying low-prevalence or novel diseases, something that was observed at the start of the COVID-19 pandemic. While decentralized deep learning techniques (federated learning, distributed learning, split learning, etc.) are an active area of research, there are still few widely adopted approaches to studying data siloed at different hospitals and institutions. Publicly available, anonymized datasets are helpful, but they come with their own issues, including a lack of clinical information, difficulty in verifying authenticity, etc.
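To make the idea of decentralized learning concrete, here is a minimal toy sketch of federated averaging (FedAvg), one of the federated learning approaches mentioned above. The key property is that each "hospital" trains on its own private data locally and only model parameters are ever shared with a central server. The model (a one-variable linear fit), the data, and all parameter values are purely illustrative, not drawn from any real system.

```python
# Toy federated averaging (FedAvg) sketch: hospitals never share raw
# patient data; each trains locally and only weights are aggregated.
import random

def local_update(weights, data, lr=0.1, epochs=5):
    """One hospital's local training: per-sample gradient steps on
    squared error for the model y = w*x + b."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return (w, b)

def fed_avg(client_models, client_sizes):
    """Server step: average local models, weighted by dataset size."""
    total = sum(client_sizes)
    w = sum(m[0] * n for m, n in zip(client_models, client_sizes)) / total
    b = sum(m[1] * n for m, n in zip(client_models, client_sizes)) / total
    return (w, b)

# Three "hospitals", each holding private data from the rule y = 2x + 1.
random.seed(0)
hospitals = [
    [(x, 2 * x + 1) for x in (random.uniform(0, 1) for _ in range(20))]
    for _ in range(3)
]

global_model = (0.0, 0.0)
for _ in range(20):  # communication rounds
    local_models = [local_update(global_model, data) for data in hospitals]
    global_model = fed_avg(local_models, [len(d) for d in hospitals])
```

In a real deployment the "model" would be a deep network and the communication, encryption, and client-selection logic would dominate the engineering effort, but the core loop (local training, then a weighted average of parameters) is exactly this simple.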

Privacy concerns aside, many current electronic medical record (EMR) systems and picture archiving and communication systems (PACS; systems for storing medical images such as X-rays and MRIs) simply were not built with high-volume data transfer and study in mind. Obtaining patient data from these systems is often a painstaking process that can sometimes make certain studies impossible.

There are certainly many other problems that compound these issues and reduce the amount and quality of data available for research (non-uniform patient-handling and record-keeping procedures, resistance to data sharing in order to retain an academic monopoly, etc.), but in my understanding it is ultimately access to existing data that most directly impedes progress in medical AI research.

Trust and Resistance to Change

In my mind, the second most important issue facing widespread use of AI in medicine is the (perhaps justified) attitude towards machine learning techniques that is prevalent among healthcare providers and administrators. Recent surveys have suggested that physicians, medical students, and healthcare administration personnel all have mixed confidence in AI as a healthcare tool [4]. My own anecdotal experiences with colleagues and mentors have revealed a fair amount of distrust as well.

These doubts are not completely unfounded. Despite roots dating back to the 1950s, artificial intelligence is still a relatively mysterious topic to many, and current techniques do not always have the interpretability and transparency that might be desired in tools used to make medical decisions (the so-called "black box" of deep learning).

These are valid reasons to oppose the immediate use of AI in medical decision-making. There is perhaps some way to go before we can truly understand the mechanisms of many AI techniques in medicine, and the robustness and generalizability of any potential application must be proven on patients from a wide variety of geographic locations. However, more personal motives may also underlie some of the opposition to AI use in medicine.

As in other fields, AI has the potential to replace healthcare providers in the future, at least to an extent. There are already numerous studies suggesting improved performance of AI models over physicians, particularly in the area of medical imaging (where I conduct my own research). While we might not be replacing radiologists in the immediate future, there may be some concern from a minority that advances in AI might prove deleterious to future job prospects. In my humble opinion, this is largely unfounded; I strongly believe that AI techniques will be most commonly instituted to complement rather than supplant humans (at least at first). Nevertheless, mistrust in the relatively unexplored power of AI may be compounded by a dose of concern for personal occupational preservation.

In Conclusion

Upon rereading this post I have become aware that some of what I have written may be construed as over-dramatic and sensationalized. That's probably in part due to my style of writing and my own opinions about this subject (I want to reiterate that I am not yet an expert on these subjects, and don't claim complete impartiality in my writing). I do believe that AI should be further integrated into medicine, but I must also acknowledge the ways in which it is already being employed. In radiology and radiation oncology, roughly 80 FDA-approved AI tools and algorithms exist to aid in diagnosis, treatment planning, screening, etc. [4] However, a 2020 survey conducted by the American College of Radiology found that only around 30% of respondents (all radiologists) actually used AI in their practice, and over 90% of those who did reported that these tools performed inconsistently or unreliably [4]. Clearly, there is room for improvement in the development and adoption of these tools.

In my opinion, the two most critical steps forward will be the continued development of protocols, techniques, and regulations to allow further study of clinical data from multiple institutions, and increased opportunities to demonstrate the scientific soundness of AI approaches. There is an abundance of valuable clinical data out there; we simply need to develop the policies and tools that will allow us to make full use of it. Similarly, large-scale, robust experiments and clinical trials must be performed to showcase the potential for AI in medicine and to strengthen our trust in the generalizability and effectiveness of these tools in improving healthcare. I don't think that robots will be replacing doctors anytime soon, but I do think that moving in that direction will greatly improve the quality of patient care across the globe.

Works Cited

(1)
Schwartz, W. B. Medicine and the Computer — the Promise and Problems of Change. New England Journal of Medicine 1970, 283 (23), 1257–1264. https://doi.org/10.1056/NEJM197012032832305.
(2)
Makary, M. A.; Daniel, M. Medical Error—the Third Leading Cause of Death in the US. BMJ 2016, 353, i2139. https://doi.org/10.1136/bmj.i2139.
(3)
Berner, E. S.; Graber, M. L. Overconfidence as a Cause of Diagnostic Error in Medicine. The American Journal of Medicine 2008, 121 (5), S2–S23. https://doi.org/10.1016/j.amjmed.2008.01.001.
(4)
Allen, B.; Agarwal, S.; Coombs, L.; Wald, C.; Dreyer, K. 2020 ACR Data Science Institute Artificial Intelligence Survey. Journal of the American College of Radiology 2021, 0 (0). https://doi.org/10.1016/j.jacr.2021.04.002.