
Scenes from a Literature Review: Healthcare and AI

Updated: Mar 23, 2022


I'm having that moment of guilt you get when you realize your last blog post was close to two months ago! Time to dust off the cobwebs with an update.

Late 2019 was spent writing my literature review. The rest of the year was spent recovering from writing it! This major element marks the completion of the coursework for my degree, and I can now focus exclusively on my research project. The literature review will undergo revisions and additions, but it forms the foundation for that work.

The focus of my research is bringing an applied ethical framework to the development of AI applications within a healthcare context. Much of the work I'm doing now involves building relationships within the healthcare system and the AI research community. Next week, I'll be at the CIFAR town hall on AI and Health. I'll be at the AI, Ethics and Society conference in NYC in early February, and I'll be delivering a talk about my research progress so far at our student conference, the Rundle Summit, in late February.

My entire literature review is very long, so I thought I'd share an excerpt from the section on healthcare.

Healthcare and AI

Lower costs, improved patient outcomes and reduced workloads for clinicians: these promised outcomes represent an AI “holy grail” for solving the global healthcare crisis. Companies like Babylon Health, whose mission is to make healthcare affordable and accessible to every person on earth, are tapping into the power of big data and algorithms to deliver better care by tracking and monitoring health information in real time.

AI is shifting the approach to both health research and healthcare delivery. “Instead of extrapolating from the data obtained from a small number of samples…we can now use clinical data at the population level to provide a real-world picture.” (Ngiam & Khor, 2019, p. e263). Medical data sets include genetic information, imaging scans and real-time outputs from wearable sensors, all of which need AI in order to be turned into useful information. (Topol, 2019) In the US, the Food and Drug Administration (FDA) has already approved the use of algorithms for a range of medical interventions in areas as diverse as monitoring heart health, early intervention in stroke and detection of wrist fractures. (Mason, Morrison, & Visintini, 2018) In Canada, radiology, pathology and dermatology have been identified as key areas where AI can play an important role, as they align well with machine learning techniques focused on image detection and pattern recognition. (Mason et al., 2018) The upside potential for AI in healthcare is enormous, but there are downside consequences that need to be addressed, as the following examples illustrate.

Discriminatory data and poor design. Small errors in AI systems can have life and death consequences in the realm of healthcare. A popular commercial healthcare algorithm was shown to be biased, recommending less care for Black patients than for white patients, because it used healthcare spending as a proxy for the need for healthcare. (Obermeyer, Powers, Vogeli & Mullainathan, 2019) When this particular algorithm was rebuilt with new criteria that better represented need, recommendations for care for Black patients jumped from 18% to 47%. (Obermeyer et al., 2019) This error and the harm it caused may have been avoidable with workflow processes that reviewed models with an eye towards ethical concerns.
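To make the proxy problem concrete, here is a minimal, purely illustrative sketch in Python. The patient data and the flag_for_care helper are invented for illustration (they are not from the Obermeyer et al. study); the point is simply that ranking patients by spending rather than by a need-based measure can change who gets flagged for extra care.

# Minimal, hypothetical sketch of the proxy-label problem described by
# Obermeyer et al. (2019): ranking patients by healthcare *spending*
# rather than healthcare *need* can under-prioritize patients who incur
# lower costs at the same level of illness. All data below is invented.
patients = [
    {"group": "A", "chronic_conditions": 5, "annual_spend": 12000},
    {"group": "A", "chronic_conditions": 2, "annual_spend": 8000},
    {"group": "B", "chronic_conditions": 5, "annual_spend": 7000},  # same need, lower spend
    {"group": "B", "chronic_conditions": 2, "annual_spend": 2500},
]

def flag_for_care(records, label, top_n=2):
    """Flag the top_n patients ranked by whichever field is used as the 'risk' label."""
    return sorted(records, key=lambda r: r[label], reverse=True)[:top_n]

# Ranking by the spending proxy flags only group A patients; ranking by a
# need-based measure flags one patient from each group.
print([p["group"] for p in flag_for_care(patients, "annual_spend")])        # ['A', 'A']
print([p["group"] for p in flag_for_care(patients, "chronic_conditions")])  # ['A', 'B']

In the actual study, switching the label from cost to a measure of health need was what produced the jump in care recommendations described above.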

Privacy and consent. The values of privacy and consent deserve special attention when it comes to healthcare data. The sharing of 1.6 million patient records by a UK hospital as part of a partnership with DeepMind speaks to ethical concerns about transparency and consent in the use of data for developing AI healthcare applications. (Hern, 2017) On the flip side, AI health researchers such as Marzyeh Ghassemi have argued that a lack of big datasets impedes research progress and can lead to biased outcomes when inadequate or incomplete datasets are used to train algorithms. (C4E Journal, 2019) How best to balance preserving individual rights with enabling societal gains needs to be explored with respect to healthcare data. This is an ethical dilemma that deserves consideration from a broad set of stakeholders.

Impact on care relationships. Perhaps a more subtle issue is how AI impacts the clinician-patient relationship. For example, a system can be used to either promote or deter certain ethical ideals, such as shared decision making in healthcare. There is a danger in deferring to a system that has one pre-determined way of ranking treatment options and does not consider patient values or preferences. (McDougall, 2019) This approach can undermine patient autonomy and participation in decision making; conversely, if systems consider individual preferences as part of the design process, they could be used to enable and foster this ideal. (McDougall, 2019) Design choices can be made in AI systems to promote values that are deemed beneficial.

This is not an exhaustive list, but a few examples that highlight ethical concerns involving AI in healthcare. Outside of clinical settings, consumers are monitoring their own health and wellness through wearable devices like Fitbits or mobile apps that track everything from mental health to menstruation. These non-clinical devices are fueled by consumer demand and often receive little, if any, regulatory oversight, making them particularly vulnerable to misuse or unintended consequences. These consumer health technologies are compiling massive amounts of “granular personal health data” which is then “leveraged to inform personalized health promotion and disease treatment interventions.” (Nebeker, Torous & Bartlett Ellis, 2019, p. 1)

Technology is changing the nature of how we think about and deliver healthcare. In Alberta, we’re in the early days of implementing a new electronic health records system called ConnectCare, which brings together data from over 1,300 siloed software systems. (Gerein, 2019) ConnectCare is a one-stop platform that houses all patient records, giving clinicians a single point of access to information, sharing information amongst a patient’s care team, and providing decision-support alerts based on vital stats and other information coming directly from patients in real time. (Gerein, 2019) It’s a big step towards aggregating the data needed to deliver personalized, AI-enabled healthcare, making a discussion of ethics particularly timely for AI researchers in Alberta working in the healthcare domain.


By Katrina Ingram _______


Sign up for our newsletter to have new blog posts and other updates delivered to you each month!

Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com © 2020 Ethically Aligned AI Inc. All rights reserved.

____________

C4E Journal. (2019, October 1). Marzyeh Ghassemi, Can Machines Learn from Our Mistakes? [Video file]. Retrieved from https://c4ejournal.net/2019/10/03/marzyeh-ghassemi-can-machines-learn-from-our-mistakes-2019-c4ej-37/

Gerein, K. (2019, November 1). Keith Gerein: AHS’s $1.4-billion gambit to transform the health system begins in Edmonton. Edmonton Journal. Retrieved from https://edmontonjournal.com/news/politics/keith-gerein-ahss-1-4-billion-gambit-to-transform-the-health-system-begins-in-edmonton

Hern, A. (2017, July 3). Royal Free breached UK data law in 1.6m patient deal with Google’s DeepMind. The Guardian. Retrieved from http://www.theguardian.com/technology/2017/jul/03/google-deepmind-16m-patient-royal-free-deal-data-protection-act

McDougall, R. J. (2019). Computer knows best? The need for value-flexibility in medical AI. Journal of Medical Ethics, 45(3), 156–160. https://doi.org/10.1136/medethics-2018-105118

Mason, J., Morrison, A., & Visintini, S. (2018, September). An Overview of Clinical Applications of Artificial Intelligence. Canadian Agency for Drugs and Technologies in Health. Retrieved from https://www.cadth.ca/sites/default/files/pdf/eh0070_overview_clinical_applications_of_AI.pdf

Nebeker, C., Torous, J., & Bartlett Ellis, R. J. (2019). Building the case for actionable ethics in digital health research supported by artificial intelligence. BMC Medicine, 17(1), 137. https://doi.org/10.1186/s12916-019-1377-7

Ngiam, K. Y., & Khor, I. W. (2019). Big data and machine learning algorithms for health-care delivery. The Lancet Oncology, 20(5), e262–e273. https://doi.org/10.1016/S1470-2045(19)30149-4

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342

Pavlus, J. (2019, January 10). A new approach to understanding how machines think. Quanta Magazine. Retrieved from https://www.quantamagazine.org/been-kim-is-building-a-translator-for-artificial-intelligence-20190110/

Topol, E. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. New York, NY: Hachette Book Group.

