Enough Artificial Intelligence to Go Around: Policies Surrounding A.I. in Healthcare

Artificial intelligence (AI) may seem like a type of system developed to be free of human biases, but in reality the performance and output of AI systems depend heavily on the information used to train them. Take, for example, the recent report that an AI program designed by Amazon to screen job applications for the best candidates ended up discriminating against female applicants because the program was trained mainly on applications from men [1]. The information used to train AI systems can be laden with unseen characteristics that incorporate biases into the system output, and this can occur across the spectrum of AI applications, including healthcare.

Images compare skin cancer (melanoma, left) to normal moles (right). The initial diagnosis for melanoma is often visual, making it a key cancer to target using artificial intelligence systems. Esteva et al. [3] developed an artificial intelligence program to correctly assign images of moles as cancerous or not, with an ultimate goal of using a smartphone camera image to make an initial diagnosis before seeing a dermatologist.

AI in healthcare has the potential to revolutionize the way we access medication, treatment, and diagnoses. A variety of uses for AI in healthcare have been proposed. Major examples include detecting cancer and other diseases from images such as X-rays, or even photos taken on a cell phone [2, 3]; treating or preventing disease using electronic health record data [4, 5]; and assistants that handle routine tasks, freeing up medical providers for more important or urgent work [6]. These are all exciting advancements that researchers, physicians, and patients can envision in the hospital soon, with some tools already being implemented. However, early AI systems developed for healthcare are highly susceptible to training biases reminiscent of the Amazon hiring issue. For example, a Toronto startup that developed an algorithm to detect signatures of speech decline associated with Alzheimer’s disease noticed that the system worked only for individuals who spoke English as a first language, and more specifically, for individuals who speak a certain English dialect [7, 8]. The majority of research into human diseases is done in individuals of European ancestry [9], so the information used to train AI systems for healthcare is best able to identify characteristics of disease in these individuals. Sex and ethnic differences can play a substantial role in disease causes, progression, and outcomes, and a lack of diversity in human research can lead to biased output from AI algorithms.
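The training-data problem described above can be illustrated with a small simulation. Every number below is invented for illustration: a trivial one-feature "diagnostic" model is fit on a cohort that is 95% group A, and its accuracy is then measured separately on group A and on an underrepresented group B whose disease signal is shifted.

```python
import random

random.seed(0)

# Toy one-feature "diagnostic" setting; all numbers are invented.
# Group B's signal is offset from group A's, mimicking how disease
# presentation can differ across populations.
def sample(group, diseased, n):
    shift = 0.0 if group == "A" else 1.5
    mean = (2.0 if diseased else 0.0) + shift
    return [(random.gauss(mean, 1.0), diseased) for _ in range(n)]

# Training cohort is 95% group A -- the "mostly one ancestry" problem.
train = (sample("A", True, 475) + sample("A", False, 475)
         + sample("B", True, 25) + sample("B", False, 25))

# "Train" the simplest possible model: a midpoint decision threshold
# between the average diseased and average healthy measurements.
pos = [x for x, d in train if d]
neg = [x for x, d in train if not d]
threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(cases):
    return sum((x > threshold) == d for x, d in cases) / len(cases)

test_a = sample("A", True, 500) + sample("A", False, 500)
test_b = sample("B", True, 500) + sample("B", False, 500)
print(f"accuracy on group A: {accuracy(test_a):.2f}")
print(f"accuracy on group B: {accuracy(test_b):.2f}")
```

Because the learned threshold sits near group A's optimum but below group B's typical healthy measurement, many healthy group B cases are flagged as diseased: the model is noticeably less accurate on group B even though nothing is "wrong" with either group's data. The imbalance in the training cohort alone produces the gap.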

In addition to the need for diverse research and training data for AI systems, it should be noted that AI in healthcare is a provision, like other healthcare services. Lack of research and of access to healthcare are key factors contributing to the healthcare disparities already present today, which affect specific populations such as certain ethnic groups, socioeconomically disadvantaged individuals, and people with disabilities [10, 11]. One of the most well-known examples is sickle cell disease (SCD), which is associated with poor health outcomes resulting from inaccessible healthcare and insufficient treatment, stemming from relatively scarce research and physician education. Approximately 100,000 people in the U.S. have SCD, and the majority of carriers of the genetic mutation are African American or Black. Furthermore, roughly 60% of SCD patients have Medicaid, which is generally not accepted by healthcare providers in the regions where Medicaid is most widely used [12, 13]. Health disparities such as those surrounding SCD can be further exacerbated, or new ones introduced, through AI systems in healthcare. As the race to implement AI strategies in nearly every industry takes the world by storm, it is particularly important for healthcare initiatives to focus on diversity and inclusion in research and on access to quality healthcare.

Many developed nations have published detailed and extensive plans for becoming leaders in AI [14]. The major topics of the government guidelines available to the public include excellence and competition in teaching, developing, and producing AI, in addition to equity and ethics. While plans for the former have been concrete and implementable in the foreseeable future, the discussions surrounding equity and ethics remain considerations and recommendations, with a clear lack of tangible policies in place. For instance, the EU has detailed investment increases for the Horizon 2020 program; the initiation of the European Innovation Council, with a starting budget of €2.7 billion for AI investments; and other initiatives [15]. The ethics and equity section discusses the need for women and minorities in the research and development (R&D) side of AI; however, how this would be implemented in the form of guidelines or policies is not clearly delineated. Finland’s AI plan eloquently describes the need for equity and equality in distributing the benefits of AI, yet plans to implement programs or oversight committees are lacking [16]:

The requirement of responsibility also means that decision-making may not exacerbate structural inequalities found in society or otherwise do injustice or damage or cause suffering to individuals or groups of people.

It is encouraging that equity and ethics are being addressed, because these topics have not always been at the forefront of discussions in science and technology. However, this is not enough, as evidenced by a parallel in current human disease research: most studies are done in the largest existing human cohorts, which are composed mainly of Caucasian individuals [9], despite the known need for, and encouragement of, research in diverse populations. Without a concrete plan in place, awareness alone of the need for equity, inclusion, and equality cannot be assumed to achieve these objectives.

 

Along with rapid technological development comes the need to consider its immediate impact and to develop policies that keep new technologies safe, competitive, and fair. Monitoring is necessary to ensure that the AI systems being developed for healthcare are inclusive, and that they are not biasing outputs or missing key signs of disease because the information on which they were trained was incomplete. It is the job of governments and policy-makers, as well as AI researchers and developers, to recognize this need for constant supervision so that these types of issues do not go unnoticed. Rather than merely considering guidelines for ethics and equality in AI for healthcare, we need to truly prioritize implementing policies before rolling out new programs and technologies.

 

Kristina Garske

Staff Writer, Signal to Noise Magazine

PhD Candidate, Genetics and Genomics, UCLA

 

References

[1] Hamilton, I. Amazon built an AI tool to hire people but had to shut it down because it was discriminating against women. https://www.businessinsider.com/amazon-built-ai-to-hire-people-discriminated-against-women-2018-10 (2018).

[2] Xiong et al. Automatic detection of Mycobacterium tuberculosis using artificial intelligence. J Thorac Dis. 10, 1936-1940 (2018).

[3] Esteva et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 542, 115–118 (2017).

[4] Murray et al. Using Spatial and Temporal Mapping to Identify Nosocomial Disease Transmission of Clostridium difficile. JAMA Intern Med. 177, 1863-1865 (2017).

[5] Flagler Hospital uses AI to create clinical pathways that enhance care and slash costs. https://www.healthcareitnews.com/news/flagler-hospital-uses-ai-create-clinical-pathways-enhance-care-and-slash-costs (2018).

[6] Using artificial intelligence to make a real difference. https://www.nuance.com/healthcare/artificial-intelligence.html (2018).

[7] Fraser, K.C., Meltzer, J.A. & Rudzicz, F. Linguistic Features Identify Alzheimer's Disease in Narrative Speech. J Alzheimers Dis. 49, 407-422 (2016).

[8] Gershgorn, D. If AI is going to be the world’s doctor, it needs better textbooks. https://qz.com/1367177/if-ai-is-going-to-be-the-worlds-doctor-it-needs-better-textbooks/ (2018).

[9] Popejoy, A.B. & Fullerton, S.M. Genomics is failing on diversity. Nature (Comment). 538, 161-164 (2016).

[10] U.S. Department of Health & Human Services. Elimination of Health Disparities. https://www.surgeongeneral.gov/priorities/prevention/strategy/elimination-of-health-disparities.html (2017).

[11] Feagin, J. & Bennefield, Z. Systemic racism and U.S. health care. Soc Sci Med. 103, 7-14 (2014).

[12] Centers for Disease Control and Prevention (CDC). Data & Statistics on Sickle Cell Disease. https://www.cdc.gov/ncbddd/sicklecell/data.html (2017).

[13] Stone, J. Sickle Cell Disease Highlights Racial Disparities In Healthcare. https://www.forbes.com/sites/judystone/2015/06/19/sickle-cell-disease-highlights-racial-disparities-in-healthcare/#20ea1f7d3b75 (2015).

[14] Dutton, T. An Overview of National AI Strategies. https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd (2018).

[15] European Commission. Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions. http://ec.europa.eu/newsroom/dae/document.cfm?doc_id=51625 (2018).

[16] Publications of the Ministry of Economic Affairs and Employment. Work in the age of artificial intelligence. http://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/160980/TEMjul_21_2018_Work_in_the_age.pdf (2018).