Chapter 35. Ethics of Artificial Intelligence in Transport (Bryant Walker Smith)
https://www.doi.org/10.1093/oxfordhb/9780190067397.013.42
- Alawadhi, M., et al. (2020). Review and analysis of the importance of autonomous vehicles liability: A systematic literature review. International Journal of System Assurance Engineering and Management, 11, 1227-1249. https://doi.org/10.1007/s13198-020-00978-9
- This article provides a systematic review of scholarship focused on automated vehicle (AV) liability. The authors note that the greatest emphasis on this topic is found in the fields of law and transport. They find this literature emphasizes large, developed economies. They summarize this research by concluding that liability depends on many situated elements including the level of vehicle autonomy and environmental externalities.
- Andersen, K. E., et al. (2017). Do we blindly trust self-driving cars? In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (pp. 67-68). Association for Computing Machinery. https://doi.org/10.1145/3029798.3038428
- This paper reports the findings of a study examining the role of trust in the adoption of artificially intelligent technologies. In a study of simulated autonomous driving scenarios, the researchers observed that passengers were often too trusting of AI in cases of emergency where human intervention would have been necessary to prevent harm.
- Baumann, M. F., et al. (2019). Taking responsibility: A responsible research and innovation (RRI) perspective on insurance issues of semi-autonomous driving. Transportation Research Part A: Policy and Practice, 124, 557-572. https://doi.org/10.1016/J.TRA.2018.05.004
- The authors argue that the responsible research and innovation (RRI) framework is useful for insurance companies and policymakers navigating the emergence of a market for semi-autonomous vehicles. Their approach encourages awareness among decision makers of the potential “ethical, societal, or historical” impacts of technology for stakeholders. RRI thus helps to surface and specify how insurers can and should encourage ethical innovation.
- Bonnefon, J. F., et al. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573-1576. https://doi.org/10.1126/science.aaf2654
- This study considers the social dilemmas that arise in autonomous driving accident scenarios and observes the effect of pre-programmed accident decisions on passenger choices in automated vehicles. In six studies, participants favored self-sacrificing utilitarian AVs but admitted that they would not ride in them. Participants were also shown to disapprove of any regulation that enforced a utilitarian regime for AV algorithms, leading the researchers to conclude that such regulation could increase vehicular fatalities by delaying the adoption of a safer technology.
- Borenstein, J., et al. (2019). Self-driving cars and engineering ethics: The need for a system-level analysis. Science and Engineering Ethics, 25, 383–398. https://doi.org/10.1007/s11948-017-0006-0
- This paper argues that individual-level analyses are insufficient for determining the impacts of AI on human life and society. The authors argue that current ethical discussions on transportation and automation must be considered alongside a system-level analysis that considers the interaction between other vehicles and existing transportation systems. The authors observe the need for analysis of instantaneous and coordinated decisions by cars, groups of cars, and other technologies, and worry that a rush toward AVs without coordinated system-level policy and legal considerations could compromise safety and consumer autonomy.
- Coca-Vila, I. (2018). Self-driving cars in dilemmatic situations: An approach based on the theory of justification in criminal law. Criminal Law and Philosophy, 12(1), 59-82. https://doi.org/10.1007/s11572-017-9411-3
- This article considers dilemmatic decisions in the context of automated driving and draws from the logic of criminal law to argue for a deontological approach in algorithmic decision-making. The author argues against the common utilitarian logic on the grounds that the maximization of social utility cannot justify negative interference in a person’s legal sphere under a legal system that recognizes individualistic freedoms, rights and responsibilities.
- Contissa, G., et al. (2017). The ethical knob: Ethically-customizable automated vehicles and the law. Artificial Intelligence and Law, 25(3), 365-378. https://doi.org/10.1007/s10506-017-9211-z
- This article reconsiders the notion of pre-programmed AVs by theorizing the ‘ethical knob,’ which enables users to customize their vehicle by choosing among moral principles that the vehicle would act upon in accident scenarios. The vehicle would thus be trusted to act on the user’s decision, and the manufacturer would be expected to program the vehicle accordingly. The authors subsequently address the evident issues of ethics, law, and liability that would arise from such a proposal; a minimal sketch of the knob appears below.
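The knob itself is easy to picture. Below is a minimal sketch assuming a single scalar setting that linearly trades passenger harm against third-party harm; the function names, harm estimates, and linear weighting are illustrative assumptions, not the authors' specification.

```python
from typing import List, Tuple

def choose_maneuver(maneuvers: List[Tuple[str, float, float]], knob: float) -> str:
    """Select the maneuver with the lowest knob-weighted expected harm.

    Each maneuver is (label, expected_passenger_harm, expected_third_party_harm),
    with harm estimates assumed to come from the vehicle's prediction stack.
    knob = 0.0 is fully altruistic (only others' harm counts), 0.5 is
    impartial/utilitarian, and 1.0 is fully egoistic, mirroring the three
    settings discussed in the article.
    """
    def score(m: Tuple[str, float, float]) -> float:
        _, passenger_harm, third_party_harm = m
        return knob * passenger_harm + (1.0 - knob) * third_party_harm

    return min(maneuvers, key=score)[0]

# Example: swerving harms the passenger; braking in lane harms a pedestrian more.
options = [("swerve_into_barrier", 0.8, 0.0), ("brake_in_lane", 0.1, 0.6)]
print(choose_maneuver(options, knob=0.5))  # impartial  -> brake_in_lane
print(choose_maneuver(options, knob=0.0))  # altruistic -> swerve_into_barrier
```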
- Consilvio, A., et al. (2019). On exploring the potentialities of autonomous vehicles in urban spatial planning. In 2019 6th International Conference on Models and Technologies for Intelligent Transportation Systems (MT-ITS) (pp. 1-7). Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/MTITS.2019.8883388
- This conference paper focuses on how the introduction of autonomous vehicles (AVs) opens opportunities to optimize urban road networks to create more space for more sustainable forms of “soft mobility” like walking and cycling. The authors frame this as a network design problem and conduct a case study to show how non-essential nodes in AV road networks can be repurposed for sustainable and active transport.
- Caro, R. A. (1974).* The power broker: Robert Moses and the fall of New York. Alfred A. Knopf.
- This biography recounts the career of Robert Moses, a prominent public official in the urban planning and development of New York City. As an urban developer, Moses played a significant role in shaping the New York metropolitan area and affected many lives. The author reveals how his planning produced an arid urban landscape full of public housing failures and barriers to humane living, which (among other things) contributed to his eventual fall from power. In spite of these failures, Moses was able to accomplish his ‘ideal’ urban plan, whose effects are still felt in New York today.
- Cunneen, M., et al. (2020). Autonomous vehicles and avoiding the trolley (dilemma): Vehicle perception, classification, and the challenges of framing decision ethics. Cybernetics and Systems, 51(1), 59-80.
- This article offers a discussion of autonomous vehicle (AV) ethics that is grounded in state-of-the-art developments. The authors start with the ethics of fine-grained classification and how it enables bias in dilemma scenarios. They then explain how increasing collection and sharing of data could be beneficial to AV safety while simultaneously eroding privacy and ownership.
- Dawid, H., & Muehlheusser, G. (2019). Smart products: Liability, investments in product safety, and the timing of market introduction (CESifo Working Paper No. 7673). CESifo Group. https://www.econstor.eu/bitstream/10419/201899/1/cesifo1_wp7673.pdf
- The authors develop an economic model to understand how product liability impacts innovation. They demonstrate policy trade-offs related to the pace of innovation and the level of safety that result from placing more stringent liability on autonomous vehicle (AV) manufacturers. They conclude that safety regulation is a better option for overall social welfare when compared to the expansion of liability regimes.
- Dietrich, M., & Weisswange, T. H. (2019). Distributive justice as an ethical principle for autonomous vehicle behavior beyond hazard scenarios. Ethics and Information Technology 21(3), 227-239. https://doi.org/10.1007/s10676-019-09504-3
- The authors propose that ethics should be a primary consideration for any algorithmic decision that is made by an autonomous vehicle (AV) rather than only a consideration for crash scenarios. Given that an AV’s actions affect other road users, the authors argue that a framework which mobilizes distributive justice principles is ultimately more desirable than an automated “egoistic decision maker.”
- Douma, F. (2004).* Using ITS to better serve diverse populations. Minnesota Department of Transportation Research Services. https://conservancy.umn.edu/handle/11299/1138
- This report investigates how intelligent transportation systems (ITS) can serve the needs of populations that are otherwise unaddressed by conventional transportation planning. The author observes that current transport planning centers on the private car and acknowledges that this mode of transport is insufficient for diverse populations for whom cars may be inaccessible. The author presents demographic and survey data on those who would benefit most from ITS applications.
- Epting, S. (2019). Transportation planning for automated vehicles—Or automated vehicles for transportation planning? Essays in Philosophy, 20(2), 189-205. https://doi.org/10.7710/1526-0569.1635
- This paper considers the trend of transport planning that centers itself around automated vehicles (AVs) rather than incorporating them into existing mobility goals. The author observes that self-driving technology is often perceived as a solution for all urban mobility problems but argues that this view often leads to planning that prioritizes AVs rather than planning that uses AVs as a means to achieve broader transit goals. The author argues that transport developers should instead focus on planning that is human-centric and aims at sustainability and transportation justice.
- Ethics Commission on Automated and Connected Driving. (2017).* Automated and connected driving. German Federal Ministry of Transport and Digital Infrastructure. https://www.bmvi.de/SharedDocs/EN/publications/report-ethics-commission.html
- This publication provides a general overview of the ethical and legal problems of automated and connected driving. It provides twenty guidelines for automated driving, examines the ethical and legal policy decisions that arise in programming autonomous driving software, and considers how these can be addressed without displacing the human from the center of AI legal regimes.
- Etienne, H. (2022). When AI ethics goes astray: A case study of autonomous vehicles. Social Science Computer Review, 40(1), 236-246.
- This paper criticizes the use of voting-based systems (VBS) like the Moral Machine (MM) for addressing the ethics of autonomous vehicles (AVs). The author argues that VBS rely on aggregated individual preferences, which may be biased, misinformed, flippant, and more prone to moral “mistakes” than ethicists. The author sees MM as an illegitimate way of side-stepping ethical discourse to deliver safer roads more quickly, and further criticizes the moral incentives for taking this shortcut as distracting from more pressing social issues.
- Evans, K., et al. (2020). Ethical decision making in autonomous vehicles: The AV ethics project. Science and Engineering Ethics, 26, 3285–3312. https://doi.org/10.1007/s11948-020-00272-8
- The authors propose ‘Ethical Valence Theory’ for decision making in autonomous vehicles (AVs). Within this framework, they argue that one can quantify and hierarchize individual road users’ moral claims relative to an AV to mitigate unethical outcomes in the case of a crash. The authors describe how different road users hold distinct moral claims to safety (for example, pedestrians and passengers hold different claims) before outlining an “ethical deliberation algorithm” that can make decisions based upon this hierarchy; a toy version of such a deliberation is sketched below.
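The following is a toy illustration of the idea, assuming hypothetical claim strengths and a lexicographic "strongest claim first" comparison; Evans et al.'s actual algorithm and valence values may differ substantially.

```python
# Hypothetical claim strengths: a higher value marks a stronger moral claim
# to safety. Both the numbers and the lexicographic rule below are
# illustrative assumptions, not the deliberation procedure from the paper.
CLAIM_STRENGTH = {"pedestrian": 3, "cyclist": 2, "passenger": 1}

def deliberate(actions: dict) -> str:
    """Return the action whose harm profile is best, comparing harms to
    users with stronger claims before harms to users with weaker claims."""
    order = sorted(CLAIM_STRENGTH, key=CLAIM_STRENGTH.get, reverse=True)
    def profile(harms: dict) -> tuple:
        return tuple(harms.get(user, 0.0) for user in order)
    return min(actions, key=lambda name: profile(actions[name]))

actions = {
    "swerve": {"pedestrian": 0.0, "passenger": 0.5},
    "brake":  {"pedestrian": 0.3, "passenger": 0.1},
}
print(deliberate(actions))  # -> swerve: the pedestrian's stronger claim prevails
```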
- Faulhaber, A., et al. (2019). Human decisions in moral dilemmas are largely described by utilitarianism: Virtual car driving study provides guidelines for autonomous driving vehicles. Science and Engineering Ethics, 25(2), 399-418. https://doi.org/10.1007/s11948-018-0020-x
- This article outlines a study that subjected participants to a variety of trolley dilemmas in simulated driving environments. The authors observed that participants generally decided based on a utilitarian principle that minimized overall harm across all parties. They argue that this study and its results can provide a justified basis for mandatory utilitarian regimes in all autonomous vehicles, as opposed to customized ethical settings, which could yield greater harm in accident scenarios.
- Geisslinger, M., et al. (2021). Autonomous driving ethics: From trolley problem to ethics of risk. Philosophy & Technology, 34(4), 1033-1055.
- This article explains the limitations of existing ethical theories for autonomous vehicles (AVs). The authors focus on the ethics of risk in a trolley-problem setting and present failure cases for Bayesian (best average risk), Equality (least difference in risks), and Maximin (best worst-case risk) strategies. Finally, they propose and discuss the benefits of a new objective for ethical AVs that is a weighted summation of these three; a schematic formalization follows below.
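In notation assumed here for illustration (not the authors'), let $r_i(a)$ be the risk that maneuver $a$ imposes on road user $i$ out of $n$. The combined objective the authors describe could then take the form:

$$
\min_a \; \Big[\, w_B \cdot \frac{1}{n}\sum_{i=1}^{n} r_i(a) \;+\; w_E \cdot \big(\max_i r_i(a) - \min_i r_i(a)\big) \;+\; w_M \cdot \max_i r_i(a) \,\Big]
$$

where the three terms correspond to the Bayesian (average risk), Equality (risk spread), and Maximin (worst case) principles, and the non-negative weights $w_B$, $w_E$, $w_M$ set the trade-off. The paper's exact functional forms may differ.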
- Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for crashes of autonomous vehicles: An ethical analysis. Science and Engineering Ethics, 21(3), 619-630.
- This article begins by arguing that limiting the responsibility of autonomous vehicle (AV) manufacturers for AV accidents may ease investment and progress in AV development, which could lead to safer roads. The authors propose holding drivers accountable for failure to intervene in accidents, as near-term AVs are not expected to be truly autonomous or reliable. For the case where AV technology is mature, the authors propose a strict liability scheme in which consumers share the inevitable risks of AVs in the form of a tax or mandatory insurance.
- Himmelreich, J. (2018). Never mind the trolley: The ethics of autonomous vehicles in mundane situations. Ethical Theory and Moral Practice, 21(3), 669-684. https://doi.org/10.1007/S10677-018-9896-4
- Countering the centrality of the “Trolley Problem” in autonomous vehicle (AV) ethics, the author argues that the more banal, operational aspects of AVs are more relevant due to the higher granularity of ethical detail required and the enormous scale of everyday ethical considerations. The author begins with a critique of the Trolley Problem as both exceptional and irrelevant before detailing everyday ethical AV challenges like balancing safety against social outcomes and legal liability.
- Kallioinen, N., et al. (2019). Moral judgements on the actions of self-driving cars and human drivers in dilemma situations from different perspectives. Frontiers in Psychology, 10, 2415.
- This article studies how moral judgements on the actions of autonomous vehicles (AVs) vs humans differ in dilemma situations by virtually simulating these situations in user studies. The authors find that moral judgements from different perspectives are mostly similar between these two categories. One significant difference is that participants show less of a self-preservation bias when judging the behavior of AVs, i.e. they consistently prefer AVs to minimize overall harm in dilemma situations.
- Kalra, N., & Groves, D. G. (2017).* The enemy of good: Estimating the cost of waiting for nearly perfect automated vehicles. Rand Corporation.
- The authors describe the risks and rewards of autonomous vehicles and ask how safe autonomous vehicles must be before they are deployed for consumer use. The authors use a RAND model of automated vehicle safety to compare vehicular fatalities when self-driving vehicles are cleared for use at various levels of capability relative to human ability. They conclude that waiting for the technology to approach perfection leads to higher cumulative fatalities and greater human costs than deploying vehicles once they are merely safer than human drivers.
- Keeling, G. (2020). Why trolley problems matter for the ethics of automated vehicles. Science and Engineering Ethics, 26, 293-307. https://doi.org/10.1007/s11948-019-00096-1
- The author argues in favor of incorporating trolley scenarios into ethical considerations related to autonomous vehicles (AVs). The paper is structured around the author’s refutation of four common arguments against the usefulness of trolley problems: that they are not likely scenarios; that trolley problems ignore salient moral aspects of likely crash scenarios; that trolley scenarios impose a top-down solution onto decision making algorithms that are shaped from the bottom up; and that trolley problems are asking the wrong questions about the moral values that should be programmed into AVs.
- Lim, H. S. M., & Taeihagh, A. (2019). Algorithmic decision-making in AVs: Understanding ethical and technical concerns for smart cities. Sustainability, 11(20), 5791.
- This article examines how algorithmic decision-making in autonomous vehicles (AVs) creates concerns of discrimination and safety. The authors cite bias in machine learning (ML), ongoing disputes on ethical guidelines, and perverse financial incentives as key issues in ethics of AV. They also discuss the important technical challenges of improving contemporary ML perception, control, and safety verification.
- Liu, P., & Liu, J. (2021). Selfish or utilitarian automated vehicles? Deontological evaluation and public acceptance. International Journal of Human–Computer Interaction, 37(13), 1231-1242.
- This article surveys whether people prefer automated vehicles (AVs) that are utilitarian or selfish, and whether they are willing to pay a premium for their preferred type. The authors find that people have a significant self-preservation bias and prefer selfish AVs. The study also reveals broadly negative perceptions of AVs, as participants expressed limited intention to adopt either AV type.
- Lundgren, B. (2021). Safety requirements vs. crashing ethically: What matters most for policies on autonomous vehicles. AI & Society, 36(2), 405-415.
- This article reviews the shortcomings of studying the ethics of dilemma scenarios for autonomous vehicles (AVs). The author argues that contemporary ethics of crashing are overly idealized and futuristic, face difficulty translating human preferences into machine behavior, and are contentious and prone to encoding ill-considered public preferences. Taking a step back, the author suggests researchers first focus on AV safety, which is a precondition for the practical relevance of studying ethical AV crashing.
- Martinho, A., et al. (2021). Ethical issues in focus by the autonomous vehicles industry. Transport Reviews, 41(5), 556-577.
- This article studies the engagement of industry with the ethics of autonomous vehicles (AVs) and how it compares to the focus of academia. The authors find that the AV industry avoids directly addressing the dramatic and unlikely trolley-problem scenarios that are popular in AV ethics research, and is instead more concerned with legal liability, accountability, and safety.
- Millán-Blanquel, L., et al. (2020). Ethical considerations for a decision making system for autonomous vehicles during an inevitable collision. In 2020 28th Mediterranean Conference on Control and Automation (MED) (pp. 514-519). Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/MED48518.2020.9183263
- The authors of this conference paper put forth a proposal to “solve the issue” of ethics in autonomous vehicle (AV) decision-making. They outline a pre-programmed AV system with six settings based upon formal ethical theories. They apply these six ethical settings to eight different scenarios, providing an overview of how humans and objects are valued under each setting. Through this framework, the authors propose that AV ethics should be chosen by AV users, within the bounds of the law.
- Millard-Ball, A. (2018). Pedestrians, autonomous vehicles, and cities. Journal of Planning Education and Research, 38(1), 6-12. https://doi.org/10.1177/0739456X16675674
- This article considers the interactions between autonomous vehicles and pedestrians in crosswalk yield scenarios. Drawing on a model of these interactions, the author argues that the risk-averse nature of autonomous vehicles will confer impunity on pedestrians, which may transform automobile-oriented urban neighborhoods into pedestrian-oriented ones. The author notes that with the increased desirability of walking as a form of transportation in pedestrian-oriented cities, the advantages of autonomous driving systems could become questionable.
- Nyholm, S., & Smids, J. (2018). Automated cars meet human drivers: Responsible human-robot coordination and the ethics of mixed traffic. Ethics and Information Technology. https://doi.org/10.1007/s10676-018-9445-9
- This paper discusses issues of ethics and responsibility that arise from coordination problems in mixed traffic conditions between human and self-driven vehicles. The authors compare human and AI driving patterns to argue that there must be more focus on the ethics of mixed traffic and human-AI interaction.
- Othman, K. (2021). Public acceptance and perception of autonomous vehicles: A comprehensive review. AI and Ethics, 1(3), 355-387.
- This article reviews and summarizes multiple studies on public acceptance and perception of autonomous vehicles (AVs). The author finds that acceptance is contingent on the resolution of ethical dilemmas, strong cybersecurity, and lowering the legal liability of AV passengers. The author also finds that males, young people, those with higher education, or those with previous AV experience tend to view AVs more positively.
- Papa, E., & Ferreira, A. (2018). Sustainable accessibility and the implementation of automated vehicles: Identifying critical decisions. Urban Science, 2(1), 5. https://doi.org/10.3390/urbansci2010005
- This article argues that AVs can negatively affect everyday life in a variety of ways that must be heavily scrutinized. The authors argue that AVs have the potential to seriously aggravate accessibility issues, and they identify critical decisions that must be made in order to capitalize on the possible accessibility benefits (rather than costs) of the technology.
- Rothstein, R. (2017).* The color of law: A forgotten history of how our government segregated America. Liveright Publishing.
- This book provides an analysis of contemporary racial segregation throughout American neighborhoods and argues that this segregation is the result of deliberate government policy rather than commonly referenced factors of wealth and societal prejudice. The author argues that these policies have systematically discriminated against Black communities rendering a direct effect on current wealth and education gaps between Black and white Americans.
- Rhim, J., et al. (2020). Human moral reasoning types in autonomous vehicle moral dilemma: A cross-cultural comparison of Korea and Canada. Computers in Human Behavior, 102, 39-56. https://doi.org/10.1016/J.CHB.2019.08.010
- The authors provide a cultural comparison of how autonomous vehicle (AV) decision-making aligns with different moral values in Korea (as a collectivist culture) and Canada (an individualist culture). Using content from in-depth interviews, the authors identified 32 moral codes and used a k-means cluster analysis to derive three moral types. They conclude that the consideration of morality in AV regulation requires attentiveness to cultural sensitivity and pluralism due to differences in the proportion of moral reasoning types among Korean and Canadian study participants. (The clustering step is sketched below.)
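The clustering step itself is standard; a minimal sketch under assumed data shapes (one binary endorsement vector of 32 moral codes per participant) might look as follows. The actual study's preprocessing and distance choices may differ.

```python
# Minimal sketch: cluster participants' endorsement of 32 moral codes into
# 3 "moral types" with k-means. The data here is a random stand-in, purely
# to illustrate the shape of the analysis described in the annotation above.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
codes = rng.integers(0, 2, size=(120, 32)).astype(float)  # participants x codes

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(codes)
moral_type = kmeans.labels_      # one cluster label ("moral type") per participant
print(np.bincount(moral_type))   # size of each moral-type cluster
```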
- Ryan, M. (2019). The future of transportation: Ethical, legal, social and economic impacts of self-driving vehicles in the year 2025. Science and Engineering Ethics. https://doi.org/10.1007/s11948-019-00130-2
- This article offers a forward-looking analysis of the development of automated vehicles (AVs) between 2019 and 2025. The author extrapolates the current trajectory of AV technology and policy development to construct a vision of the likely future in 2025. The paper considers legal, social, and economic implications of AV deployment, including privacy, liability, data governance, and safety. The author aims to show how policymakers’ current actions will shape the future development of AVs.
- SAE International. (2016).* Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. https://www.sae.org/standards/content/j3016_201806/
- This document explains automated driving systems that perform the ‘dynamic driving task’ and provides a full taxonomy of relevant definitions and categories of driving automation, ranging from no automation (level 0) to full automation (level 5). The terms are intended to help the autonomous driving industry maintain coherence and consistency when referring to driving systems; the six levels are summarized in the sketch below.
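For quick reference, the six levels can be rendered as follows (descriptions paraphrased from common readings of J3016; the standard itself is authoritative):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels (paraphrased, not verbatim)."""
    NO_AUTOMATION = 0           # human driver performs the entire dynamic driving task
    DRIVER_ASSISTANCE = 1       # system assists with steering OR speed, not both
    PARTIAL_AUTOMATION = 2      # system controls steering AND speed; driver supervises
    CONDITIONAL_AUTOMATION = 3  # system drives; human must take over when requested
    HIGH_AUTOMATION = 4         # no human fallback needed within a defined design domain
    FULL_AUTOMATION = 5         # system can drive under all driver-manageable conditions
```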
- Smith, B. W. (2017).* How governments can promote automated driving. New Mexico Law Review, 47(1), 99-138. http://ssrn.com/abstract=2749375
- This article recognizes the common desire among governments to accelerate the development and deployment of automated driving technologies in their respective jurisdictions and provides steps that governments can take to encourage this process. The author argues that governments must do more than pass ‘autonomous driving laws’ and should instead take a nuanced approach that recognizes the varied technologies, applications, and bodies of law that apply to autonomous vehicles.
- Smith, B. W. (2015).* Regulation and the risk of inaction. In M. Maurer, J. Gerdes, B. Lenz, & H. Winner (Eds.), Autonomes Fahren (pp. 593-609). Springer.
- This article considers how risk is allocated under uncertainty, and who determines that allocation, in the context of autonomous driving. The author focuses on the role that legislatures, administrative agencies, and courts play in developing relevant rules, regulations, or verdicts, and proposes eight strategies that can serve as a meta-regulation of these processes in the context of autonomous driving.
- Sparrow, R., & Howard, M. (2017). When human beings are like drunk robots: Driverless vehicles, ethics, and the future of transport. Transportation Research Part C: Emerging Technologies, 80, 206-215. https://doi.org/10.1016/j.trc.2017.04.014
- This article pushes back against the prevailing narrative that autonomous vehicles will save lives by observing that many automated systems depend on human supervision, which produces more dangerous outcomes than anticipated. However, the authors argue that once vehicles become fully autonomous, manual driving will no longer be morally permissible.
- Taeihagh, A., & Lim, H. S. M. (2019). Governing autonomous vehicles: Emerging responses for safety, liability, privacy, cybersecurity, and industry risks. Transport reviews, 39(1), 103-128. https://doi.org/10.1080/01441647.2018.1494640
- This article assesses the risks of automated vehicles and available solutions for governments to address them. The authors conclude that governments have largely avoided stringent and legally-binding measures in an effort to encourage future AI development. They provide some data and analysis from the US, UK, and Germany to observe that while these countries have taken some steps toward legislation, most others have not implemented any specific strategy that acknowledges issues presented by AI.
- Uniform Law Commission. (2019).* Uniform automated operation of vehicles act. https://www.uniformlaws.org/committees/communityhome?CommunityKey=4e70cf8e-a3f4-4c55-9d27-fb3e2ab241d6
- This is a proposed legislative document that concerns the regulation and operation of autonomous vehicles. The act covers the deployment and licensing process of automated vehicles on public roads and attempts to adapt existing US vehicle codes to accommodate this deployment. The act also stresses the need for a legal entity to address issues of vehicle licensing, ownership, liability, and responsibility.
- United Nations Global Forum for Road Traffic Safety. (2018).* Resolution on the deployment of highly and fully automated vehicles in road traffic. https://undocs.org/pdf?symbol=en/ECE/TRANS/WP.1/2018/4/REV.3
- This is a UN resolution that is dedicated to road safety and the safe deployment of self-driving technologies on public roads. The resolution is not legally binding but intended to serve as a guide for nations dealing with the implementation of autonomous technologies. It offers recommendations to ensure safe interaction between autonomous and conventional driving technology.
- United States Department of Transportation. (2018).* Preparing for the future of transportation: Automated vehicles 3.0. https://www.transportation.gov/av/3
- This is the third iteration of a report developed by the US Department of Transportation (DOT) which is intended to highlight the DOT’s interest in promoting safe, reliable and cost-effective deployment of automated technologies into various modes of surface transportation. The report includes six principles to guide policy and five strategies for implementation based on the principles.
- Wolkenstein, A. (2018). What has the trolley dilemma ever done for us (and what will it do in the future)? On some recent debates about the ethics of self-driving cars. Ethics and Information Technology, 20(3), 163-173. https://doi.org/10.1007/s10676-018-9456-6
- This article considers how the trolley problem is often cited in literature and public debates related to autonomous vehicles by claiming to provide practical guidance on AI ethics for self-driving cars. Through an analysis of relevant sources, the author argues that although the philosophical considerations bestowed by the trolley problem may be theoretically worthwhile, the trolley problem is ultimately unhelpful in programming and passing legislation for automated driving technologies.
Chapter 36. The Case for Ethical AI in the Military (Jai Galliott and Jason Scholz)
https://www.doi.org/10.1093/oxfordhb/9780190067397.013.43
- Abaimov, S., & Martellini, M. (2020). Artificial intelligence in autonomous weapon systems. In M. Martellini & R. Trapp (Eds.), 21st Century prometheus: Managing CBRN safety and security affected by cutting-edge technologies (pp. 141-177). Springer.
- The chapter reviews the impact of AI on autonomy and explores cyber vulnerabilities in autonomous technologies. The authors highlight critical issues in the use of AI in autonomous weapon systems (AWS) and reveal several legal complications and consequences. The authors also provide potential crisis scenarios that forecast future challenges in AWS.
- Anderson, K., & Waxman, M. C. (2013). Law and ethics for autonomous weapon systems: Why a ban won’t work and how the laws of war can. The Stanford University Hoover Institution. http://dx.doi.org/10.2139/ssrn.2250126
- This paper argues that an outright ban on autonomous weapons systems (AWS) to address legal or ethical issues is impractical. In looking for an alternative, it observes that the inevitable but incremental evolution of AWS allows for international best practices to evolve over time. Thus, the paper proposes a gradual adaptation of codes of conduct for AWS based on existing ethical and legal principles.
- Arkin, R. (2009).* Governing lethal behavior in autonomous robots. CRC Press.
- This book argues in favor of, and presents a framework for, the implementation of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system, such that the system adheres to the Laws of War and Rules of Engagement.
- Arkin, R. C. (2010). The case for ethical autonomy in unmanned systems. Journal of Military Ethics, 9(4), 332-341.
- This article appeals to ongoing and foreseeable technological advances, and to assessments of human performance in warfare, to argue in favor of the ethical autonomy of lethal autonomous unmanned systems. In addition to their capacity for autonomy, the article argues that these systems will potentially be capable of performing more ethically on the battlefield than human soldiers.
- Awad, E., et al. (2018).* The moral machine experiment. Nature, 563(7729), 59-64.
- This article aims to address concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide these machines. The authors use the Moral Machine, an online experimental platform, to gather data, which they analyze to recommend how machine decision-making should be guided.
- de Ágreda, Á. G. (2020). Ethics of autonomous weapons systems and its applicability to any AI systems. Telecommunications Policy, 44(6), 101953. https://doi.org/10.1016/j.telpol.2020.101953
- The article first compares general-purpose and military ethical codes for AI. The author argues that two essential characteristics should be satisfied by both kinds of codes: (1) the way algorithms work should be understood, and (2) humans should retain enough control over these algorithms. Moreover, the author claims that all AI developments should be carefully analyzed for their potential use in military technology.
- Enemark, C. (2013). Armed drones and the ethics of war: Military virtue in a post-heroic age. Routledge.
- This book assesses the ethical implications of using armed unmanned aerial vehicles in contemporary conflicts by analyzing them in the context of ethical principles that are intended to guard against unjust increases in the incidence and lethality of armed conflict. The book weighs evidence indicating that the use of armed drones is to be welcomed as an ethically superior mode of warfare against the argument that continued and increased use may ultimately do more harm than good.
- Enemark, C. (2019). Drones, risk, and moral injury. Critical Military Studies, 5(2), 150–167. https://doi.org/10.1080/23337486.2017.1384979
- This article frames drone operators as moral agents and assesses the possibility, given recent evidence, that drone violence can cause “moral injury” to the operator. This moral injury is said to occur when a drone killing, deemed permissible by others, betrays the operator’s personal standard of right conduct. The article concludes by arguing that if the risk of moral injury is real, it could serve as an additional ethical basis for restraining drone violence.
- Galliott, J. (2015).* Military robots: Mapping the moral landscape. Ashgate Publishing.
- This book uses the lens of the rise of drone warfare to explore and analyze the moral, political, and social questions that have arisen in the contemporary era of warfare. Some examples of these issues are concerns of who may be legitimately targeted in warfare, the collateral effects of military weaponry, and the methods of determining and dealing with violations of the laws of war.
- Galliott, J. (2016).* Defending Australia in the digital age: Toward full spectrum defence. Defence Studies, 16(2), 157-175.
- This paper argues that Australia’s defense strategy is incomplete or at least inefficient. The author argues this is the consequence of a crippling geographically focused strategic dichotomy, caused by the armed forces historically having been structured to venture afar as a small part of a large coalition force or, alternatively, to combat small regional threats across land, sea, and air.
- Galliott, J. (2017).* The limits of robotic solutions to human challenges in the land domain. Defence Studies, 17(4), 327-345.
- This article explores the limits of robotic solutions to military problems, encompassing technical limitations and redundancy issues that point to the need to introduce a framework compatible with the adoption of robotics while preserving existing levels of human staffing.
- Garcia, D. (2018). Lethal artificial intelligence and change: The future of international peace and security. International Studies Review, 20(2), 334–341.
- This paper argues that the use of artificial intelligence in warfare can destabilize the international system. To cope with such changes, the author argues, states should adopt preventive governance frameworks based upon the precautionary principle of international law. To bolster this suggestion, the author examines twenty-two existing treaties established to control weapons systems that were deemed destabilizing and finds that all of them either prevented further militarization or made weaponization unlawful.
- Horowitz, M. C. (2016). Public opinion and the politics of the killer robots debate. Research & Politics, 3(1). https://doi.org/10.1177/2053168015627183
- This article uses survey data to shed light on American public opinion concerning autonomous weapons systems (AWS). Based on the collected data, the article argues that public support for AWS is highly contextual, in contradiction with existing research that suggests widespread opposition. For instance, the data shows that fear of other countries (or non-state actors) developing AWS increases American public support for their own government’s use of the technology significantly.
- Leben, D. (2018).* Ethics for robots: How to design a moral algorithm. Routledge.
- The author describes and defends a framework for designing and evaluating ethical algorithms that will govern autonomous machines. Furthermore, the book argues that these algorithms should be evaluated by how effectively they solve the problem of cooperation among self-interested organisms and must therefore be tailored to the artificial subjects at hand, rather than created to simulate evolved psychological systems.
- Lewis, D. A., et al. (2016). War-algorithm accountability. Harvard Law School Program on International Law and Armed Conflict. https://pilac.law.harvard.edu/waa
- In this briefing report, the authors introduce a new concept, “war algorithms,” defined as any algorithm expressed in computer code and capable of operating in the context of armed conflict. The authors then argue that, contrary to the more specific concept of autonomous weapons systems (AWS), war algorithms may fit within the existing regulatory system established by international law.
- Lin, P., et al. (Eds.). (2017).* Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford University Press.
- This book presents a wide and updated range of contemporary ethical issues facing the field of robotics, utilizing new use-cases for robots and their challenges to build a global representation of the contemporary questions in the field.
- Lin, P., et al. (2008).* Autonomous military robotics: Risk, ethics, and design. California Polytechnic State University San Luis Obispo.
- This paper presents and explores the issues that need to be considered in responsibly introducing advanced technologies into the battlefield and, eventually, into society. It makes the presumptive case for the use of autonomous military robotics, then considers the issues that come with this decision, including the need to address risk and ethics in the field, near- and far-term ethical and social issues, and recommendations for future work.
- Maas, M. M. (2019). How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons. Contemporary Security Policy, 40(3), 285–311. https://doi.org/10.1080/13523260.2019.1576464
- The author draws on lessons learned from arms control regimes in nuclear weapons to suggest how similar techniques may work for military artificial intelligence (AI). The author uses these parallels to argue that an “AI arms race” is not inevitable and can be managed directly through engagement with domestic political coalitions or indirectly by shaping norms top-down (through international regimes) or bottom-up (through epistemic communities).
- McMahan, J. (2013). Killing by remote control: The ethics of an unmanned military. Oxford University Press.
- This text explores the ethical permissibility of the use of unmanned mediated mechanisms in warfare. It includes discussions of broader issues, such as the just war tradition and the ethics of war, as well as more specific issues surrounding the use of drones, such as the practice known as “targeted killing” by the United States.
- Nilsson, N. J. (2009). The quest for artificial intelligence. Cambridge University Press.
- This book traces the history and development of AI, and explains its contemporary uses and issues surrounding its implementation.
- Omotoyinbo, F. R. (2022). Smart soldiers: Towards a more ethical warfare. AI & Society, 1-7.
- This article argues against completely replacing human soldiers with programmable smart soldiers. It discusses how human emotions can contribute to more ethical warfare and how removing them raises severe ethical concerns. The article instead advocates for a more inclusive integration of artificial intelligence that benefits from its accuracy and robustness in environments that induce toxic human behavior.
- Rosert, E., & Sauer, F. (2021). How (not) to stop the killer robots: A comparative analysis of humanitarian disarmament campaign strategies. Contemporary Security Policy, 42(1), 4-29.
- The article argues that the Campaign to Stop Killer Robots’ strategy of obtaining a legally binding instrument to regulate Lethal Autonomous Weapons Systems within the United Nations Convention on Certain Conventional Weapons framework will likely not be effective. The authors point out that these strategies were modelled on previous humanitarian disarmament successes and are not suited to the specific issues raised by autonomous weapons systems. They suggest that the strategy be modified with respect to institutional choices, substance, and regulatory design.
- Sauer, F., & Schörnig, N. (2012). Killer drones: The ‘silver bullet’ of democratic warfare? Security Dialogue, 43(4), 363-380.
- This article discusses the distinct appeal of drones to democratic nations as cheap, casualty-avoidant, and precise weapons of war. This appeal is explained by the need to retain public support through respect for human life and rule of law. The article cautions that, by making war more acceptable, these apparent benefits of killer drones may render democratic nations more war prone.
- Scholz, J., & Galliott, J. (2018).* Artificial intelligence in weapons: The moral imperative for minimally-just autonomy. US Air Force Journal of Indo-Pacific Affairs, 1(2), 57-67.
- This article argues that for military power to be lawful and morally just, future autonomous artificial intelligence (AI) systems must not commit humanitarian errors. The authors therefore propose a preventative form of minimally-just autonomy using artificial intelligence (MinAI), which would avert attacks on protected symbols and sites and recognize signals of surrender; a schematic sketch follows below.
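The preventative character of MinAI can be pictured as a veto filter that can only block actions, never authorize them. The sketch below is an illustrative assumption about how such a filter might sit in front of any engagement decision; the class names and threshold are hypothetical, not the authors' design.

```python
# Hypothetical veto filter in the spirit of "minimally-just" autonomy: it can
# only block an action, never initiate or authorize one. The labels and the
# confidence threshold are illustrative assumptions.
PROTECTED = {"red_cross_symbol", "protected_site", "surrender_signal"}

def minai_veto(detections, confidence_floor=0.95):
    """Return True (veto) if perception reports any protected class with
    high confidence. `detections` is a list of (label, confidence) pairs."""
    return any(label in PROTECTED and conf >= confidence_floor
               for label, conf in detections)

print(minai_veto([("protected_site", 0.97)]))  # True -> action is blocked
```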
- Sharkey, A. (2019). Autonomous weapons systems, killer robots and human dignity. Ethics and Information Technology, 21(2), 75-87.
- The article examines the relationship between autonomous weapon systems (AWS) and human dignity. The paper analyzes three types of common objections to AWS: (1) arguments based on technology and the ability of AWS to serve international humanitarian law, (2) arguments based on the need for human control, and (3) arguments based on an increased likelihood of initiating war. The author points out that there are many ambiguities in these arguments and that it is essential not to rely exclusively on arguments based on human dignity.
- Sparrow, R. (2009).* Building a better WarBot: Ethical issues in the design of unmanned systems for military applications. Science and Engineering Ethics, 15(2), 169-187.
- This article explores how designers of unmanned military systems must consider ethical, as well as operational, requirements and limits when developing such systems. The author presents two groups of such ethical issues: building safe systems and designing for the Law of Armed Conflict.
- Sparrow, R. (2016). Robots and respect: Assessing the case against autonomous weapon systems. Ethics & International Affairs, 30(1), 93-116.
- This article provides a balanced assessment of the ethical arguments against autonomous weapons systems (AWS). It explains the weakness of these arguments when the use of AWS by human combatants is viewed merely as a means of killing. Still, the article concludes that arguments against AWS based on their disrespect toward humanity can be sufficiently strong, grounded in conventions about what respect requires.
- Sullins, J. P. (2006). When is a robot a moral agent? International Review of Information Ethics, 6(12), 23-30.
- This paper argues that, under specific conditions, robots can be seen as real moral agents. The three conditions are as follows: (1) the robot must be significantly autonomous from any programmers or operators of the machine; (2) the robot’s behavior must exhibit ‘intention’; and (3) the robot must behave in a way that shows an understanding of responsibility toward some other moral agent.
- Umbrello, S., et al. (2020). The future of war: Could lethal autonomous weapons make conflict more ethical? AI & Society, 35(1), 273–282. https://doi.org/10.1007/s00146-019-00879-x
- This paper weighs the arguments for and against the use of Lethal Autonomous Weapons (LAWs) through the lens of achieving more ethical warfare. The authors contend that the relatively low cost, the potential for “moral programming,” and the ability to remove human combatants from the line of fire constitute strong reasons for pursuing LAWs. However, the authors note several caveats: LAWs must have targeting and judgment systems equal to or superior to humans and must embody moral programs that all parties agree upon.
- Umbrello, S., & Wood, N. G. (2021). Autonomous weapons systems and the contextual nature of hors de combat status. Information, 12(5), 216.
- This paper argues that deciding when individuals are legally hors de combat (out of combat) depends on their ability to harm adversaries. It explains with multiple thought experiments that this is highly context-dependent and cannot be hard-coded into autonomous weapons systems (AWS). The paper concludes that military commanders must retain some meaningful human control over AWS behavior.
- Verdiesen, I., et al. (2021). Accountability and control over autonomous weapon systems: A framework for comprehensive human oversight. Minds and Machines, 31(1), 137-163.
- The article defines “meaningful human control” and describes the relationship between accountability, responsibility, and control. The authors show that these concepts are distinct but related. They then propose a “Framework for Comprehensive Human Oversight” based on engineering, socio-technical, and governance perspectives on control, and argue that the proposed framework ensures controllability and accountability for the behavior of autonomous weapons systems.
- Young, K. L., & Carpenter, C. (2018). Does science fiction affect political fact? Yes and no: A survey experiment on “Killer Robots.” International Studies Quarterly, 62(3), 562–576. https://doi.org/10.1093/isq/sqy028
- This paper explores the effect of popular culture on American attitudes toward autonomous weapons systems (AWS). The authors find that consumption of films with frightening depictions of armed artificial intelligence (AI) is associated with greater opposition to autonomous weapons. Furthermore, this “sci-fi literacy” effect is amplified if survey respondents are first “primed” about popular culture, an effect the authors call the “sci-fi geek effect.”
Chapter 37. The Ethics of AI in Biomedical Research, Patient Care, and Public Health (Alessandro Blasimme and Effy Vayena)
https://www.doi.org/10.1093/oxfordhb/9780190067397.013.45
Biomedical Research
- Blasimme, A., & Vayena, E. (2016). “Tailored-to-You”: Public engagement and the political legitimation of precision medicine. Perspectives in Biology and Medicine, 59(2), 172-188.
- This article outlines a detailed history of personalized medicine in its sociotechnical and legislative context in the United States, with a particular focus on the 2015 federal Precision Medicine Initiative. The authors emphasize the interplay between scientific and social factors, especially the importance of a “participatory ethos” and public engagement in building political support for innovative biomedical paradigms.
- Buruk, B., et al. (2020). A critical perspective on guidelines for responsible and trustworthy artificial intelligence. Medicine, Health Care, and Philosophy, 23, 387–399. https://doi.org/10.1007/s11019-020-09948-1
- This paper analyzes three sets of ethical guidelines for artificial intelligence and deep learning: the Montréal Declaration for Responsible Development of Artificial Intelligence, the Ethics Guidelines for Trustworthy AI, and the Asilomar Artificial Intelligence Principles. The authors then assess whether these guidelines are sufficient given the ethical intricacies stemming from the introduction of deep learning in medicine, and argue that the guidelines fail to offer suggestions for the ethical dilemmas that occur in everyday clinical practice.
- Ferryman, K., & Pitcan, M. (2018). Fairness in precision medicine. Data & Society. https://kennisopenbaarbestuur.nl/media/257243/datasociety_fairness_in_precision_medicine_feb2018.pdf
- This is a qualitative empirical study of the views of stakeholders engaged in precision medicine regarding its risks of bias and its promise for the future. The study concludes that these stakeholders are hopeful about the promise of precision medicine yet concerned about its potential for bias.
- Geneviève, L. D., et al. (2020). Structural racism in precision medicine: Leaving no one behind. BMC Medical Ethics, 21(1), 1-13.
- This paper examines precision medicine through the lenses of structural racism and equity. The authors describe how structural racism can shape precision medicine through the initial data-generation processes, the data-analytical processes, and the final implementation of models. They warn against the possibility that machine learning technologies will exacerbate these structural problems and offer a range of potential solutions at each step in the precision medicine process.
- Hollister, B., & Bonham, V. L. (2018). Should electronic health record-derived social and behavioral data be used in precision medicine research? AMA Journal of Ethics, 20(9), 873-880.
- This article explores the ethical and practical issues surrounding the inclusion of social and behavioral information from electronic health records in precision medicine research. The authors argue that this data is often inconsistently collected and of low quality and that the sensitive nature of this data presents a significant risk of patient harm if it is misused.
- Ienca, M., et al. (2018).* Considerations for ethics review of big data health research: A scoping review. PloS one, 13(10). https://doi.org/10.1371/journal.pone.0204937
- The methodological novelty and computational complexity of big data health research raise new challenges for ethics review. This paper reviews the literature to identify and map the major challenges of health-related big data for Ethics Review Committees. The findings suggest that while big data trends in biomedicine hold the potential for advancing clinical research, improving prevention, and optimizing healthcare delivery, several epistemic, scientific, and normative challenges need careful consideration.
- Landry, L. G., et al. (2018).* Lack of diversity in genomic databases is a barrier to translating precision medicine research into practice. Health Affairs, 37(5), 780-785.
- Precision medicine often uses molecular biomarkers to assess patients’ prognosis and therapeutic response more precisely. This paper examines which populations were included in studies using two public genomic databases and finds significantly fewer studies of African, Latin American, and Asian ancestral populations compared to European populations. While the number of genomic research studies that include non-European populations is improving, the overall numbers are still low, representing a potential for inequities in precision medicine applications.
- Park, S. H., et al. (2019). Ethical challenges regarding artificial intelligence in medicine from the perspective of scientific editing and peer review. Science Editing. https://doi.org/10.6087/kcse.164
- The authors highlight several aspects of research studies on artificial intelligence (AI) in medicine that require additional transparency and explain why additional transparency is needed. Transparency regarding training data, test data and results, interpretation of study results, and the sharing of algorithms and data are major areas for guaranteeing ethical standards in AI research.
- Vayena, E., & Blasimme, A. (2017). Biomedical big data: New models of control over access, use and governance. Journal of Bioethical Inquiry, 14(4), 501-513.
- This article challenges the notion that the collection of biomedical big data necessitates a loss of individual control. Rather, the authors propose three approaches to empowering the individual: (1) data portability rights, (2) new mechanisms of informed consent, and (3) new schemes of participatory governance.
- Vayena, E., & Blasimme, A. (2018).* Health research with big data: Time for systemic oversight. The Journal of Law, Medicine & Ethics, 46(1), 119-129.
- The authors propose a new paradigm for the ethical oversight of biomedical research in alignment with the ubiquity of big data as opposed to suggesting updates and fixes for existing models. This paradigm, systemic oversight, is based on six core features: (1) adaptivity, (2) flexibility, (3) monitoring, (4) responsiveness, (5) reflexivity, and (6) inclusiveness.
- Vollmer, S., et al. (2020). Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness. BMJ, 368. https://doi.org/10.1136/bmj.l6927
- Structured around a series of twenty “critical questions” to be asked during the development process, this article explores issues of transparency, replicability, ethics, and effectiveness in the implementation of AI in clinical medicine. The authors emphasize the complex sociotechnical context into which these algorithms are implemented and discuss necessary requirements for AI to be rigorously considered effective in clinical practice.
- Wiens, J., et al. (2019). Do no harm: A roadmap for responsible machine learning for health care. Nature Medicine, 25(9), 1337–1340. https://doi.org/10.1038/s41591-019-0548-6
- This article engages with the issue of responsible machine learning in healthcare from the perspective of interdisciplinary model development and deployment teams. On the development side, the authors outline concerns related to selecting the right problems, developing clinically useful solutions, considering the proximal and distal ethical implications of such solutions, and evaluating the resulting models in rigorous and consistent ways. On the implementation side, they outline issues related to deployment, marketing, and results-reporting for these models.
Clinical Medicine
- Arnold, M. H. (2021). Teasing out artificial intelligence in medicine: An ethical critique of artificial intelligence and machine learning in medicine. Journal of Bioethical Inquiry, 18, 121–139. https://doi.org/10.1007/s11673-020-10080-1
- This paper explores the ethical underpinnings of the introduction of artificial intelligence in medicine. The author argues that the use of artificial intelligence in medicine will necessarily impact the role of physicians. Because of this, health practitioners should start engaging with the tensions between artificial intelligence and medical ethical principles (beneficence, autonomy, and justice) in order to understand both the limits and the promises of artificial intelligence for their practice.
- Bjerring, J. C., & Busch, J. (2020). Artificial intelligence and patient-centered decision-making. Philosophy & Technology, 34, 349-371. https://dx.doi.org/10.1007/s13347-019-00391-6
- This paper argues that the opacity of some artificial intelligence algorithms is not conducive to informed consent in medical decision-making. In particular, the authors claim that this type of “black-box medicine” cannot support informed decision-making based on shared information, shared deliberation, and shared mind between practitioner and patient.
- Blasimme, A., & Vayena, E. (2016). Becoming partners, retaining autonomy: Ethical considerations on the development of precision medicine. BMC Medical Ethics, 17(1), 67.
- This article explores the challenge of engaging patients and their perspectives in the precision medicine clinical research process. The authors explore the normative construction of research participation and partnership, as well as tensions between individual and collective interests. They advocate for the concept of “respect for autonomous agents” (as opposed to autonomous action or choice) as a potential mechanism for resolving these ethical tensions.
- Blasimme, A., et al. (2019). Big data, precision medicine and private insurance: A delicate balancing act. Big Data & Society, 6(1). https://doi.org/10.1177/2053951719830111
- Using national precision medicine initiatives as a case study, this article explores the tension between private insurers leveraging repositories of genetic and phenotypic data for economic gain and the utility of these databases as a public scientific resource. Although the authors acknowledge that information asymmetry between insurance companies and their policyholders still creates risks of reduced research participation, adverse selection, and discrimination, they argue that a governance model underpinned by trustworthiness, openness, and evidence can balance these competing interests.
- Canadian Association of Radiologists (CAR) Artificial Intelligence Working Group. (2019). Canadian Association of Radiologists white paper on ethical and legal issues related to artificial intelligence in radiology. Canadian Association of Radiologists’ Journal, 70(2), 107-118.
- Radiology is positioned to lead in the development and implementation of AI algorithms. This white paper provides a framework for studying the legal and ethical issues related to AI in medical imaging, including patient data (privacy, confidentiality, ownership, and sharing), algorithms (levels of autonomy, liability, and jurisprudence), practice (best practices and current legal framework), and opportunities in AI from the perspective of a universal health care system.
- Canales, C., et al. (2020). Science without conscience is but the ruin of the soul: The ethics of Big Data and artificial intelligence in perioperative medicine. Anesthesia & Analgesia, 130(5), 1234-1243. https://doi.org/10.1213/ANE.0000000000004728
- This article discusses the advent of AI-driven technologies within the fields of anesthesiology and perioperative care. The authors weigh the potential benefits these technologies might provide for patient safety and outcomes against a catalogue of corresponding ethical, moral, and technical implications. These implications range from pragmatic concerns of data stewardship to a more philosophical advocacy for preserving humanism as a core principle of practice within these fields.
- Challen, R., et al. (2019).* Artificial intelligence, bias and clinical safety. BMJ Quality & Safety, 28(3), 231-237.
- This paper outlines a set of short-term and medium-term clinical safety issues raised by machine learning enabled decision-making software. This framework is supported by a set of quality control questions designed to help clinical safety professionals and those developing ML systems identify areas of concern. The authors encourage rigorous testing of new ML systems through randomized controlled trials and through comparison with existing practices.
- Char, D. S., et al. (2018). Implementing machine learning in health care—Addressing ethical challenges. The New England Journal of Medicine, 378(11), 981-983.
- This article discusses ethical challenges in the clinical implementation of machine learning systems. In addition to more “straightforward” ethical challenges such as bias and discrimination, the authors discuss “less obvious” risks, such as algorithms being incentivized toward high-profit care, providing excessive legitimacy to medically uncertain decisions, or undermining the clinical experience of physicians. They outline a call for reshaping both medical education and codes of medical ethics in light of these concerns.
- Chen, I. Y., et al. (2020). Treating health disparities with artificial intelligence. Nature Medicine, 26(1), 16-17.
- This article argues that while substantial concerns exist about algorithms amplifying bias in medicine, algorithms may also play an important role in identifying and correcting disparities. The authors advocate for an understanding of the ethics of AI in healthcare that goes beyond the question of algorithmic fairness and toward better consideration of the systemic and socioeconomic context of health disparity.
- Chin-Yee, B., & Upshur, R. (2018). Clinical judgment in the era of big data and predictive analytics. Journal of Evaluation in Clinical Practice, 24(3), 638-645.
- The authors review different approaches to clinical judgement articulated in the research literature, contrasting data-driven mathematical approaches with virtue-based approaches to clinical reasoning, and attend to the implications of different clinical epistemologies for big data and machine learning in clinical medicine. They suggest that a major weakness of evidence-based epistemologies is that they conflate data and evidence and prioritize specific forms of evidence rather than reflecting on the question at hand. The authors advocate for a pluralistic and integrative approach, suggesting that virtue-based clinical reasoning will remain indispensable in the era of health-related big data and predictive analytics, but is insufficient on its own.
- Chin-Yee, B., & Upshur, R. (2019). Three problems with big data and artificial intelligence in medicine. Perspectives in Biology and Medicine, 62(2), 237-256.
- This paper engages with three important philosophical challenges facing “big data” and artificial intelligence in medicine. The authors outline an epistemological-ontological challenge related to the theory-ladenness of big data and measurement, an epistemological-logical challenge related to the inherent limits of algorithms, and a phenomenological challenge related to the irreducibility of human experience to quantitative data. They argue that it is important for the artificial intelligence in medicine movement to engage with its philosophical foundations.
- de Miguel Beriain, I. (2020). Should we have a right to refuse diagnostics and treatment planning by artificial intelligence? Medicine, Health Care and Philosophy, 23, 247–252. https://dx.doi.org/10.1007/s11019-020-09939-2
- This paper is a reply to Ploug & Holm (2020). The author agrees that patients should have the right to refuse diagnostics and treatments using artificial intelligence, but for different reasons than those offered by Ploug & Holm (2020). The paper presents the following arguments: first, the right to refuse such treatments and diagnostics is justified by virtue of social pluralism and individual autonomy. Second, this right should be limited under three circumstances: (1) where a physician would bring harm to their patient by providing the right to refuse, (2) where it is too expensive to give the right to refuse, and (3) where the application of this right has harmful consequences for other patients.
- Di Nucci, E. (2019). Should we be afraid of medical AI? Journal of Medical Ethics, 45(8), 556-558.
- This paper argues against the idea that AI represents a threat to patient autonomy. The author states that such arguments often conflate machine learning with AI, miss machine learning’s potential for personalized medicine through big data, and fail to distinguish between evidence-based advice and decision-making within healthcare. Which tasks machine learning performs within healthcare is a crucial question, but care must be taken to distinguish between the different systems and the different tasks delegated to them.
- Evans, E. L., & Whicher, D. (2018). What should oversight of clinical decision support systems look like? AMA Journal of Ethics, 20(9), 857-863.
- This article engages with the use of clinical decision support systems in medicine, arguing that such systems should be subject to ethical and regulatory oversight above and beyond that of normal clinical practice. The authors outline a framework for the development and use of these systems with an emphasis on articulating proper conditions for use, including processes for monitoring data quality and algorithm performance, and protecting patient data.
- Ferretti, A., et al. (2018). Machine learning in medicine: Opening the new data protection black box. European Data Protection Law Review, 4(3), 320-332. https://doi.org/10.21552/edpl/2018/3/10
- Certain approaches to artificial intelligence, notably deep learning, have drawn criticism due to their relative inscrutability to human understanding (the “black box” metaphor). This article categorizes the black box opacity of machine learning systems in medicine in three forms: (1) lack of disclosure on whether automated decision-making is taking place, (2) epistemic opacity on how an AI system arrives at a specific outcome, and (3) explanatory opacity on why an AI system provides a specific outcome. Moreover, the authors take a solution-driven approach by discussing how each type of opacity identified can be addressed through the General Data Protection Regulation.
- Ficuciello, F., et al. (2019). Autonomy in surgical robots and its meaningful human control. Paladyn, Journal of Behavioral Robotics, 10(1), 30-43.
- Focusing on the lens of “Meaningful Human Control” (a term extended from autonomous weapons literature), this paper engages with ethical issues arising from increasing levels of autonomy in surgical robots. The authors review the potential for robotic assistance in minimally invasive surgery and microsurgery and discuss a theoretical framework for levels of surgical robot autonomy based around several levels of “Meaningful Human Control”, each with different burdens of human responsibility and oversight.
- Fiske, A., et al. (2019). Your robot therapist will see you now: Ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. Journal of Medical Internet Research, 21(5). https://doi.org/10.2196/13216
- This paper assesses the ethical and social implications of translating AI applications into mental health care across the fields of psychiatry, psychology, and psychotherapy. Through a literature search, the authors find that AI is a promising approach across the field of mental health; however, further research is needed to address the broader ethical and societal concerns of these technologies and to establish best research and medical practices in innovative mental health care.
- Gerke, S., et al. (2020). Ethical and legal aspects of ambient intelligence in hospitals. JAMA, 323(7), 601-602.
- Ambient intelligence in hospitals is an emerging form of technology characterized by a constant awareness of activity in designated physical spaces (e.g. video capture to monitor for hand hygiene, patient movements, etc.) and of the use of that information to assist healthcare workers in delivering quality care. This commentary discusses potential issues these practices raise around patient privacy and reidentification risk, consent, and liability.
- Grote, T., & Berens, P. (2020). On the ethics of algorithmic decision-making in healthcare. Journal of Medical Ethics, 46, 205–211. http://dx.doi.org/10.1136/medethics-2019-105586
- This article argues that the use of machine learning algorithms in healthcare settings comes with trade-offs at the epistemic and normative level. Drawing on social epistemology and the literature on moral responsibility, the authors argue that the opacity of algorithms notably challenges the epistemic authority of health practitioners and could lead to vices such as paternalism and gullibility.
- He, J., et al. (2019). The practical implementation of artificial intelligence technologies in medicine. Nature Medicine, 25(1), 30-36.
- This article explores practical issues that exist regarding the implementation of AI in clinical workflows, including data sharing difficulties, privacy issues, transparency problems, and concerns for patient safety. The authors argue that these practical issues are global in scope, and engage in a comprehensive comparative discussion of the medical AI regulatory environments in the United States, Europe, and China.
- Ho, C., et al. (2019). Governance of automated image analysis and artificial intelligence analytics in healthcare. Clinical Radiology, 74(5), 329-337.
- This paper discusses the nature of AI governance in biomedicine along with its limitations. The authors argue that radiologists must assume a more active role in propelling medicine into the digital age, including inquiring into the clinical and social value of AI, alleviating deficiencies in their technical knowledge to facilitate ethical evaluation, supporting the recognition and removal of biases, engaging the “black box” obstacle, and brokering a new social contract on informational use and security.
- Jotterand, F., & Bosco, C. (2022). Artificial intelligence in medicine: A sword of Damocles? Journal of Medical Systems, 46(9). https://doi.org/10.1007/s10916-021-01796-7
- This paper begins by posing the fundamental quandary of whether artificial intelligence will de-humanize or re-humanize the field of clinical medicine. The authors advocate for an ethical framework that preserves the humanistic dimensions of medical practice throughout the adoption of these novel technologies. This advocacy is centered around three major areas of concern: (1) the anthropological implications of AI in the clinical context, (2) the frameworks used to address ethical issues within medicine, and (3) the impact of AI on clinical practice and decision making.
- Karches, K. E. (2018). Against the iDoctor: Why artificial intelligence should not replace physician judgment. Theoretical Medicine and Bioethics, 39, 91–110. https://dx.doi.org/10.1007/s11017-018-9442-3
- This paper argues that artificial intelligence is not suited for clinical practice. Drawing on the works of Martin Heidegger and Hubert Dreyfus, the author argues that medical algorithms cannot be adapted to individual patients’ needs and thus cannot produce effective clinical care.
- Kiener, M. (2021). Artificial intelligence in medicine and the disclosure of risks. AI & Society, 36, 705–713. https://dx.doi.org/10.1007/s00146-020-01085-w
- This paper argues that the risks of employing opaque algorithms in medicine should be disclosed to patients by their health practitioners. The most notable risks are those created by cyberattacks, systematic bias within the data used to build the algorithm, and a potential incongruence between the assumptions made by an algorithm and an individual patient’s background situation. The author argues that under certain conditions, these risks must be disclosed in order for the physician to acquire informed consent and meet their duty to warn patients about potentially harmful consequences.
- Lamanna, C., & Byrne, L. (2018). Should artificial intelligence augment medical decision-making? The case for an autonomy algorithm. AMA Journal of Ethics, 20(9), 902-910.
- The authors put forward the concept of an “autonomy algorithm”, which might be used to integrate data from social media and electronic health records in order to estimate the likelihood that an incapable patient would have consented to a particular course of treatment. They explore ethical and practical issues in the construction and implementation of such an algorithm, and ultimately argue that it would likely be more reliable and less liable to bias than existing substitute decision-making methods.
- London, A. J. (2019). Artificial intelligence and black-box medical decisions: Accuracy versus explainability. Hastings Center Report, 49(1), 15-21. http://dx.doi.org/10.1002/hast.973
- The article questions the position that opaque or unexplainable machine learning systems should be avoided in medicine. The author argues that imposing an explainability requirement on algorithms is inappropriate in the medical context, where empirical findings are often relied upon without necessarily accepting the theories that aim to explain the underlying phenomena. For instance, we can evaluate the efficacy of a medical intervention without being able to understand or explain the mechanisms by which it works. The focus in medicine should therefore be on the accuracy and reliability of machine learning systems rather than their explainability.
- Luxton, D. D. (2014). Recommendations for the ethical use and design of artificial intelligent care providers. Artificial Intelligence in Medicine, 62(1), 1-10.
- This paper identifies and reviews ethical issues associated with artificial intelligent care providers in mental health care and other helping professions. The author finds that existing ethics codes and practice guidelines do not presently consider the current or the future use of interactive artificial intelligent agents to assist and to potentially replace mental health care professionals. The author makes specific recommendations for the development of ethical codes, guidelines, and the design of these systems.
- Martinez-Martin, N., et al. (2018). Is it ethical to use prognostic estimates from machine learning to treat psychosis? AMA Journal of Ethics, 20(9), 804-811.
- Building on the case study of a recent machine learning model for predicting prognosis for patients with psychosis, this article engages with the ethics of AI in psychiatry specifically, as well as the ethics of implementing innovation in clinical medicine more broadly. In particular, the authors examine the burdens that are placed upon physicians in understanding and engaging with novel technologies, and the challenges with communicating risks sufficiently to enable informed consent.
- McDougall, R. J. (2019). Computer knows best? The need for value-flexibility in medical AI. Journal of Medical Ethics, 45(3), 156–160. https://doi.org/10.1136/medethics-2018-105118
- Focusing on the case study of IBM’s “Watson for Oncology”, this paper engages with issues related to shared decision-making in medical AI. The author argues that the use of fixed and covert value judgments underlying AI systems risks excluding patient perspectives and increasing medical paternalism. Conversely, the author argues that AI systems can be “value-flexible” if developed to explicitly incorporate patient values and perspectives, and in doing so may remedy existing challenges in shared decision-making.
- Morley, J., et al. (2020). The ethics of AI in health care: A mapping review. Social Science & Medicine, 113172.
- This article presents the results of a review of the literature aimed at identifying categories of ethical risks presented by AI for health care, and issues that policymakers, regulators, and developers should be mindful of. The authors suggest that ethical risks can be (a) epistemic (misguided, inconclusive, or inscrutable evidence), (b) normative (unfair outcomes and transformative effects), or (c) related to traceability. They suggest that each of these risks applies to different levels of ‘abstraction’: individual, interpersonal, group, institutional, and societal or sectoral. They argue that urgent attention to these issues is required so that loss of public trust does not result in a new ‘AI winter’.
- Nebeker, C., et al. (2019). Building the case for actionable ethics in digital health research supported by artificial intelligence. BMC Medicine, 17(1), 137. https://doi.org/10.1186/s12916-019-1377-7
- Placing a particular focus on direct-to-consumer digital therapeutics, this article examines the current ethical and regulatory environment for digital health. The authors describe the current situation as a “wild west” with little regulation and identify gaps and opportunities in terms of building interdisciplinary collaboration, improving digital literacy, and developing ethical standards. They conclude by summarizing several initiatives already underway to address these gaps.
- Neri, E., et al. (2020). Artificial intelligence: Who is responsible for the diagnosis? La radiologia medica, 125, 517-521. https://doi.org/10.1007/s11547-020-01135-9
- As articulated by the authors themselves, this paper strives to answer a single question: “Who or what is responsible for the benefits and harms of using artificial intelligence in radiology?” The authors address this question of legal and professional responsibility by assessing seven basic risk statements, focused on a radiologist’s use of and training with these often autonomous tools. They advocate for legislation that limits this autonomy, defines the responsibilities of individual actors, and permits practitioner intervention.
- Nundy, S., et al. (2019). Promoting trust between patients and physicians in the era of artificial intelligence. JAMA, 322(6), 497-498.
- This paper discusses how AI will affect trust between physicians and patients. The three components of trust are defined as competency, motive, and transparency, and the authors explore whether AI enabled health applications may impact each of these domains. They conclude that by reaffirming the foundational importance of trust to health outcomes and engaging in deliberate system transformation, the benefits of AI can be realized while strengthening patient-physician relationships.
- Obermeyer, Z., et al. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.
- This paper engages in a quantitative analysis and discussion of racial bias in a commercial algorithm for stratifying the risk of patients with chronic disease. The authors quantitatively uncover that the algorithm unfairly classifies black patients as requiring less care than white patients of equivalent acuity, and explore further to determine that this disparity arises from using cost of care as a surrogate for health needs, and failing to consider structural disparity. They offer discussion of measures that can be taken to avoid similar problems.
- Ostherr, K. (2020). Artificial intelligence and medical humanities. Journal of Medical Humanities. https://dx.doi.org/10.1007/s10912-020-09636-4
- This paper gives an overview of the different issues that have been voiced regarding artificial intelligence in medicine in relation to the medical humanities. The author focuses on a dozen issues, including the definitions of “medical” and “health”, the social determinants of health, narrative medicine, the place of technology within medical care, data privacy and trust, flawed datasets and bias, racism, and the rhetoric of humanism.
- O’Sullivan, S., et al. (2019). Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. The International Journal of Medical Robotics and Computer Assisted Surgery, 15(1). https://doi.org/10.1002/rcs.1968
- This paper discusses autonomous robotic surgery with a particular focus on ethics, regulation, and legal aspects (such as civil law, international law, tort law, liability, medical malpractice, privacy, and product/device legislation). The authors explore responsibility for AI and autonomous surgical robots using the categories of accountability, liability, and culpability, finding culpability to be the category with the least legal clarity.
- Ploug, T., & Holm, S. (2020). The right to refuse diagnostics and treatment planning by artificial intelligence. Medicine, Health Care, and Philosophy, 23, 107–114. https://dx.doi.org/10.1007/s11019-019-09912-8
- This paper argues that patients should have the right to refuse artificial intelligence medical treatments and diagnostics. The authors present three arguments to defend this thesis: (1) physicians should respect patients’ personal interests, (2) the opacity of algorithms and their potential for bias justify an option to opt out, (3) patients may have legitimate concerns about the social impact of using artificial intelligence in medicine.
- Powles, J., & Hodson, H. (2017). Google DeepMind and healthcare in an age of algorithms. Health and Technology, 7(4), 351-367.
- This article examines the controversial partnership between Alphabet’s DeepMind and the Royal Free London NHS Foundation Trust, examining how the architecture of the deal led to the transfer of identifiable patient records without explicit consent for the purpose of developing a clinical alert app for kidney injury. The authors suggest that existing institutional and regulatory responses are insufficient to properly attend to the challenges presented by data politics in the context of the rise of algorithmic health care, and outline lessons from the case that engage with data protection and medical information governance, transparency, data value, and market power.
- Price, W. (2015). Black-box medicine. Harvard Journal of Law & Technology, 28(2), 419-468.
- Written from a primarily legal and regulatory perspective, this article engages with the issue of “black box” technologies in precision medicine that are unable to provide a satisfactory explanation of the decisions that are outputted. The author discusses contemporary “Big Data” technology in medicine from practical and theoretical perspectives. The author outlines several hurdles to development of this technology and a range of policy challenges including issues of incentives, privacy, regulation, and commercialization.
- Price, W. N., et al. (2019). Potential liability for physicians using artificial intelligence. JAMA, 322(18), 1765-1766.
- As AI applications enter clinical practice, physicians must grapple with issues of liability when determining how and when to follow (or not follow) the recommendations of these applications. In this article, legal scholars draw upon principles of tort law to discuss when a physician could be held liable for malpractice. The paper’s core argument, that one must analyze whether an AI recommendation is accurate and follows the standard of care, is synthesized by the authors in a tabular format.
- Reddy, S., et al. (2020). A governance model for the application of AI in health care. Journal of the American Medical Informatics Association, 27(3), 491-497.
- Concern has been expressed about the ethical and regulatory aspects of the application of AI in healthcare. While there has been extensive discussion about the ethics of AI, there has been little dialogue as to how to practically address these concerns. This article proposes a governance model to address the ethical and regulatory issues that arise out of the application of AI in healthcare.
- Ryan, M. (2020). In AI we trust: Ethics, artificial intelligence, and reliability. Science and Engineering Ethics, 26, 2749-2767. https://doi.org/10.1007/s11948-020-00228-y
- This paper proposes that one of the primary difficulties in the assessment of artificial intelligence is the tendency to anthropomorphize it. The author contests this tendency, along with the position of the European Commission’s High-Level Expert Group on AI (HLEG) that a human-focused conception of trust should form the basis for human-machine relationships. Instead, the author argues that the absence of affective states and personal responsibility within AI precludes the cultivation of genuine trust, and, further, that narratives of trust erroneously divert fiduciary responsibility from developer to system.
- Schiff, D., & Borenstein, J. (2019). How should clinicians communicate with patients about the roles of artificially intelligent team members? AMA Journal of Ethics, 21(2), 138-145.
- This article uses a hypothetical patient scenario to illustrate the difficulties faced when articulating the use of AI in patient care. The authors focus on: (1) informed consent, (2) patient perceptions of AI, and (3) liability when responsibility is distributed among “many hands”. For readers new to the area of medical decision-making, the case-based approach the authors have taken will be an engaging introduction to the most common pedagogy of medical education.
- Smallman, M. (2019).* Policies designed for drugs won’t work for AI. Nature, 567(7746), 7. https://doi.org/10.1038/d41586-019-00737-2
- This paper comments on the 2019 code of conduct for artificial-intelligence systems in health care issued by the UK government. The principles, laid out by the Department of Health and Social Care, aim to protect patient data and ensure safe data-driven technologies. The author argues, however, that the code fails to appreciate the potential of these technologies to introduce and worsen inequities, and stresses the importance of developing a framework that considers and anticipates the social consequences of AI.
- Tene, O., & Polonetsky, J. (2011).* Privacy in the age of Big Data: A time for big decisions. Stanford Law Review Online, 64, 63-69.
- Big Data creates enormous value for the global economy, driving innovation, productivity, efficiency, and growth. This paper discusses privacy concerns related to big data applications, and suggests that in order to balance beneficial uses of data and the protection of individual privacy, policymakers must address some of the most fundamental concepts of privacy law, including the definition of “personally identifiable information,” the role of consent, and the principles of purpose limitation and data minimization.
- Topol, E. J. (2019).* High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), 44-56.
- This review article provides an overview of the impact of AI in medicine at the levels of clinicians, health systems, and patients. The author also reviews the current limitations, including bias, privacy and security, and lack of transparency, along with the future directions of these applications. The results reveal that over time, marked improvements in accuracy, productivity, and workflow will likely be actualized, but the potential impact on the patient–doctor relationship remains unknown.
- Vayena, E., et al. (2018).* Machine learning in medicine: Addressing ethical challenges. PLoS Medicine, 15(11). https://doi.org/10.1371/journal.pmed.1002689
- The authors outline a four-stage approach to promoting patient trust and provider adoption: (1) alignment with data protection requirements, (2) minimizing the effects of bias, (3) effective regulation, and (4) achieving transparency. Their approach is grounded by referencing the disparate views held on artificial intelligence in healthcare by the general adult population, medical students, and healthcare decision-makers ascertained through recently conducted surveys.
- Vellido, A. (2019). Societal issues concerning the application of artificial intelligence in medicine. Kidney Diseases, 5(1), 11-17. https://doi.org/10.1159/000492428
- This paper reflects on a number of specific issues affecting the use of AI and ML in medicine, such as fairness, privacy and anonymity, and explainability and interpretability, as well as broader societal issues, such as ethics and legislation. The author additionally argues that AI models must be designed from a human-centered perspective, incorporating human-relevant requirements and constraints.
- Verghese, A., et al. (2018). What this computer needs is a physician: Humanism and artificial intelligence. JAMA, 319(1), 19-20.
- This commentary highlights that while AI in medicine will lead to improved accuracy and efficiency, there is concern that the introduction of new tools may adversely impact physicians and lead to burnout, much as electronic medical records have. The authors state that we must aim for partnerships in which machines predict and perform tasks such as documentation, while physicians explain to patients and decide on action, bringing in the societal, clinical, and personal context. AI can enable physicians to spend more time caring for patients, thereby improving the physician’s quality of work and the patient-physician relationship.
- Wachter, R. M., & Cassel, C. K. (2020). Sharing health care data with digital giants: Overcoming obstacles and reaping benefits while protecting patients. JAMA, 323(6), 507-508.
- In response to the steady stream of news updates around the entry and involvement of the major technology companies (e.g. Google, Apple, Amazon) into healthcare, this commentary proposes ideals for a collaborative path forward. The authors emphasize transparency (especially around financial disclosures and conflicts of interest), direct consultation with patients/patient advocacy groups, and data security.
- Wachter, S., et al. (2017).* Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76-99.
- The ‘right to explanation’ in the EU’s General Data Protection Regulation (GDPR) is seen as a mechanism to enhance the accountability and transparency of AI enabled decision-making. However, the authors show that ambiguity and imprecise language in these regulations do not create well-defined rights and safeguards against automated decision-making. The authors propose a number of legislative and policy steps to improve the transparency and accountability of automated decision-making.
- van Wynsberghe, A. (2013). Designing robots for care: Care centered value-sensitive design. Science and Engineering Ethics, 19(2), 407–433. https://doi.org/10.1007/s11948-011-9343-6
- This article discusses a value-sensitive design approach as applied to the creation of care robots created to fill a role analogous to that of a human nurse. After outlining foundational theoretical understandings of values, care ethics, and care practices, the author synthesizes a context-specific framework for considering these issues in robot design. The author grounds this framework in the case study of already-implemented autonomous robots for lifting patients in the care home environment.
- Yu, K. H., et al. (2018).* Artificial intelligence in healthcare. Nature Biomedical Engineering, 2(10), 719-731.
- With recent progress in digitized data acquisition, machine learning, and computing infrastructure, AI applications are expanding into areas that were previously thought to be only the domain of human experts. This review article outlines recent breakthroughs in AI technologies and their biomedical applications, identifies the challenges for further progress in medical AI, and summarizes the economic, legal, and social implications of AI in healthcare.
Public Health and Global Health
- Davies, S. E. (2019). Artificial intelligence in global health. Ethics & International Affairs, 33(2), 181-192.
- Focusing largely on the topic of infectious disease, this paper explores the potential and limitations of artificial intelligence in the context of global health. The author contends that while AI may be effective in guiding responses to outbreak events, it carries substantial ethical risks: exacerbating healthcare quality disparities, diverting funding from otherwise-necessary structural improvements, and enabling human rights abuses under the guise of containment.
- Hadley, T. D., et al. (2020). Artificial intelligence in global health—A framework and strategy for adoption and sustainability. International Journal of Maternal and Child Health and AIDS, 9(1), 121.
- Applications of artificial intelligence (AI) in medicine are used by global health initiatives (GHIs) to detect and mitigate public health inequities. Technological improvements, such as cloud computing and the widespread availability of smartphones, have paved the way for the use of medical AI applications in resource-poor areas. Promising AI tools, such as programs that can predict burn healing time from smartphone photos or technology that optimizes vaccine delivery, have enabled limited resources to have a maximal impact. This article proposes a framework guiding the development of sustainable strategies for AI-driven GHIs and outlines areas for future research.
- Kostkova, P. (2018). Disease surveillance data sharing for public health: The next ethical frontiers. Life Sciences, Society and Policy, 14(1). https://doi.org/10.1186/s40504-018-0078-x
- This article identifies three core ethical challenges with the use of digital data in public health: (1) data sharing across risk assessment tools, (2) the use of population-level data without compromising privacy, and (3) regulating how technology companies manipulate user data. The article places special emphasis on legislation and regulatory frameworks from the European Union.
- Luxton, D. D. (2020). Ethical implications of conversational agents in global public health. Bulletin of the World Health Organization, 98(4), 285-287.
- Conversational agents, colloquially known as “chatbots”, could help address disparities in the access to mental health services or general health services in times of emergency (e.g. a natural disaster, pandemic, etc.). This article outlines core ethical issues of conversational agents: risk of bias, risk of harm, privacy, and inequitable access. The author concludes by alluding to the World Health Organization’s potential role in this space through the creation of a “cooperative international working group” to make recommendations on the design and deployment of conversational agents and other artificially intelligent tools.
- Mittelstadt, B., et al. (2018). Is there a duty to participate in digital epidemiology? Life Sciences, Society and Policy, 14, 9. https://doi.org/10.1186/s40504-018-0074-1
- This article explores the duty to participate in digital epidemiology, acknowledging that the risks to participants differ from those of traditional biomedical research. The authors outline eight justificatory conditions for participation in digital epidemiology that should be reflected upon “on a case-by-case basis with due consideration of local interests and risks”. Notably, the authors demonstrate how these justificatory conditions can be applied in practice through three case studies involving infectious disease surveillance, HIV screening, and detecting notifiable diseases in livestock.
- Murphy, K., et al. (2021). Artificial intelligence for good health: A scoping review of the ethics literature. BMC Medical Ethics, 22. https://doi.org/10.1186/s12910-021-00577-8
- This paper is an empirical review of the literature on the ethics of artificial intelligence in medicine. Most of the 103 papers included in the review focused on the ethics of artificial intelligence in healthcare, including robots, diagnostics, and precision medicine. The review points to a gap in the literature around the ethics of artificial intelligence in public health, as well as the ethics of global health. Common ethical concerns addressed by the literature were privacy, trust, accountability, responsibility, and bias.
- Panch, T., et al. (2019). Artificial intelligence: Opportunities and risks for public health. The Lancet Digital Health, 1(1), e13-e14.
- This article weighs the possible risks and benefits of artificial intelligence for public health, attending to the social determinants that may influence public health outcomes. Artificial intelligence has the potential to improve the efficacy of health services, thereby protecting and promoting the health of populations. However, risks that could widen existing disparities, such as inequity, unemployment, and cost constraints, need to be accounted for when using artificial intelligence for the advancement of public health outcomes.
- Paul, A. K., & Schaefer, M. (2020). Safeguards for the use of artificial intelligence and machine learning in global health. Bulletin of the World Health Organization, 98(4), 282-284.
- This article outlines challenges that low- and middle-income countries (LMICs) must overcome to develop and deploy artificial intelligence and machine learning innovations. The authors emphasize that investments in these innovations by LMICs must be grounded in the realities of their health systems to enable success. The challenges outlined in this piece include: (1) improving the quality and use of data collected, (2) ensuring representation in these processes by marginalized groups, (3) establishing safeguards against bias, and (4) only investing in areas where health systems can operationalize innovations and deliver results.
- Salathé, M. (2018). Digital epidemiology: What is it, and where is it going? Life Sciences, Society and Policy, 14(1). https://doi.org/10.1186/s40504-017-0065-7
- This article provides a definition for the field of “digital epidemiology” and an outlook of how the field is poised to evolve in the coming years. The author provides a succinct introduction to how the increasing availability of data and computing power is transforming epidemiology, intended as a primer to more focused research regarding data analytics and the field’s unique ethical considerations.
- Samerski, S. (2018). Individuals on alert: Digital epidemiology and the individualization of surveillance. Life Sciences, Society and Policy, 14(1). https://doi.org/10.1186/s40504-018-0076-z
- This article provides a critical analysis of how digital epidemiology and the broader “eHealth” movement fundamentally changes the notion of health into a constant state of surveillance. The author argues that as predictive analytics dominates the discourse around population and individual-level health, we are at risk of entering a state of “modus irrealis” or helpless paralysis due to events that may or may not transpire. The views expressed in this article stand in sharp contrast to digital health proponents such as Dr. Eric Topol, who argue that these advances promote autonomy and self-efficacy.
- Samuel, G., & Derrick, G. (2020). Defining ethical standards for the application of digital tools to population health research. Bulletin of the World Health Organization, 98(4), 239-244.
- This article provides a process for ethics governance to be used at higher-education institutions during ex-post reviews of population health AI research. The proposed governance model consists of two levels: (1) the mandated entry of research products into an open-science repository and (2) a sector-specific validation of the research processes and algorithms. Through this ex-post review, the authors believe that the potential for AI systems to cause harm can be reduced before they are disseminated.
- Schwalbe, N., & Wahl, B. (2020). Artificial intelligence and the future of global health. The Lancet, 395(10236), 1579-1586.
- This article discusses how artificial intelligence (AI) may address challenges unique to the field of global health by accelerating the achievement of the health-related sustainable development goals. AI-driven health interventions can aid in the diagnosis of various conditions, mortality risk assessment, disease outbreak prediction and surveillance, and health policy. While AI-driven health interventions have the potential to improve health outcomes in low- and middle-income countries (LMICs), their widespread use is not yet subject to established ethical or regulatory requirements. Guidelines for the development, testing, and application of AI-driven health interventions therefore need to be established in order to facilitate equitable and ethical use.
- Smith, M. J., et al. (2020). Four equity considerations for the use of artificial intelligence in public health. Bulletin of the World Health Organization, 98(4), 290-292.
- Equity, the absence of avoidable or remediable differences among groups, is a foundational concept in global and public health. In this article, the authors outline four equity considerations for designing and deploying artificial intelligence in public health contexts: (1) the digital divide, (2) algorithmic bias and values, (3) plurality of values across systems, and (4) fair decision-making procedures.
- Vayena, E., & Madoff, L. (2019). Navigating the ethics of big data in public health. In A. C. Mastroianni, J. P. Kahn, & N. P. Kass (Eds.), The Oxford Handbook of Public Health Ethics (pp. 354-367). Oxford University Press.
- This article provides an overview of the key ethical challenges for the use of big data in public health. The authors discuss issues such as: (1) privacy, (2) data control and sharing, (3) nonstate actors, (4) harm mitigation, (5) fair distribution of benefits, (6) civic empowerment, and (7) accountability. This article would serve as a useful introduction to those new to the field of public health as the authors ground their discussion around key areas of public health such as health promotion, surveillance, emergency preparedness and response, and comparative effectiveness research.
- Wahl, B., et al. (2018). Artificial intelligence (AI) and global health: How can AI contribute to health in resource-poor settings? BMJ Global Health, 3(4). http://dx.doi.org/10.1136/bmjgh-2018-000798
- Much of the discourse around AI in medicine has focused on high-resource settings, which risks further propagating the digital divide between high- and low/middle-income countries. This review is one of the first to shift this discourse and do so in a solutions-focused manner. The authors draw attention to several important enablers to AI in low-resource settings such as mobile health, open-source electronic medical record systems, and cloud computing.
Chapter 38. Ethics of AI in Law: Basic Questions (Harry Surden)
https://www.doi.org/10.1093/oxfordhb/9780190067397.013.46
- Agrawal, A., et al. (2019). Exploring the impact of artificial intelligence: Prediction versus judgment. Information Economics and Policy, 47, 1-6.
- This article argues that because prediction allows riskier decisions to be taken, it has an impact on observed productivity, though it could also increase the variance of outcomes. However, the authors also demonstrate that better prediction may result in different judgments depending on the context, and therefore not all human judgment will be a complement to AI. Nonetheless, the authors argue that humans will delegate some decisions to machines even when the decision would be superior with human input.
- Agrawal, A., et al. (2018).* Prediction machines: The simple economics of artificial intelligence. Harvard Business Review Press.
- The authors show how the predictive power of AI can be used in the face of uncertainty, to increase productivity, and to develop strategies. The authors employ an economic framework to explain the impacts of this adoption of AI.
- Alarie, B., et al. (2018). How artificial intelligence will affect the practice of law. University of Toronto Law Journal, 68(supplement 1), 106-124. https://doi.org/10.3138/utlj.2017-0052
- This article outlines the current and anticipated impact of AI on the legal profession and legal services. The article tracks the history of legal information and discusses how AI can use legal data to answer legal questions and develop predictive tools for the legal domain. The article suggests that AI could transform how lawyers perform legal work and deliver legal services.
- Angwin, J., & Larson, J. (2016).* Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
- The authors cite anecdotal evidence and sentencing patterns to argue that algorithms tasked with predicting a particular person’s potential for future criminal activity are biased along racial lines.
- Barocas, S., & Selbst, A. D. (2016).* Big data’s disparate impact. California Law Review, 104(3), 671. https://doi.org/10.15779/Z38BG31
- This article examines concerns that flawed or biased data can undermine the supposed ability of algorithmic methods to eliminate human biases from the decision-making process, through the lens of American antidiscrimination law, particularly Title VII’s prohibition of discrimination in employment. The authors argue that finding a solution to this issue will require more than the mitigation of prejudice and bias; it will require a wholesale reexamination of the meanings of “discrimination” and “fairness”.
- Bloch-Wehba, H. (2019). Access to algorithms. Fordham Law Review, 88(4), 1265-1314.
- This article describes concerns regarding the use of opaque algorithms in the public sector, particularly in healthcare, education, and criminal law enforcement. To address these concerns and promote public accountability and transparency in automated decision-making, the article proposes drawing on freedom of information laws and the constitutional right to freedom of expression.
- Brayne, S., & Christin, A. (2021). Technologies of crime prediction: The reception of algorithms in policing and criminal courts. Social Problems, 68(3), 608-624.
- This study examines the use of predictive algorithms in policing and criminal courts. It discusses how predictive technologies “displace discretion” to less visible and less accountable areas of organizations and examines the implications of this shift for the administration of justice in the age of big data.
- Calo, R. (2018).* Artificial intelligence policy: A primer and roadmap. University of Bologna Law Review, 3(2), 180-218.
- The essay aims to help policymakers, investors, scholars, and students understand the contemporary policy environment around artificial intelligence and the key challenges it presents, providing a basic roadmap of the issues surrounding the implementation of AI in the current environment.
- Casey, B., et al. (2019). Rethinking explainable machines: The GDPR’s “right to explanation” debate and the rise of algorithmic audits in enterprise. Berkeley Technology Law Journal, 34(1), 143-188.
- This article discusses the interpretation of, and debate surrounding, the General Data Protection Regulation’s “right to explanation.” The article suggests that this right, coupled with the practices of algorithmic auditing and data protection by design, will have sweeping legal and practical implications for the design, testing, and deployment of machine learning systems.
- Citron, D. K. (2008).* Technological due process. Washington University Law Review, 8(5), 1249-1313.
- This article aims to demonstrate how a carefully structured inquisitorial model of quality control can partially replace aspects of adversarial justice that automation renders ineffectual. It also provides a framework for technological due process that preserves transparency, accountability, and accuracy of rules in automated decision-making systems.
- Coglianese, C., & Lehr, D. (2019). Transparency and algorithmic governance. Administrative Law Review, 71(1), 1-56.
- This article examines the use of machine learning in government decision-making by inquiring whether the opaqueness of machine learning can be reconciled with the legal principles of governmental transparency. By distinguishing between different types of transparency, the authors suggest that the opaqueness of machine learning does not pose a legal barrier to the responsible use of machine learning by governmental authorities.
- Corbett-Davies, S., et al. (2017). Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 797-806). Association for Computing Machinery. https://doi.org/10.1145/3097983.3098095
- This paper argues that the objective of algorithmic fairness should be reframed as an optimization problem: maximizing public safety while satisfying formal fairness constraints designed to reduce racial disparities.
- Deeks, A. (2019). The judicial demand for explainable artificial intelligence. Columbia Law Review, 119(7), 1829-1850.
- This article argues that, in confronting machine learning algorithms in criminal, administrative, and civil cases, judges should demand explanations for algorithmic decisions. The author suggests that if judges demand such explanations they will be able to make a unique contribution to shaping the expectations, rules, and norms in the emerging field of explainable AI.
- Elyounes, D. A. (2019). Bail or jail? Judicial versus algorithmic decision-making in the pretrial system. Columbia Science and Technology Law Review, 21, 376.
- The paper examines the deployment of artificial intelligence in existing risk assessment tools and whether it realizes the fears emphasized by opponents of automation or improves the criminal justice system. Focusing on the pretrial stage, it provides an in-depth examination of the seven most commonly used risk-assessment tools and presents policy recommendations.
- Hacker, P., et al. (2020). Explainable AI under contract and tort law: Legal incentives and technical challenges. Artificial Intelligence and Law, 28(4), 1-25. https://doi.org/10.1007/s10506-020-09260-6
- This article discusses the legal rules and discourse concerning the explainability requirements imposed on AI systems. Using case studies from medical diagnostics and corporate law, the article indicates that the notion of explainability extends into legal fields beyond data protection law. The authors present a technical case study examining the tradeoff between accuracy and explainability.
- Hartmann, K., & Wenzelburger, G. (2021). Uncertainty, risk and the use of algorithms in policy decisions: A case study on criminal justice in the USA. Policy Sciences, 54(2), 269-287.
- This paper examines algorithmic decision making (ADM) in the public domain, with a particular focus on risk assessment tools in the criminal justice sector. The authors argue that the use of ADM has deeply transformed the decision-making process, mainly because the evidence generated by the algorithm introduces a notion of statistical prediction. To illustrate their argument, the authors present a study examining the implementation of risk assessment software in the criminal justice system of Eau Claire County, Wisconsin, USA.
- Hervey, M., & Lavy, M. (2021). The law of artificial intelligence. Sweet & Maxwell.
- This book examines how existing civil and criminal law will apply to AI and explores the role of emerging laws designed specifically for AI. Topics include liability arising in connection with the use of AI, the impact of AI on intellectual property, data protection, smart contracts, and the deployment of AI in legal services and the justice system.
- Kaminski, M. E. (2019).* The right to explanation, explained. Berkeley Technology Law Journal, 34(1), 189-218. https://doi.org/10.15779/Z38TD9N83H
- This article explores how the EU’s General Data Protection Regulation (GDPR) establishes algorithmic accountability: laws governing decision-making by complex algorithms or AI. It argues that the GDPR provisions on algorithmic accountability, in addition to including a right to explanation (a right to information about individual decisions made by algorithms), could be broader, stronger, and deeper than the preceding requirements of the Data Protection Directive.
- Kleinberg, J. (2018).* Inherent trade-offs in algorithmic fairness. In Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems (p. 40). https://doi.org/10.1145/3219617.3219634
- This article explores the way classifications made by algorithms create tension between competing notions of what it means for such a classification to be fair to different groups. The author then presents several of the key fairness conditions and the inherent trade-offs between them.
- Kleinberg, J., et al. (2016). Inherent trade-offs in the fair determination of risk scores. arXiv:1609.05807
- The article explores how algorithmic classification involves tension between competing notions of what it means for a probabilistic classification to be fair to different groups. After formalizing three fairness conditions that lie at the heart of these debates, the authors show that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Thus, the article argues that key notions of fairness are incompatible with each other, and hence seeks to provide a framework for thinking about the trade-offs between them.
- Kroll, J. A., et al. (2016).* Accountable algorithms. University of Pennsylvania Law Review, 165(3), 633-706.
- This article argues that transparency will not solve the problems of automated decision systems, such as returning potentially incorrect, unjustified, or unfair results. Disclosure of source code is often neither necessary (because of alternative techniques from computer science) nor sufficient (because of the difficulty of analyzing code) to demonstrate the fairness of a process.
- Land, M. K., & Aronson, J. D. (2020). Human rights and technology: New challenges for justice and accountability. Annual Review of Law and Social Science, 16, 223-240.
- The paper addresses different challenges in the field of technology and human rights. In particular, the authors focus on the use of AI in decision making in both the public and private sectors – e.g., in criminal justice, employment, public service, and financial contexts. The authors suggest that responding to the impact of new technologies, such as AI decision making, calls for a fundamental shift in the approach to technology and private action, including increased transparency, oversight, and scrutiny.
- Liu, H. W., et al. (2019). Beyond State v Loomis: Artificial intelligence, government algorithmization and accountability. International Journal of Law and Information Technology, 27(2), 122-141.
- This article focuses on the normative implications of using data-driven technologies in various government functions, including the challenge it poses to due process, equal protection, and transparency, as well as the accountability of the public sector in these areas. To develop the normative arguments and illustrate these challenges, the authors focus on the use of data analytics in the criminal justice system, highlighting concerns and perspectives raised by a discussion of State v. Loomis.
- Martin, K. (2019). Ethical implications and accountability of algorithms. Journal of Business Ethics, 160(4), 835-850.
- This article argues that developers bear responsibility for their algorithms later in use, and that firms should be responsible not only for the value-ladenness of an algorithm but also for designing who-does-what within the algorithmic decision. Thus, firms developing algorithms are accountable for designing how large a role individuals will be permitted to take in the subsequent algorithmic decision.
- Miller, T. (2019).* Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38.
- This paper argues that researchers and practitioners who seek to make their algorithms more understandable should utilize research done in the fields of philosophy, psychology, and cognitive science to understand how people define, generate, select, evaluate, and present explanations, and account for how people employ certain cognitive biases and social expectations towards the explanation process.
- Mulligan, D., & Bamberger, K. (2018).* Saving governance-by-design. California Law Review, 106(3), 697-784.
- This article argues that “governance-by-design”—the purposeful effort to use technology to embed values—is quickly becoming a significant influence on policy making, and that the existing regulatory system is fundamentally ill-equipped to prevent technology-based governance from subverting public governance.
- Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
- This book explores and analyzes the results generated by Google search algorithms and argues that search algorithms reproduce racist biases because they embed the biases and values of the people who created them.
- Pasquale, F. (2015).* The black box society: The secret algorithms that control money and information. Harvard University Press.
- The author explores the power of ‘hidden algorithms’. He argues that such algorithms permit self-serving and reckless behavior and shows how powerful interests abuse the secrecy of these algorithms for profit. Thus, transparency must be demanded of firms, such that they accept as much accountability as they impose on others.
- Pasquale, F. (2019). A rule of persons, not machines: The limits of legal automation. George Washington Law Review, 87(1), 1-55.
- This paper focuses on the automation of law and legal services. It explores the legal problems that occur in the translation of language into computer code and suggests an alternative approach to automation – technology used as a complementary tool to the attorney’s skills rather than being the attorney’s replacement. This approach would enable workers in the legal profession to maintain the complexity and subtlety of the legal language while still benefiting from emerging technologies.
- Pasquale, F. (2020). New laws of robotics: Defending human expertise in the age of AI. Harvard University Press.
- Recalling Isaac Asimov’s Three Laws of Robotics, the author proposes four new laws for governing AI. First, AI should complement professionals, not replace them. Second, AI should not counterfeit humanity. Third, AI should not intensify zero-sum arms races. Fourth, AI systems must always indicate the identity of their creator(s), controller(s), and owner(s). The book presents examples and case studies in healthcare, education, media, and other domains to support these new laws for the governance of AI.
- Prince, A. E., & Schwarcz, D. (2020). Proxy discrimination in the age of artificial intelligence and big data. Iowa Law Review, 105, 1257.
- This paper examines discriminatory patterns in modern AI decisions. The authors argue that “proxy discrimination”, a facially neutral practice that disproportionately harms members of a protected class in ways that cannot be captured directly, usually goes unchecked, poses risks of new forms of discrimination, and undermines the core goals of all antidiscrimination regimes.
- Richards, N. M. (2012).* The dangers of surveillance. Harvard Law Review, 126(7), 1934-1965.
- This article aims to explain and highlight the harms of government surveillance. The author uses work from multiple disciplines such as law, history, literature, and the work of scholars in the emerging interdisciplinary field of “surveillance studies,” to define what those harms are and why they matter.
- Selbst, A. D., & Barocas, S. (2018).* The intuitive appeal of explainable machines. Fordham Law Review, 87(3), 1085-1139.
- The authors aim to show what makes decisions made by algorithms seem inexplicable, by examining what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation.
- Speicher, T., et al. (2018). A unified approach to quantifying algorithmic unfairness: Measuring individual & group unfairness via inequality indices. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 2239-2248). Association for Computing Machinery. https://doi.org/10.1145/3219819.3220046
- This paper aims to explore how to determine what makes one algorithm more unfair than another. The authors use existing inequality indices from economics to measure how unequally the outcomes of an algorithm benefit different individuals or groups in a population.
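As an illustration of the approach, here is a minimal Python sketch (not the authors’ code) of the generalized entropy index, the economics-derived inequality measure the paper builds on; the benefit definition b_i = ŷ_i − y_i + 1 follows the paper’s running example for binary decisions.

```python
# Minimal sketch (ours, not the authors' code) of the generalized entropy
# index used to measure how unequally an algorithm's "benefits" are spread.
import numpy as np

def generalized_entropy_index(benefits: np.ndarray, alpha: float = 2.0) -> float:
    """Inequality of benefits across individuals; 0 means perfect equality."""
    b = np.asarray(benefits, dtype=float)
    mu = b.mean()
    return ((b / mu) ** alpha - 1).sum() / (len(b) * alpha * (alpha - 1))

# Example: predictions y_hat vs. true outcomes y for six individuals.
y_hat = np.array([1, 0, 1, 1, 0, 0])
y     = np.array([1, 1, 0, 1, 0, 0])
benefits = y_hat - y + 1  # 2 = false positive, 1 = correct, 0 = false negative
print(generalized_entropy_index(benefits))  # higher = more unequal
```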
- State v. Loomis, 881 N.W.2d 749 (Wis. 2016). www.wicourts.gov/sc/opinion/DisplayDocument.pdf?content=pdf&seqNo=171690
- A 2016 decision of the Wisconsin Supreme Court, in which the Court held that judges may consult an algorithmic recidivism assessment program. The court emphasized that the algorithmic assessment must not replace the judge’s discretion; it should merely aim to provide the court with more complete and accurate information with which to decide the case.
- Strandburg, K. J. (2019). Rulemaking and inscrutable automated decision tools. Columbia Law Review, 119(7), 1851-1886.
- This article discusses the role of explanation in developing criteria for automated government decision making and rulemaking. The article analyzes whether, and how, the inscrutability of automated decision tools undermines the traditional functions of explanation in rulemaking. It contends that explanations of decision tool design, function, and use are helpful measures that can perform some of these traditional functions.
- Surden, H. (2019).* Artificial intelligence and law: An overview. Georgia State University Law Review, 35(4), 1305-1337.
- This paper aims to provide a concrete survey of the current applications and uses of AI within the context of the law, without straying into discussions about AI and law that are futurist in nature. It aims to highlight a realistic view that is rooted in the actual capabilities of AI technology as it currently stands.
- Susskind, R. E., & Susskind, D. (2015). The future of the professions: How technology will transform the work of human experts. Oxford University Press.
- The authors argue that our current professions are antiquated, opaque, and no longer affordable, and that the expertise of their best practitioners is enjoyed only by a few. They thus present an exploration of the ethical issues that arise when machines can outperform human beings at most tasks, examining how technological change will affect prospects for employment, who should own and control online expertise, and what tasks should be reserved exclusively for people.
- Turner, J. (2018). Robot rules: Regulating artificial intelligence. Springer.
- This book addresses the legal and ethical frameworks for regulating activities, rights, and responsibilities in connection with AI actors. The book discusses who is, and should be, liable for the actions of AI and considers the possibility of granting rights to AI entities. The book suggests that new legal institutions and structures are needed to confront these challenges.
Chapter 39. Beyond Bias: “Ethical AI” in Criminal Law (Chelsea Barabas)
https://www.doi.org/10.1093/oxfordhb/9780190067397.013.47
- Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671-732. https://doi.org/10.15779/Z38BG31
- This article argues that algorithms should not be taken as sufficient tools for making impartial and fair decisions due to the impacts of pervasive biases in the data they use. The authors draw on the disparate impact doctrine developed in American anti-discrimination law to clarify the implicit bias present in algorithms. The authors highlight the significant difficulties in addressing algorithmic discrimination, including both the technical challenges within data mining practices and the legal challenges beyond. As a result, they conclude that “fairness” and “discrimination” may need to be entirely re-examined.
- Benjamin, R. (2016).* Catching our breath: Critical race STS and the carceral imagination. Engaging Science, Technology, and Society, 2, 145-156.
- This article uses science and technology studies along with critical race theory to examine the proliferation and intensification of carceral approaches to governing human life. The author argues in favor of an expanded understanding of “the carceral” that extends beyond the domain of policing to include forms of containment that make innovation possible in the contexts of health and medicine, education and employment, border policies, and virtual realities.
- Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 81, 149-159. http://proceedings.mlr.press/v81/binns18a.html
- This article discusses research on fair AI and algorithmic decision-making by drawing parallels to contemporary political philosophy. By considering various philosophical accounts of discrimination and egalitarianism, the author delineates how political philosophy can shed light on AI fairness research where data-driven and algorithmic methods do not. The author argues that fairness, narrowly construed as a property of algorithms and data, does not adequately address the context-sensitive questions of justice surrounding these sociotechnical systems.
- Bosworth, M. (2019). Affect and authority in immigration detention. Punishment & Society, 21(5), 542-559.
- This article considers the relationship between authority and affect by drawing on a long-term research project across a number of British Immigration Removal Centres (IRCs). The author argues that staff authority rests on an abrogation of their self rather than engagement with the other. This is in contrast to much criminological literature on the prison, which advances a liberal political account in which power is constantly negotiated and based on mutual recognition.
- Brown, M., & Schept, J. (2017).* New abolition, criminology and a critical carceral studies. Punishment & Society, 19(4), 440-462.
- This article argues that criminology has been slow to open up a conversation about decarceration and abolition. The authors advocate for and discuss the contours of critical carceral studies, a growing interdisciplinary movement for engaged scholarly and activist production against the carceral state.
- Chugh, N. (2021). Risk assessment tools on trial: Lessons learned for “Ethical AI” in the criminal justice system. 2021 IEEE International Symposium on Technology and Society (ISTAS), 1–5. https://doi.org/10.1109/ISTAS52410.2021.9629143
- This paper considers the ethical challenges posed by risk assessments and the implementation of AI in the criminal justice system, focusing on the reasoning in Ewert v Canada and its subsequent judicial treatment. The author suggests three primary lessons: (1) that data-driven decision-making, as an early form of AI, has been used in the Canadian justice system for decades; (2) that Canadian common law requires courts to prioritize individual factors and to consider the systemic constraints upon defendants, and in particular, Indigenous defendants; and (3) that despite these risks and contrary directives in law, risk assessments and AI continue to be used and, concerningly, their use continues to expand.
- Corbett-Davies, S., et al. (2017).* Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 797-806). Association for Computing Machinery. https://doi.org/10.1145/3097983.3098095
- The article aims to reformulate algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. The authors show that for several past definitions of fairness, the optimal algorithms that result require detaining defendants above race-specific risk thresholds.
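The constrained-optimization framing can be stated compactly. The following is a hedged reconstruction in notation chosen here, not the paper’s exact formulation: d(x) ∈ {0,1} is the detention decision, p(x) the estimated risk of reoffending, and c the cost of detention per crime prevented.

```latex
% Hedged reconstruction of the framing (notation ours, not the paper's).
\[
  \max_{d}\ \mathbb{E}\!\left[\, d(X)\,\bigl(p(X) - c\bigr) \,\right]
  \quad \text{subject to a formal fairness constraint.}
\]
% Unconstrained, the optimum is the single-threshold rule d(x) = 1 iff p(x) >= c;
% under constraints such as statistical parity, the optimum becomes a family of
% group-specific thresholds: d(x) = 1 iff p(x) >= t_{g(x)}.
```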
- Davis, J. L., et al. (2021). Algorithmic reparation. Big Data & Society, 8(2), 20539517211044810. https://doi.org/10.1177/20539517211044808
- The authors argue that existing techniques to increase fairness in machine learning (ML), based on mathematical techniques like classification parity or calibration standards, fall short and operate upon an “algorithmic idealism” that cannot address systemic, intersectional stratifications. The authors instead suggest the practice of “algorithmic reparation”, which utilizes reparative algorithms rooted in theories of intersectionality and which serves as a foundation for “building, evaluating, adjusting, and when necessary, omitting and eradicating machine learning systems.” The article highlights injustices in the use of ML in criminal sentencing and algorithmic policing, with a view to why traditional concepts of fairness, such as reducing bias, are insufficient.
- Elliott, D. S. (1995).* Lies, damn lies, and arrest statistics. Center for the Study and Prevention of Violence.
- This paper argues that most research on the parameters of a criminal career that utilizes arrest data to estimate the underlying behavioral dynamics of criminal activity is flawed. The author argues that generalizing findings from analyses of arrest records to the underlying patterns and dynamics of criminal behavior and the characteristics of offenders in the general population is likely to lead to incorrect conclusions and ineffective policies and practices, and ultimately to undermine efforts to understand, prevent, and control criminal behavior.
- Ferguson, A. G. (2016).* Policing predictive policing. Washington University Law Review, 94(5), 1109-1189.
- This article examines predictive policing’s evolution and aims to provide a practical and theoretical critique of this new policing strategy that promises to prevent crime before it happens. Building on insights from scholars who have addressed the rise of risk assessment throughout the criminal justice system, the author provides an analytical framework to police new predictive technologies.
- Flynn, A., et al. (2021). Disrupting and preventing deepfake abuse: Exploring criminal law responses to AI-facilitated abuse. In The Palgrave Handbook of Gendered Violence and Technology (pp. 583-603). Palgrave Macmillan, Cham.
- In addition to expanding options for judicial responses to crime, AI also makes entirely new crimes possible. This paper examines the impact of “deepfakes”, an AI image generation technique sometimes used criminally to synthesize realistic fake images or videos that appear to show a victim performing, e.g., sexual acts. Since this crime does not require any direct contact between the perpetrator and the victim, it opens up new avenues for abuse. The authors suggest possible criminal law responses to deepfake abuse and argue that the area of AI-enabled crime requires further research.
- Green, B., & Chen, Y. (2019). Disparate interactions: An algorithm-in-the-loop analysis of fairness in risk assessments. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 90-99). Association for Computing Machinery. https://doi.org/10.1145/3287560.3287563
- Despite the wealth of research developing algorithmic tools for risk assessment, very little work has focused on evaluating how actual human decision-makers use these tools. Through an online laboratory experiment, this paper quantifies how human operators assess defendant crime risk when guided by an algorithmic risk score. Study participants were unable to match the algorithm’s performance even when shown its predictions. Furthermore, participants could not discern when they were making high-quality predictions and discriminated more against Black defendants when shown the algorithm’s scores.
- Hamilton, M. (2021). Evaluating algorithmic risk assessment. New Criminal Law Review, 24(2), 156–211. https://doi.org/10.1525/nclr.2021.24.2.156
- This paper considers the success or validity of risk assessment tools in terms of their ability to discriminate between classes and their predictive ability (i.e. absolute accuracy against future results). The author focuses on one popular US risk assessment tool, known as the Public Safety Assessment, finding it to have differential validity across different jurisdictions. Notably, the author remains optimistic about the use of risk assessment in criminal justice, including their ability to correct for biases in human judgment, suggesting that jurisdictions using off-the-shelf risk assessment tools validate the tool for their specific jurisdiction and perform adjustments to the classification and calibration properties of the model.
- Harcourt, B. E. (2008).* Against prediction: Profiling, policing, and punishing in an actuarial age. University of Chicago Press.
- In this book, the author argues that prediction tools can increase the overall amount of crime in society, depending on the relative responsiveness of the profiled and non-profiled populations to heightened policing. The author proposes a turn away from prediction and toward randomization in punishment and policing.
- Hoffmann, A. L. (2019). Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7), 900-915. https://doi.org/10.1080/1369118X.2019.1573912
- The author critically analyzes three limits of fairness and anti-discrimination discourse in capturing the social injustices latent in Big Data. Firstly, this research is narrowly centered on the law’s concern with individual perpetrators, instead of addressing broader systemic injustices. Secondly, anti-discrimination discourse is especially focused on the notion of disadvantage on singular axes, such as race, without considering intersectional injustices. Thirdly, fairness is concerned with distributions of resources and opportunities, without acknowledging how social infrastructure enables the utilization of these resources.
- Hogan, N. R., et al. (2021). On the ethics and practicalities of artificial intelligence, risk assessment, and race. Journal of the American Academy of Psychiatry and the Law, 49(3), 326–334. https://doi.org/10.29158/JAAPL.200116-20
- This paper offers a review of ethical concerns surrounding the practical application of AI prediction models to violence risk assessments (that might otherwise be performed by forensic psychiatrists). In addition to reviewing ethical risks, the authors include an overview of the actual systems currently in use and empirical evidence of racial bias in these models. They differentiate violence risk assessment from other simpler medical classification problems at which AI might excel.
- Huq, A. Z. (2018). Racial equity in algorithmic criminal justice. Duke Law Journal, 68(6), 1043-1134.
- This article considers the interaction of algorithmic tools for predicting violence and criminality that are increasingly deployed in policing, bail, and sentencing, with the enduring racial dimensions of the criminal justice system. The author then argues that a criminal justice algorithm should be evaluated in terms of its long-term, dynamic effects on racial stratification.
- Jefferson, B. J. (2017). Digitize and punish: Computerized crime mapping and racialized carceral power in Chicago. Environment and Planning D: Society and Space, 35(5), 775-796.
- This article puts critical geographic information systems theory into conversation with critical ethnic studies to argue that CLEARmap, the Chicago police’s digital mapping application, does not passively “read” urban space. Rather, it provides ostensibly scientific ways of reading and policing negatively racialized fractions of surplus labor in ways that reproduce, and in some instances extend, the reach of carceral power.
- Kleinberg, J., et al. (2018). Human decisions and machine predictions. The Quarterly Journal of Economics, 133(1), 237-293. https://doi.org/10.1093/qje/qjx032
- This article investigates the extent to which predictions made by AI systems can outperform humans in judicial contexts. Specifically, the authors consider historical bail decisions made in New York and build algorithmic models of how judges balance the outcomes of incarceration and release. They find that their models can reduce failure-to-appear and crime rates by up to 24.7% with no change in jailing rates, or jailing rates by up to 41.9% with no change in crime rates. Additionally, they demonstrate that their method achieves these improvements while simultaneously improving racial parity.
- Kleinberg, J., et al. (2018).* Algorithmic fairness. AEA Papers and Proceedings, 108, 22-27.
- This paper argues that concerns that algorithms may discriminate against certain groups, which have led to numerous efforts to “blind” algorithms to race, are misleading and may do harm. The authors argue that equity preferences can change how the estimated prediction function is used (e.g., different thresholds for different groups), but the function itself should not change.
- Kleinberg, J., et al. (2018). Discrimination in the age of algorithms. Journal of Legal Analysis. https://doi.org/10.1093/jla/laz001
- This paper argues that the use of algorithms will make it easier to examine and interrogate the entire legal process and therefore identify whether discrimination has occurred.
- Kleinberg, J., et al. (2016). Inherent trade-offs in the fair determination of risk scores. arXiv:1609.05807v2
- The article explores how algorithmic classification involves a tension between competing notions of what it means for a probabilistic classification to be fair to different groups. After formalizing three fairness conditions that lie at the heart of these debates, the authors show that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Thus, the authors argue that key notions of fairness are incompatible with each other, and hence they seek to provide a framework for thinking about the trade-offs between them.
- Lyon, D. (2014). Surveillance, Snowden, and big data: Capacities, consequences, critique. Big Data & Society, 1(2). https://doi.org/10.1177/2053951714541861
- This article explores the extent to which the Snowden disclosures indicated that Big Data practices are becoming increasingly important to surveillance and, if so, what this indicates about changes in the politics and practices of surveillance. The author analyzes the capacities of Big Data and their social-political consequences, and then comments on the kinds of critique that may be appropriate for assessing and responding to these developments.
- Mayson, S. G. (2018). Bias in, bias out. Yale Law Journal, 128(8), 2218-2300.
- This paper argues that strategies currently put in place to mitigate algorithmic discrimination are at best superficial and at worst counterproductive, because the source of racial inequality in risk assessment lies neither in the input data, nor in a particular algorithm, nor in algorithmic methodology per se. The problem is the nature of prediction itself, since all prediction looks to the past to make guesses about future events. In a racially stratified world, any method of prediction will project the inequalities of the past into the future.
- McKay, C. (2020). Predicting risk in criminal procedure: Actuarial tools, algorithms, AI and judicial decision-making. Current Issues in Criminal Justice, 32(1), 22-39.
- This paper looks at the various ways that machine learning, algorithmic, and actuarial techniques have become part of the criminal justice system, specifically focusing on risk assessment. The author highlights that the algorithms used in practice are often proprietary products whose code and exact functioning are kept private. The use of these proprietary algorithms then limits the explainability of decisions, which makes the criminal justice process opaque to all participants.
- Mohamed, S., et al. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33(4), 659-684. https://doi.org/10.1007/s13347-020-00405-8
- This paper explores the hidden power dynamics underlying AI development through the lens of coloniality in science, specifically, critical decolonial theory. The authors illustrate how algorithms involve and lead to the oppression, exploitation, and dispossession of the vulnerable. To guide decolonial AI design, they argue that historical lessons of resistance suggest three key tactics: supporting critical technical practices, establishing reciprocal engagements between the powerful and the powerless, and strengthening political communities in AI.
- Movva, R. (2021). Fairness deconstructed: A sociotechnical view of “fair” algorithms in criminal justice. arXiv:2106.13455 [cs]. http://arxiv.org/abs/2106.13455
- This paper uses a socio-technical lens to highlight a gap between algorithmic fairness in theory and in practice in the criminal justice system. It argues that (1) analyses of fairness that consider algorithmic output without the relevant social context are insufficient and (2) that most literature on ML fairness in criminal justice fails to consider epistemological concerns about underlying crime data. The author suggests that AI should not be built to amplify existing power imbalances through risk assessment, but instead data science should be used to help understand the “root causes of structural marginalization.”
- Muhammad, K. G. (2008).* The condemnation of blackness. Harvard University Press.
- This book reveals the influence that ideas of Black criminality, including deeply embedded notions of Black people as a dangerous race of criminals defined in explicit contrast to working-class whites and European immigrants, as well as African Americans’ own ideas about race and crime, have had on urban development and social policies.
- Ogbonnaya-Ogburu, I. F., et al. (2020). Critical race theory for HCI. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-16). Association for Computing Machinery. https://doi.org/10.1145/3313831.3376392
- Seeking to guide the development of racially diverse sociotechnical systems, this paper calls for the introduction of critical race theory insights to the field of human-computer interaction. The authors provide an overview of the central tenets of critical race theory, such as the everyday universality of racism, and identify key areas for their incorporation. Additionally, the authors apply the storytelling methodology of critical race theory to describe their own experiences as racialized people performing computational research. From this discussion, the authors conclude with a call for anti-racist action by HCI practitioners.
- Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. https://nyupress.org/9781479837243/algorithms-of-oppression/
- This book proposes that search engines are neither benign nor neutral, and instead operate to conceal and amplify social biases. The author unveils the distorted representation of women and racial minorities on platforms like Google’s image search and discusses the harms inflicted on these communities during the creation and use of search engines. The author argues that the private interests of a few monopolistic sites enable this pattern of digital oppression.
- Pleiss, G., et al. (2017). On fairness and calibration. Advances in Neural Information Processing Systems, 30, 5680-5689.
- This article investigates the tension between minimizing error disparity across different population groups and maintaining calibrated probability estimates. The authors argue that calibration is compatible only with a single error constraint (i.e. equal false-negative rates across groups), and they show that any algorithm satisfying this relaxation is no better than randomizing a percentage of predictions of an existing classifier.
- Roberts, D. E. (2003). The social and moral cost of mass incarceration in African American communities. Stanford Law Review, 56(5), 1271-1306.
- While many studies focus on the potential causes of racial discrepancies in the American prison system, this article instead examines the costs inflicted by the mass incarceration of African Americans. It considers three aspects of Black communities that are harmed through mass imprisonment: social networks, social norms, and social citizenship. The author contends that these community-level harms illustrate the disproportionate attention paid to criminality compared to other consequences of incarceration, such as the political subordination of African Americans. The author proposes that the justifications for punishment need to be radically rethought in this context.
- Selbst, A. D. (2017). Disparate impact in big data policing. Georgia Law Review, 52(1), 109-195.
- This paper argues that the degree to which predictive policing systems produce discriminatory results is unclear to the public and to the police themselves, largely because there is no incentive for a department focused solely on “crime control” to spend resources asking the question. Thus, the author offers a new regulatory proposal centered on “algorithmic impact statements” to mitigate the issues created by predictive systems.
- Stevenson, M. (2018).* Assessing risk assessment in action. Minnesota Law Review 103(1), 303-384.
- This article documents the impacts of risk assessment in practice and argues that risk assessment had no effect on racial disparities in pretrial detention once differing regional trends were accounted for. Drawing on data from more than one million criminal cases, the author highlights that a 2011 law making risk assessment a mandatory part of the bail decision led to a significant change in bail-setting practice, but only a small increase in pretrial release.
Chapter 40. “Fair Notice” in the Age of AI (Kiel Brennan-Marquez)
https://www.doi.org/10.1093/oxfordhb/9780190067397.013.48
- Abu-Elyounes, D. (2020). Contextual fairness: A legal and policy analysis of algorithmic fairness. University of Illinois Journal of Law, Technology and Policy, 2020(1), 1-55.
- This paper discusses the legal limitations of computer science (CS) notions of fairness and suggests a typology matching CS notions to their corresponding legal mechanisms. The paper concludes that fairness is contextual: each notion corresponds to, and is suitable for, a particular policy domain. The paper provides examples of how the CS notions might apply in particular policy domains.
- Atkinson, K., et al. (2020). Explanation in AI and law: Past, present and future. Artificial Intelligence, 289, 103387. https://doi.org/10.1016/j.artint.2020.103387
- The authors offer a review of the different techniques that are used to explain automated decisions made in legal contexts, describing how these tools have developed, and flagging gaps that remain. They argue that law is an exemplary context in which to study the problem of AI explainability due to the high standards of transparency required for the legal context.
- Bambauer, J., & Zarsky, T. (2018). The algorithm game. Notre Dame Law Review, 94, 1.
- The paper addresses so-called “algorithmic gaming”, a dynamic process by which both subjects and algorithms change in response to one another. Notably, this process affects the fairness and equity of the algorithms. The authors argue that the law already regulates this “algorithmic dance” in direct and indirect ways, but suggest that it should do so in a structural way, so that lawmakers make their value hierarchies more transparent. Finally, they present a basic suggested framework for doing so.
- Brennan-Marquez, K. (2017).* Plausible cause: Explanatory standards in the age of powerful machines. Vanderbilt Law Review, 70(4), 1249-1302.
- This article argues that statistical accuracy, though important, is not the crux of explanatory standards; the value of human judges lies in their practiced wisdom rather than their analytic power. The author replies to a common argument against replacing judges, which claims that intelligent machines are not (yet) intelligent enough to take up the mantle. The reply highlights that powerful intelligent algorithms already exist and, furthermore, that judging is not about intelligence but about prudence.
- Brennan-Marquez, K. (2019).* Extremely broad laws. Arizona Law Review, 61(3), 641-666.
- This article argues that extremely broad laws offend due process because they afford state officials practically boundless justification to interfere with private life. Thus, the article explores how courts might tackle the breadth problem in practice—and ultimately suggests that judges should be empowered to hold statutes “void-for-breadth.”
- Bushway, S. D. (2020). “Nothing is more opaque than absolute transparency”: The use of prior history to guide sentencing. Harvard Data Science Review, 2(1). https://doi.org/10.1162/99608f92.468468af
- The author responds to Rudin and colleagues (2020), arguing that their focus on transparency and advocacy for simplified risk algorithms ignores the fact that using criminal histories for these predictions is not only unfair, but also unreliable. The author makes this case by critiquing how past sentencing reforms have sought to standardize sentencing by removing judicial discretion based upon the flawed assumption that a criminal history is a reliable indicator of human behavior.
- Citron, D. K., & Pasquale, F. (2014).* The scored society: Due process for automated predictions. Washington Law Review, 89(1), 1-34.
- This article argues that while automated scoring may be pervasive and consequential, it is also opaque and lacking oversight. Thus, automated scoring must be implemented alongside protections, such as testing scoring systems to ensure their fairness and accuracy, otherwise systems could launder biased and arbitrary data into powerfully stigmatizing scores.
- Cohen, J. E. (2012). Configuring the networked self: Law, code, and the play of everyday practice. Yale University Press.
- This book argues that legal and technical rules governing flows of information are out of balance, as flows of cultural and technical information are overly restricted, while flows of personal information often are not restricted at all.
- Crawford, K., & Schultz, J. (2014).* Big data and due process: Toward a framework to redress predictive privacy harms. Boston College Law Review, 55(1), 93-128.
- This article highlights how Big Data has vastly increased the scope of personally identifiable information and how poor execution of Big Data methodology may create additional harms by rendering inaccurate profiles that nonetheless impact an individual’s life and livelihood. Thus, the article argues for a mitigation of predictive privacy harms through a right to procedural data due process.
- Delacroix, S. (2018). Computer systems fit for the legal profession? Legal Ethics, 21(2), 119-135.
- This article argues against the conception that wholesale automation is both legitimate and desirable provided it improves the quality and accessibility of legal services, claiming that such automation comes at the cost of moral equality. In response, the author proposes designing systems that better enable legal professionals to live up to their specific responsibilities by ensuring that the systems are profession-specific, in contrast to generalized automation.
- Ferguson, A. G. (2019). The rise of big data policing: Surveillance, race, and the future of law enforcement. NYU Press.
- This book discusses the consequences of big data and algorithm-driven policing and its impact on law enforcement. It then explores how technology will change law enforcement and its potential threat to the security, privacy, and constitutional rights of citizens.
- Froomkin, A. M., et al. (2019). When AIs outperform doctors: Confronting the challenges of a tort-induced over-reliance on machine learning. Arizona Law Review, 61(1), 33-100.
- This article argues that, in medical diagnosis, a combination of human and machine is currently more effective than either alone, but that in time machines will improve and become more effective, creating overwhelming legal and ethical pressure to delegate the diagnostic process to the machine. Existing medical malpractice law will then come to require superior ML-generated medical diagnostics as the standard of care in clinical settings.
- Grimmelmann, J., & Westreich, D. (2017).* Incomprehensible discrimination. California Law Review Online, 7, 164-177.
- This article explores and replies to Barocas and Selbst’s argument in Big Data’s Disparate Impact concerning the use of algorithmically derived models that are both predictive of a legitimate goal and have a disparate impact on some individuals. The authors agree that these models have a potential impact on antidiscrimination law but they argue for a more optimistic stance: that the law already has the doctrinal tools it needs to deal appropriately with cases of this sort.
- Hacker, P., et al. (2020). Explainable AI under contract and tort law: Legal incentives and technical challenges. Artificial Intelligence and Law, 28(4), 415–439. https://doi.org/10.1007/s10506-020-09260-6
- The authors argue that the law incentivizes the adoption of explainable AI in ways that are not always obvious. They show that explainable AI is incentivized as a way of avoiding liability despite trade-offs in accuracy through case studies in medicine and corporate acquisitions. The potential for legally mandated explainable AI in certain settings would shift how certain professionals are required to understand their legal obligations, and the extent to which they can recognize risks in advance of an adverse outcome.
- Huq, A. Z. (2020). A right to a human decision. Virginia Law Review, 106(3), 611.
- The paper explores the idea of a right to a human decision maker as a safeguard to ensure that an automated decision-making process is fair and appropriate. The author finds that a better way to ensure an automated decision-making process is appropriate is to subject machine-based decisions to a right to a well-calibrated machine decision that folds in due process, privacy, and equality values.
- Karsai, K. (2020). Algorithmic decision making and issues of criminal justice—A general approach. In C. Dumitru (Ed.), In honorem Valentin Mirisan. Ganduri, studii si institutii (pp. 146-161). Universul Juridic SRL. https://papers.ssrn.com/abstract=3612106
- The author outlines basic concepts relevant to the use of algorithmic decision-making systems in criminal justice in an effort to inform legal stakeholders. The author argues that both lawyers and lawmakers must engage with the socio-legal implications of automated decision making in criminal justice because the use of data-driven technologies in criminal justice continues to expand.
- Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14-29. https://doi.org/10.1080/1369118X.2016.1154087
- This paper explores and expands upon current thinking about algorithms and considers how best to research them in practice. Concepts such as the importance of algorithms in shaping social and economic life, how they are embedded in wider socio-technical assemblages, and challenges that arise when researching algorithms are explored.
- Manes, J. (2017).* Secret law. Georgetown Law Journal, 106(3), 803-870.
- This article aims to unpack the underlying normative principles that both militate against secret law and motivate its widespread use. By investigating the tradeoff between democratic accountability, individual liberty, separation of powers, and pragmatic national security purposes created by secret law, this article proposes a systematic rubric for evaluating particular instances of secret law.
- Manes, J. (2019). Secrecy & evasion in police surveillance technology. Berkeley Technology Law Journal, 34, 503-566.
- This article examines the anti-circumvention argument for secrecy, which claims that disclosure of police technologies would allow criminals to evade the law. The article argues that this rationale permits far more secrecy than it can justify, and proposes specific reforms to circumscribe laws that currently authorize excessive secrecy in the name of preventing evasion.
- Markovic, M. (2019). Rise of the robot lawyers. Arizona Law Review, 61(2), 325-350.
- This article argues against the claim that lawyers will be displaced by artificial intelligence on both empirical and normative grounds. This argument is developed on the following grounds: first, artificial intelligence cannot handle the abstract nature of legal tasks, and second, the legal profession has grown and benefited from technology, rather than been challenged by it. Finally, even if large-scale automation of legal work were possible, core societal values would counsel against it.
- Metcalf, J., & Crawford, K. (2016). Where are human subjects in big data research? The emerging ethics divide. Big Data & Society, 3(1). https://doi.org/10.1177/2053951716650211
- Against the background of proposed major revisions to the Common Rule—the primary regulation governing human-subjects research in the USA—under consideration for the first time in decades, this article argues that data science should be understood as continuous with the social sciences in regard to the stringency of the ethical regulations that govern it, since the potential harms of data science research are unpredictable.
- Pasquale, F. (2015).* The black box society: The secret algorithms that control money and information. Harvard University Press.
- The author explores the power of ‘hidden algorithms’. He argues that such algorithms permit self-serving and reckless behavior and shows how powerful interests abuse the secrecy of these algorithms for profit. Thus, transparency must be demanded of firms, such that they accept as much accountability as they impose on others.
- Pasquale, F. (2019).* A rule of persons, not machines: The limits of legal automation. George Washington Law Review, 87(1), 1-55.
- This article argues that legal automation cannot replace human legal practice as it can elude or exclude important human values, necessary improvisations, and irreducibly deliberative governance – particularly, software cannot replicate narratively intelligible communication from persons and for persons. Thus, in order to preserve accountability and a humane legal order, persons, not machines, are required in the legal profession.
- Re, R. M., & Solow-Niederman, A. (2019). Developing artificially intelligent justice. Stanford Technology Law Review, 22(2), 242-289.
- This article argues that AI adjudication will profoundly affect the adjudicatory values held by legal actors as well as the public at large, particularly in areas where “equitable justice,” or discretionary moral judgment is most significantly exercised. In contrast, AI adjudication would promote “codified justice” which promotes standardization above discretion.
- Ross, L. D. (2021). Legal proof and statistical conjunctions. Philosophical Studies, 178. https://doi.org/10.1007/s11098-020-01521-z
- The author discusses the extent to which statistical evidence should form the basis of a legal outcome. Problematizing dominant theories which hold that statistics should not form the basis of legal verdicts, the author suggests that multiple pieces of statistical evidence ought to be admissible as reliable evidence in legal proceedings. The author concludes by suggesting that qualitative narrative evidence is more valuable than statistical evidence in courts, not because it is of a higher quality, but because it is inaccurately perceived as more reliable by the public.
- Rudin, C., et al. (2020). The age of secrecy and unfairness in recidivism prediction. Harvard Data Science Review, 2(1). https://doi.org/10.1162/99608f92.6ed64b30
- The authors suggest that debates about the use of algorithmic technologies to predict recidivism have been fruitless because of competing and contradictory definitions of fairness. Through an analysis of the COMPAS algorithm used to predict recidivism in the United States, the authors show that non-transparency has led to misinterpretations of the model and hampered informed conversations about its fairness. They argue that transparency is a requisite for procedural fairness that has been neglected in these conversations in the past and call for a simplified form of risk assessment based upon age and criminal past.
- Selbst, A. D., & Barocas, S. (2018).* The intuitive appeal of explainable machines. Fordham Law Review, 87(3), 1085-1139.
- In this article, the authors aim to show what makes decisions made by algorithms seem inexplicable, by examining what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation.
- Solove, D. J. (2011). Nothing to hide: The false tradeoff between privacy and security. Yale University Press.
- The author argues against the claim that society has a duty to sacrifice privacy for security by exposing the fallacies and flaws of such claims. The author then argues that protecting privacy is not fatal to security measures; it merely requires adequate oversight and regulation.
- Streel, A. D., et al. (2020). Explaining the black box: When law controls AI. Centre on Regulation in Europe. http://www.crid.be/pdf/public/8578.pdf
- This report discusses the issue of AI explainability relative to the recommendations of the European High-Level Expert Group on AI and a plan set out by the European Commission in its White Paper on AI. The authors begin by outlining different legal and scientific definitions of explainability before relating them to the proposed European regulations and how they might be achieved in practice.
- Tortora, L., et al. (2020). Neuroprediction and A.I. in forensic psychiatry and criminal justice: A neurolaw perspective. Frontiers in Psychology, 11, 220. https://doi.org/10.3389/fpsyg.2020.00220
- The authors explore the potential of a host of AI-powered neuroimaging techniques – referred to as “AI neuroprediction” – for risk assessment in criminal justice. They review academic literature on these techniques to consider their potential application in predicting future violence or the likelihood of rearrest. AI neuroprediction in criminal justice would have many implications for procedural fairness, as justice outcomes may be influenced by the potential occurrence of an unspecified crime at some point in the future – a clear violation of that principle.
- Watson, H. J., & Nations, C. (2019). Addressing the growing need for algorithmic transparency. Communications of the Association for Information Systems, 45(1), 488-510.
- The paper explores the concept of algorithmic transparency. It includes a review of relevant literature as well as interviews with experts in the field. The research produced a scale of factors that affect an algorithm’s transparency (such as public awareness, ethical issues, and legal and regulatory considerations), recommended best practices for algorithmic transparency, and suggested research opportunities.
Chapter 41. AI and Migration Management (Petra Molnar)
https://www.doi.org/10.1093/oxfordhb/9780190067397.013.49
- Ahmad, N. (2020). Refugees and algorithmic humanitarianism: Applying artificial intelligence to RSD procedures and immigration decisions and making global human rights obligations relevant to AI governance. International Journal on Minority and Group Rights. https://doi.org/10.1163/15718115-BJA10007
- The author argues that the introduction of AI in humanitarian work has occurred “without ethics, justice, and morality.” From a human rights perspective, the author laments that humanitarian AI has been adopted without adequate regard for individual privacy, nor for the various strategic (ab)uses of data extracted from migrant populations. The author calls for a “reprogramming” of humanitarian AI to better align with human rights norms and promote more sustainable uses of technology in the field.
- Austin, L. (2018, July 9). We must not treat data like a natural resource. The Globe and Mail. https://www.theglobeandmail.com/opinion/article-we-must-not-treat-data-like-a-natural-resource/
- In this opinion piece, the author argues that framing data transformation as a balance between economic innovation and privacy provides a narrow framework for understanding what is at stake. Not only are these values not necessarily in tension, but the focus on privacy and ownership language fails to capture implications for the public sphere, human rights, and social interests. The author proposes a better framing – one that goes beyond data as an extractable resource and recognizes data as a new informational dimension to individual and community life.
- Azizi, S., & Yektansani, K. (2020). Artificial intelligence and predicting illegal immigration to the USA. International Migration, 58(5), 183–193. https://doi.org/10.1111/imig.12695
- Noting the prevalence of irregular migration into the United States, the authors argue that it is “essential to predict whether visa applicants overstay their visas.” The authors apply machine learning techniques to a set of pre-immigration variables and claim to correctly predict the legal status of 80 percent of Mexican migrants to the United States. This paper offers an example of how ethically and legally dubious artificial intelligence techniques can be used to discriminate against vulnerable immigrants.
- Barocas, S., & Selbst, A. D. (2016).* Big data’s disparate impact. California Law Review, 104(3), 671-732.
- This essay examines data bias concerns through the lens of American discrimination law. In light of algorithms frequently inheriting the prejudices of prior decision makers, and the difficulties of identifying the source of a bias or explaining it to a court, the authors look to disparate impact doctrine in workplace discrimination law to identify potential remedies for the victims of data mining. The authors underscore that finding a solution to Big Data’s disparate impact requires re-examining the meanings of “discrimination” and “fairness” in addition to efforts to eliminate prejudice and bias.
- Bircan, T., & Korkmaz, E. E. (2021). Big data for whose sake? Governing migration through artificial intelligence. Humanities and Social Sciences Communications, 8(241). https://doi.org/10.1057/s41599-021-00910-x
- This article looks at the ways the management of data shapes real-life outcomes for migrants. In a clear power imbalance, private companies and government agencies hold a disproportionate share of power over these data, while migrants are afforded little. The authors argue that public decision-makers need to address the power asymmetries in the current dynamic and reconsider the rules and regulations governing the practice.
- Beduschi, A. (2020). International migration management in the age of artificial intelligence. Migration Studies. https://doi.org/10.1093/migration/mnaa003
- Pointing to the early-stage use of AI in immigration and asylum determinations in Canada and Germany, the author predicts that AI will affect migration management along three primary axes: expanding power gaps between states on the world stage; modernizing the migration management practices of states and international organizations; and bolstering discourses of evidence-based immigration and border management. The author concludes by warning policymakers against adopting AI technologies without understanding their legal and ethical implications.
- Benvenisti, E. (2018). Upholding democracy amid the challenges of new technology: What role for the law of global governance? European Journal of International Law, 29(1), 9-82.
- This article describes how law has evolved with the growing need for accountability of global governance bodies and analyzes why legal tools are ill-equipped to address new modalities of governance based on new information and communication technologies and automated decision making using raw data. The author argues that the law of global governance extends beyond ensuring accountability of global governance bodies and serves to protect human dignity and the viability of the democratic state.
- Carens, J. (2013). The ethics of immigration. Oxford University Press.
- This book explores how contemporary immigration issues present practical problems for Western democracies while challenging how the concepts of citizenship and belonging, rights and responsibilities, as well as freedom and equality are understood. The author uses the moral framework of liberal democracy to propose that a commitment to open borders is necessary to uphold values of freedom and equality.
- Chambers, P., & Mann, M. (2019). Crimmigration in border security? Sorting crossing through biometric identification at Australia’s international airports. In P. Billings (Ed.), Crimmigration in Australia: Law, politics, and society (pp. 381–404). Springer. https://doi.org/10.1007/978-981-13-9093-7_16
- The authors examine the use of biometric identification in Australia’s international airports. They suggest that the criminological lens of ‘crimmigration’ is not an apt way to understand the function creep of biometric technologies like fingerprint scanning and facial recognition in airports. Rather, the authors argue that the concept of surveillance capitalism reframes these practices as the displacement of liberal democratic values in favour of “surveillance and security aligned with global capitalism.”
- Côté-Boucher, K. (2020). Border frictions: Gender, generation and technology on the frontline. Routledge.
- The author describes how surveillance technologies, including artificial intelligence, have become central to managing the flow of goods and people at the Canadian border. Using ethnographic methods and policy analysis, the author explores the proliferation of surveillance technology, “the fraught circulation of data,” the role of labor unions, and the gendered and generationally inflected professional identities of border agents. In this way, the author traces a shift at the border from an economically oriented customs agency to a security-oriented police force.
- Crisp, J. (2018). Beware the notion that better data lead to better outcomes for refugees and migrants. Chatham House.
- This article explores the implications of data collection, analysis, and dissemination among states and international organizations in migration governance. The author challenges the notion that more data lead to better migration policies. The author stresses that while data collection and analysis may produce insights into migrant needs, movement patterns, and socio-economic conditions, there are important challenges related to confidentiality, information security, and the potential for abuse. They warn against the adoption of technocratic and apolitical approaches to humanitarian aid in which data collection supersedes the imperative of ensuring the humane treatment of migrants and refugees.
- Csernatoni, R. (2018). Constructing the EU’s high-tech borders: FRONTEX and dual-use drones for border management. European Security, 27(2), 175-200.
- This article examines the EU’s strategy to develop technologies such as aerial surveillance drones for border management and security. The author contends that the normalization of drone use at the border-zone embodies a host of ethical and legal implications and falls within a broader European securitized approach to migration. The author explores how this “dronisation” is presented as a technical panacea for the consequences of failed irregular migration management policies and creates further opportunities for exploitation of vulnerable migrants.
- Farraj, A. (2010). Refugees and the biometric future: The impact of biometrics on refugees and asylum seekers. Columbia Human Rights Law Review, 42(3), 891-941.
- This paper explores the impacts of biometric technologies on refugees and asylum seekers. The author surveys the various ways in which biometrics are used and explores privacy implications, comparing standards and protections laid out by U.S. and EU law. The author underscores the importance of utilizing biometrics to protect refugees and asylum seekers, arguing that their well-being is furthered by the collection, storage, and utilization of their biometric information.
- Hall, A. (2017). Decisions at the data border: Discretion, discernment and security. Security Dialogue, 48(6), 488–504. https://doi.org/10.1177/0967010617733668
- This article focuses on how interactions between algorithms and analysts shape decisions about border security. The author draws on interviews with European data processors to argue that discretion remains “an uncertain visual practice oriented to seeing and authorizing what is there.” However, the author also shows that automation in border security upends how security institutions manage the relationship between general rules and individual judgement by prioritizing inflexible policies over the particular context of a given traveller.
- Helbing, D., et al. (2019).* Will democracy survive big data and artificial intelligence? In D. Helbing (Ed.), Towards digital enlightenment (pp. 73-98). Springer.
- This chapter examines how the “data revolution” and widespread automation of data analysis threaten to undermine core democratic values if basic rights of citizens are not protected. The authors argue that Big Data, automation, and nudging should not be used to incapacitate citizens or control behaviors, and propose various fundamental principles derived from democratic societies that should guide the use of Big Data and AI.
- Hendow, M., et al. (2015). Using technology to draw borders: Fundamental rights for the Smart Borders initiative. Journal of Information, Communication & Ethics in Society, 13(1), 39–57. https://doi.org/10.1108/JICES-02-2014-0008
- The authors examine the ethical implications of the European Union’s Smart Borders initiative as it encompasses issues of democracy, privacy, and surveillance. They argue that the increasing use of automated border controls (ABCs) to identify migrants and travellers, especially within the Schengen Area, is a practice rooted in layered technologies rather than in consideration of human rights and their implications. They warn that ABCs are not the panacea they are made out to be, and that both the necessity and the scope of the Smart Borders proposal should be strongly reconsidered.
- Islam, S.M.R., et al. (2022). Prediction of migration outcome using machine learning. In L. Troiano, A. Vaccaro, N. Kesswani, I. Díaz Rodriguez, & I. Brigui (Eds.), Progresses in artificial intelligence & robotics: Algorithms & applications. ICDLAIR 2021. Lecture Notes in Networks and Systems, vol 441. Springer. https://doi.org/10.1007/978-3-030-98531-8_17
- This paper applies machine learning methods to assess the predictive power of data on migration outcomes. It discusses which models proved insufficient, which improved accuracy the most, and offers recommendations for future research; a generic version of this model-comparison workflow is sketched below.
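As a rough illustration only (invented file and feature names, not the authors' actual pipeline), the generic model-comparison workflow referenced above might look like this in Python:

```python
# Hypothetical sketch (not the authors' pipeline): compare two standard
# classifiers on tabular migration data and report held-out accuracy.
# The file name, feature names, and label are invented for illustration.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("migration_survey.csv")              # hypothetical dataset
X = df[["age", "education_years", "household_size"]]  # illustrative features
y = df["migration_outcome"]                           # illustrative label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
for model in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, accuracy_score(y_te, model.predict(X_te)))
```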
- Johns, F. (2017). Data, detection, and the redistribution of the sensible in international law. American Journal of International Law, 111(1), 57-103.
- This article explores how technology changes and mediates the jurisdiction of international law and international institutions such as the UNHCR. The author surveys changes in international legal and institutional work to highlight the distributive implications of automation in shaping allocations of power, competence, and capital. The author claims that technologically advanced modes of data gathering and analysis and the introduction of machine learning result in new configurations of inequality and international institutional work that fall outside the scope of existing international legal thought, doctrine, and practice.
- Jupe, L. M., & Keatley, D. A. (2020). Airport artificial intelligence can detect deception: Or am I lying? Security Journal, 33(4), 622–635. https://doi.org/10.1057/s41284-019-00204-7
- The authors argue that the use of AI lie-detectors as part of the European Union-funded iBorderCtrl initiative, which among other AI techniques relies on facial micro-expression detection for airport security, “is naïve and misinformed.” They claim the adoption of such techniques is unwarranted given a lack of empirical research demonstrating that micro-expressions are a reliable and valid method of detecting deception.
- Leese, M. (2018). Standardizing security: The business case politics of borders. Mobilities, 13(2), 261-275. DOI: 10.1080/17450101.2017.1403777
- Observing that the EU’s extensive network of automated border controls (ABCs) is a vestige of ‘sedimented infrastructures’ (and therefore not the result of democratic process or proper political agency), the author contextualizes these technological decisions as products of an ever-increasing desire for cost efficiency and of mounting public pressure to address security concerns across the Schengen Zone. While the overall codes and procedures are uniform, variance among suppliers ensures that standards differ across eGate systems; the resulting move to standardize ABC equipment is thus representative of business-case politics and the marketization of security.
- Lepri, B., et al. (2018). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31(4), 611-627.
- This article provides an overview of available technical solutions to enhance fairness, accountability, and transparency in algorithmic decision-making. The authors underscore the crucial and urgent need to engage multi-disciplinary teams of researchers, policymakers, practitioners, and citizens to co-develop and evaluate algorithmic decision-making processes designed to maximize fairness and transparency in support of democracy and development.
- Liu, H. Y., & Zawieska, K. (2017).* A new human rights regime to address robotics and artificial intelligence. In 2017 Proceedings of the 20th International Legal Informatics Symposium (pp. 179-184). Oesterreichische Computer Gesellschaft.
- This paper examines how a declining human ability to control technology suggests a declining power differential and possibility of inverse power relations between humans and AI. The authors explore how this potential inversion of power impacts the protection of fundamental human rights, and they argue that the opacity of potentially harmful AI systems risks eroding rights-based responsibility and accountability mechanisms.
- Maas, M. M. (2019).* International law does not compute: Artificial intelligence and the development, displacement or destruction of the global legal order. Melbourne Journal of International Law, 20, 29-57.
- This paper draws upon techno-historical scholarship to assess the relationship between new technologies and international law. The author aims to demonstrate how new technologies change legal situations both directly, by creating new entities and enabling new behavior, and indirectly, by shifting incentives or values. The author proposes that the technically and politically disruptive features of AI threaten to destroy key areas of international law, suggesting a risk of obsolescence for distinct international legal regimes.
- Magnet, S. (2011). When biometrics fail: Gender, race, and the technology of identity. Duke University Press.
- This book analyzes the state use of biometrics to control and classify vulnerable marginalized populations and track individuals beyond national territorial boundaries. The author explores cases of failed biometrics to demonstrate how these technologies work differently, and fail more often, on women, racialized populations, and people with disabilities, and stresses that these failures result from biometric technologies falsely assuming that human bodies are universal and unchanging over time.
- McAuliffe, M., et al. (2021). Digitalization and artificial intelligence in migration and mobility: Transnational implications of the COVID-19 pandemic. Societies, 11, 135. https://doi.org/10.3390/soc11040135
- This paper provides insights into how COVID-19 has altered the migration cycle through a literature review and discussion of AI technologies used in migration services. The authors assert that the COVID-19 pandemic provides a unique opportunity to intercept malpractices in the migration ecosystem and to audit the technologies being used, in the service of better human rights practices for migrants and workers.
- McCarroll, E. (2019). Weapons of mass deportation: Big data and automated decision-making systems in immigration law. Georgetown Immigration Law Journal, 34(3), 705–732.
- This article argues that the present use of automated decision-making (ADM) systems in immigration enforcement is highly problematic under American and international law. Detailing the ongoing use of risk classifications and automated surveillance by Immigration and Customs Enforcement in the United States, the author raises concerns about discrimination, non-transparency, and political manipulation and puts forth four key policy recommendations for the legal use of ADM in immigration. The author warns that, while these practices disproportionately impact marginalized communities, they erode civil liberties on the whole.
- McGregor, L., et al. (2019). International human rights as a framework for algorithmic accountability. International and Comparative Law Quarterly, 68(2), 309-343.
- This article explores the potential human rights harms caused by the use of algorithms in decision-making. The authors analyze how international human rights law provides a framework for shared understanding and a means of assessing harm, one that deals with multiple actors and forms of responsibility and applies across the full algorithmic life cycle, from conception to deployment.
- Molnar, P., & Gill, L. (2018). Bots at the gate: A human rights analysis of automated decision-making in Canada’s immigration and refugee system. University of Toronto’s International Human Rights Program (IHRP) at the Faculty of Law and the Citizen Lab at the Munk School of Global Affairs and Public Policy, with support from the IT3 Lab at the University of Toronto. https://it3.utoronto.ca/wp-content/uploads/2018/10/20180926-IHRP-Automated-Systems-Report-Web.pdf
- This report highlights the human rights implications of using algorithmic and automated technologies for administrative decision-making in Canada’s immigration and refugee system. The authors survey current and proposed uses of automated decision-making, illustrate how decisions may be affected by new technologies, and develop a human rights analysis from domestic and international perspectives. They outline several policy challenges related to the adoption of these technologies and present a series of policy recommendations for the federal government.
- Mukherjee, S., et al. (2020). Immigration document classification and automated response generation. In 2020 International Conference on Data Mining Workshops (ICDMW) (pp. 782-789). Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/ICDMW51313.2020.00114
- This paper addresses the problem of repetitive manual information processing in American immigration applications. The authors apply several image and text classifier algorithms to automatically categorize application supporting documents and evidence, while ensuring a robust human review process. They argue that their method can significantly reduce application processing time without major sacrifices in accuracy; a minimal sketch of the text-classification step appears below.
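A minimal sketch of the kind of text-classification step described above, assuming invented category labels and a toy training set rather than the paper's actual system (which combines image and text classifiers with human review):

```python
# Route OCR'd application documents into evidence categories with TF-IDF
# features and a linear classifier. Training texts and labels are invented.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "letter verifying full-time employment and annual salary",
    "residential lease agreement between landlord and tenant",
]
train_labels = ["employment_evidence", "residence_evidence"]  # invented labels

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)
print(clf.predict(["signed lease for the apartment listed in the application"]))
```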
- Noori, S. (2021). Suspicious infrastructures: Automating border control and the multiplication of mistrust through biometric e-gates. Geopolitics, 1–23. https://doi.org/10.1080/14650045.2021.1952183
- The author analyzes the recent frenzy of implementing eGates and automated border controls (ABCs) at European border crossing points and argues that they are representative of three ‘modes of suspicion’: of the identity of the traveller or migrant, of the manual labour of border-security guards, and of the ABC itself. The first form of mistrust is aimed specifically at asylum-seekers and immigrants, allowing human-based measures of control more time to deal with these cases while the ABC infrastructure allows ‘traditional’ passengers a smoother and faster border-crossing experience. This form of ‘automatic-scrutiny’ has reshaped notions of trust, with ABCs assuming that artefacts of transport (such as passports) are suspect.
- Raymond, N. A., et al. (2016). Building data responsibility into humanitarian action. OCHA Policy and Studies Series. https://ssrn.com/abstract=3141479
- This paper explores the risks and challenges of collecting, analyzing, aggregating, sharing, and using data for humanitarian projects, including the handling of sensitive data and issues of bias and discrimination. Drawing on case studies of data-driven initiatives across the globe, the authors identify the critical issues humanitarians face as they use data in operations and propose an initial framework for data responsibility.
- Sánchez-Monedero, J., & Dencik, L. (2022). The politics of deceptive borders: “Biomarkers of deceit” and the case of iBorderCtrl. Information, Communication & Society, 25(3), 413–430. https://doi.org/10.1080/1369118X.2020.1792530
- Entering into a critical discussion of iBorderCtrl, a border-control system proposed for the EU, the authors dissect the particularities of its emotional AI/facial recognition system. Observing that the proposed system intends to measure facial micro-expressions (termed ‘biomarkers of deceit’) at EU border crossings, they argue that such a system would likely not work in practice and carries significant potential for the violation of fundamental human rights, since systems of this kind serve political rather than technical functions, contrary to their stated design and intentions.
- Staton, B. (2016). Eye spy: Biometric aid system trials in Jordan. The New Humanitarian. https://www.thenewhumanitarian.org/analysis/2016/05/18/eye-spy-biometric-aid-system-trials-jordan
- This article explores the use of biometric iris scanners in Syrian refugee camps in Azraq, Jordan. Through interviews with the technology’s developers, users, and advocacy groups, the author outlines the proposed practical and security benefits of the technology as well as refugees’ concerns surrounding privacy, possibility of abuses and data error, and effects on health and wellbeing. The author acknowledges the rapidly growing adoption of technology in humanitarian aid and places biometric iris scanning technology in broader debates surrounding responsible data use and protecting vulnerable populations from potential harm.
- Tingzon, I., et al. (2020). Mapping new informal settlements using machine learning and time series satellite images: An application in the Venezuelan migration crisis. In 2020 IEEE/ITU International Conference on Artificial Intelligence for Good (AI4G) (pp. 198–203). Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/AI4G50087.2020.9311041
- This conference paper presents a machine learning method for monitoring migration patterns to assist state and non-governmental humanitarian efforts. Using the case of out-migration from Venezuela into Colombia, the authors demonstrate that they can partially automate the detection of informal settlements with a random forest classifier and time-series satellite imagery (sketched below) and verify predictions with Google Earth and a mobile crowd-sourcing app. They argue that this method can help efficiently deploy resources to populations in need.
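A hedged sketch of the general approach, using stand-in synthetic features rather than real satellite data; the array layout and labels are assumptions, not the authors' code:

```python
# Classify image patches as informal settlement vs. not, from features
# computed over a time series of satellite composites, with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Stand-in features: one row per patch, columns = band statistics stacked
# over monthly composites (e.g., 12 months x 4 bands = 48 features).
X = rng.normal(size=(500, 48))
y = rng.integers(0, 2, size=500)   # 1 = informal settlement (stand-in labels)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
settlement_prob = rf.predict_proba(X)[:, 1]   # per-patch probability, to be
                                              # verified against Google Earth
```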
- Vavoula, N. (2021). Artificial intelligence (AI) at Schengen borders: Automated processing, algorithmic profiling and facial recognition in the era of techno-solutionism. European Journal of Migration and Law, 23(4), 457–484. https://doi.org/10.1163/15718166-12340114
- Examining how EU/Schengen area legal frameworks have in recent years embedded practices of automated analysis through artificial intelligence tools for monitoring third-country nationals (TCNs), the author situates these technologies in an era in which TCNs are considered security risks by default, which often results in algorithmic profiling and heavy reliance on biometric technologies, including facial recognition systems. This ‘datafication of mobility’, the author argues, represents the first in a series of human-rights violations (the right to privacy) and is a gateway for further violations, such as of freedom from discrimination and the right to effective and appropriate legal recourse.
Chapter 42. Robot Teaching, Pedagogy, and Policy (Elana Zeide)
https://www.doi.org/10.1093/oxfordhb/9780190067397.013.51
- Bradbury, A., & Roberts-Holmes, G. (2017). The datafication of primary and early years education: Playing with numbers. Routledge.
- This book analyzes the trend of increased data use in schools, particularly within early childhood education, exploring its impact in ‘data-obsessed’ schools. Using case studies and both sociological and post-foundational frameworks, the authors argue that new teacher and student subjectivities are created while the complexity of children’s learning is reduced.
- Bradbury, A. (2019). Datafied at four: The role of data in the ‘schoolification’ of early childhood education in England. Learning, Media and Technology, 44(1), 7-21. https://doi.org/10.1080/17439884.2018.1511577
- This article examines the impact of datafication on children from birth to age five in England, arguing that nurseries and schools are subjected to demands from data, creating new subjectivities which have led to the prioritization of measurement over learning.
- Chen, X., et al. (2020). Application and theory gaps during the rise of artificial intelligence in education. Computers and Education: Artificial Intelligence, 1. https://doi.org/10.1016/j.caeai.2020.100002
- This paper conducts a review of 45 influential papers using AI in education. The authors find that although interest in using AI in education has increased with time, relatively few approaches leverage deep learning algorithms, which are used with success in other domains. They conclude by discussing shortcomings of existing research that need to be addressed to make progress that is useful in classrooms.
- Dignum, V. (2021). The role and challenges of education for responsible AI. London Review of Education, 19(1), 1-11. https://doi.org/10.14324/LRE.19.1.01
- The article presents a vision of responsible, trustworthy AI and how it relates to and affects education. The author discusses and summarizes several ethical issues relating to AI systems in general and those used for education in particular, and suggests guidelines and regulatory frameworks to ensure responsible AI.
- Edwards, R. (2015).* Software and the hidden curriculum in digital education. Pedagogy, Culture & Society, 23(2), 265–79. https://doi.org/10.1080/14681366.2014.977809
- This article challenges the positioning of emerging technologies as mere tools to enhance teaching and learning by highlighting the ways in which these technologies shape curriculum and limit modes of interaction between teachers and students.
- Fenwick, T., & Edwards, R. (2016). Exploring the impact of digital technologies on professional responsibilities and education. European Educational Research Journal, 15(1), 117-131. https://doi.org/10.1177%2F1474904115608387
- This article examines how new digital technologies are impacting the relationship between professionals and their clients, users, and students, arguing that new forms of accountability and responsibility have emerged as a result.
- Gourlay, L. (2021). There is no ‘virtual learning’: The materiality of digital education. Journal of New Approaches in Educational Research, 9(2), 57-66. https://doi.org/10.7821/naer.2021.1.649
- Adopting a sociomaterial perspective, the author argues that the notion of ‘virtual learning’ is inherently flawed, despite many educators combining it with face-to-face instruction to create what is known as ‘blended learning.’ The author contends that virtual learning is more complicated than many acknowledge and is actually grounded in materiality.
- Gulson, K. N., & Sellar, S. (2019). Emerging data infrastructures and the new topologies of education policy. Environment and Planning D: Society and Space, 37(2), 350-366. https://doi.org/10.1177%2F0263775818813144
- This article argues that datafication in educational policy is creating new topologies. The authors outline a case study of an emergent data infrastructure in Australian schooling called the National Schools Interoperability Program. The study is used to provide empirical evidence of the movement, connection, and enactment of digital data across policy spaces, including the ways that data infrastructure is: (i) enabling new private and public connections across policy topologies; (ii) creating a new role for technical standards in education policy; and (iii) changing the topological spaces of education governance.
- Hadi Mogavi, R., et al. (2021). Characterizing student engagement moods for dropout prediction in question pool websites. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1-22. https://doi.org/10.1145/3449086
- This paper characterizes different types of students on problem-based learning websites with question pools, such as LeetCode, Code Chef, and Math Playground. The authors train a machine learning model and identify five primary modes of engagement among students: challenge-seeker, subject-seeker, interest-seeker, joy-seeker, and non-seeker. They describe the characteristics of each mode and develop models to predict the likelihood that a student will quit a given program; a simplified version of this prediction step is sketched below. Finally, they offer solutions to reduce the likelihood of students dropping out from question pool websites.
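A simplified sketch of the dropout-prediction step, with invented feature names and a hypothetical log file; the paper's own features and models differ:

```python
# Estimate the probability that a student quits, from per-student engagement
# features. Feature names, file name, and the chosen model are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier

df = pd.read_csv("engagement_log.csv")     # hypothetical aggregated log
X = df[["questions_attempted", "mean_difficulty", "days_since_last_visit"]]
y = df["dropped_out"]                      # 1 = student quit the question pool

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]     # estimated dropout risk per student
```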
- Hartong, S., & Förschler, A. (2019). Opening the black box of data-based school monitoring: Data infrastructures, flows and practices in state education agencies. Big Data & Society, 6(1). https://doi.org/10.1177%2F2053951719853311
- This article examines digital data infrastructures in state education agencies, focusing on their role in school monitoring. The authors argue that the rise of digital technologies creates new capabilities and powers and suggest that teachers should be given more information about these tools.
- Herold, B., & Molnar, M. (2018, November 6).* Are companies overselling personalized learning? Education Week. https://www.edweek.org/technology/are-companies-overselling-personalized-learning/2018/11
- This article critiques the use of the term “personalized learning” as it has no set definition and can refer to a variety of pedagogical strategies. Instead, the term has been used as a marketing tool for companies looking to sell their products to educators.
- Herold, B. (2018, November 7).* What does personalized learning mean? Whatever people want it to. Education Week. https://www.edweek.org/ew/articles/2018/11/07/what-does-personalized-learning-mean-whatever-people.html
- This article critiques the variety of definitions applied to personalized learning, arguing that loose definitions can result in incoherent policy and ineffective educational outcomes.
- Hood, N. (2018). Re-imagining the nature of (student-focused) learning through digital technology. Policy Futures in Education, 16(3), 321-326.
- This paper explores some of the questions about the role of AI in education and learning. In particular, the article examines issues of equity and social justice, what it means to design educational and learning experiences that are truly student-focused, and the potential for technology to dehumanize the learning process.
- Hossain, S. F., et al. (2021). Exploring the role of AI in K12: Are robot teachers taking over? In I. Jaafar & J. M. Pedersen (Eds.), Emerging Realities and the Future of Technology in the Classroom (pp. 120-135). IGI Global. https://www.irma-international.org/chapter/exploring-the-role-of-ai-in-k12/275651/
- This chapter summarizes a focus group interview conducted to study the role of artificial intelligence (AI) in K-12 education systems. The focus group uncovered how traditional learning methods have been transformed by factors like the COVID-19 pandemic, while the role of AI in these systems receives more scholarly attention than ever before. The authors also draw attention to the use of AI-enhanced teaching and how it ensures sustainable educational development.
- Jones, K., et al. (2021). Do they even care? Measuring instructor value of student privacy in the context of learning analytics. 54th Hawaii International Conference on System Sciences. https://doi.org/10.24251/hicss.2021.185
- This paper examines the increasingly large role that learning analytics tools play in educational systems. The authors argue that, although faculty, staff, and students are concerned about privacy in their personal lives, it is unclear whether these groups prioritize privacy in educational settings.
- Kross, S., et al. (2021). Characterizing the online learning landscape: What and how people learn online. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1-19.
- This article reports a study of how over two thousand adults, representative of U.S. demographics, learn online. The authors seek to understand what and how the participants learn and whether there are shared core experiences. They find that YouTube is the most popular venue for learning online, which may raise concerns given the ethics of algorithmic recommendation systems. Participants also demonstrate greater interest in free, interactive resources tailored to their needs than in traditional resources, when such options are available.
- Landri, P. (2018). Digital governance of education: Technology, standards and Europeanization of education. Bloomsbury Publishing.
- Adopting a sociomaterial approach to education policy, this book explores how datafication impacts the experience of education. Landri argues that this datafication has drastic effects on how education systems are organized and managed, including the standardization of education and transparency in educational practices.
- Lindh, M., & Nolin, J. (2016). Information we collect: Surveillance and privacy in the implementation of Google apps for education. European Educational Research Journal, 15(6), 644-663. https://doi.org/10.1177%2F1474904116654917
- This study conducted in a Swedish school organization argues that Google’s business model for online marketing is embedded in its educational tools, Google Apps for Education (GAFE). By making a distinction between (your) ‘data’ and (collected) ‘information’, Google can disguise the presence of its business model.
- McStay, A. (2019). Emotional AI and EdTech: Serving the public good? Learning, Media and Technology, 45(3), 270–283. https://doi.org/10.1080/17439884.2020.1686016
- This article examines the role of education technology companies in employing AI to quantify emotional learning in classrooms. The author argues that these forms of technology raise important concerns about the methodology and material effects on students, and the ethical and legal risks of their deployment in education.
- Murphy, R. F. (2019).* Artificial intelligence applications to support K-12 teacher and teaching: A review of promising applications, challenges, and risks. RAND Corporation. https://www.rand.org/pubs/perspectives/PE315.html
- This article explores how AI can be used to support K-12 teachers by assisting them with tasks rather than outright replacing them. Examined systems include intelligent tutoring, automated essay grading, and early warning protocols. Technical challenges of these systems are also discussed.
- Office of Education Technology, U.S. Department of Education. (2017, January 18).* What is personalized learning? Personalizing the learning experience: Insights from future ready schools. Medium. https://medium.com/personalizing-the-learning-experience-insights/what-is-personalized-learning-bc874799b6f
- This article presents the argument that the lack of a detailed definition for the term “personalized learning” has created problems for understanding the concept and for implementing personalized learning curriculum, which is defined as the adjustment of the pace of learning to meet the needs of individual students.
- Pinkwart, N., & Liu, S. (Eds.). (2020). Artificial intelligence supported educational technologies. Springer.
- This book compiles discussion and research from German and Chinese experts in pedagogy, computer science, and technology, who met in a 2019 symposium, “The Sino-German Perspective on AI-Driven Educational Technology.” It discusses different strategies for improving student learning efficacy. Research details on the underlying educational AI systems and algorithms are presented. Finally, the book provides empirical case studies utilizing these systems.
- Pearson & EdSurge. (2016).* Decoding adaptive. http://d3btwko586hcvj.cloudfront.net/static_assets/PearsonDecodingAdaptiveWeb.pdf
- This report investigates three questions. First, what is adaptive learning? Second, what is inside the “black box” of adaptive learning? Third, how do adaptive learning tools on the market differ? It is vital that these questions are answered if these technologies are to improve teaching and learning in significant ways.
- Pedro, F., et al. (2019). Artificial intelligence in education: Challenges and opportunities for sustainable development. UNESCO. https://www.gcedclearinghouse.org/sites/default/files/resources/190175eng.pdf
- This report draws attention to the use of AI technology in developing countries and offers suggestions for how AI can be utilized to improve education policy. The authors also present six important recommendations for policymakers implementing AI within educational systems: developing fair and equal policy, ensuring equity, training teachers, developing inclusive data systems, studying the impacts of AI in education, and increasing transparency in data collection.
- Popenici, S. A., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12(1). https://doi.org/10.1186/s41039-017-0062-8
- This paper explores the emerging use of artificial intelligence in teaching and learning within the higher education system. The authors examine the implications of these evolving technologies for teaching, learning, student support, and administration, while also pointing out challenges institutions may face when adopting them.
- Radu, I., et al. (2021). Unequal impacts of augmented reality on learning and collaboration during robot programming with peers. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW3), 1-23.
- This paper studies an interactive collaboration problem in which student pairs use augmented reality (AR) as an aid for a robot programming assignment. The authors found that overall learning progress was twice as high with the use of AR, as students received rapid feedback on their progress. They also find that AR generally favors one participant over the other, based on proximity in the real world. Finally, the paper discusses implications for the design of future AR interfaces in education.
- Regan, P. M., & Jesse, J. (2018). Ethical challenges of EdTech, big data and personalized learning: Twenty-first century student sorting and tracking. Ethics and Information Technology, 21(3), 167–179. https://doi.org/10.1007/s10676-018-9492-2
- This paper analyzes ethical concerns surrounding the use of education technology, and in particular, AI designed to create personalized learning profiles. The authors argue that characterizing these concerns under the general rubric of ‘privacy’ oversimplifies the issue and makes it too easy for advocates to dismiss or minimize them. Instead, the authors identify six additional ethical concerns: information privacy, anonymity, surveillance, autonomy, non-discrimination, and ownership of information.
- Selwyn, N. (2016).* Is technology good for education? John Wiley & Sons.
- This book challenges the notion that rapid digitalization of education is net positive, arguing that we should question who stands to gain from this digitalization and what is lost when educators convert to these methods.
- Seo, K., et al. (2021). The impact of artificial intelligence on learner–instructor interaction in online learning. International Journal of Educational Technology in Higher Education, 18(1), 1-23.
- This article examines how teachers and students perceive the impact of AI technologies on learner-instructor interaction (e.g. communication, support, and presence) in online learning. The authors found that AI systems successfully improved the quality of communication by providing just-in-time, personalized support for many students. However, several concerns regarding responsibility and surveillance persist and need to be addressed.
- Shah, D., et al. (2021). Exploiting the capabilities of blockchain and machine learning in education. Augmented Human Research, 6(1), 1-14.
- The article analyzes the effectiveness of AI in education when combined with blockchain technology, which makes it possible to store results securely. The paper proposes various ways to combine blockchain and machine learning technologies to benefit the educational field.
- Wang, F. L., et al. (2010). Handbook of research on hybrid learning models: Advanced tools, technologies, and applications. Information Science Reference.
- This book, through the lens of numerous contributors, examines various hybrid learning models that are used in educational systems today. The central argument of this book is that face-to-face instruction is the most efficient way of teaching, and that technology should never be the sole factor driving educational systems.
- Watters, A. (2017, June 9).* The histories of personalized e-learning. Hackeducation. http://hackeducation.com/2017/06/09/personalization
- This article asserts that emerging technology in education is not an entirely new phenomenon, tracing a history of personalized learning that spans decades.
- Williamson, B. (2018).* The hidden architecture of higher education: Building a big data infrastructure for the ‘smarter university.’ International Journal of Educational Technology in Higher Education, 15(1). https://doi.org/10.1186/s41239-018-0094-1
- This article examines a major data infrastructure project in Higher Education within the United Kingdom, observing how the program imagines the ideal of the “smart university” while also leading to reforms through marketization.
- Williamson, B. (2016). Digital education governance: Data visualization, predictive analytics, and ‘real-time’ policy instruments. Journal of Education Policy, 31(2), 123-141. https://doi.org/10.1080/02680939.2015.1035758
- This article maps new kinds of digital policy instruments in education. It provides two case studies on new digital data systems: The Learning Curve from Pearson Education and learning analytics platforms that track student performance using their digital data to predict outcomes. The author finds that third-party companies have a domineering effect and that this has led to a data-driven style of governing within education.
- Williamson, B. (2016). Digital education governance: An introduction. Sage Journals, 15(1), 3-13. https://doi.org/10.1177%2F1474904115616630
- This article seeks to explain how digital technology has changed numerous trends within educational policy. This includes the phenomena of governing through data, the globalization of educational policy, accountability, global comparison, and benchmarking within the framework of emerging local, national, and international goals.
- Wilson, A., et al. (2017). Learning analytics: Challenges and limitations. Teaching in Higher Education, 22(8), 991-1007. https://doi.org/10.1080/13562517.2017.1332026
- This article raises concerns about the increased use of learning analytics in higher education for adults, laying out potential problems. The authors posit their own analytic framework grounded in sociomaterial pedagogy.
- Zawacki-Richter, O., et al. (2019). Systematic review of research on artificial intelligence applications in higher education – Where are the educators? International Journal of Educational Technology in Higher Education, 16(1). https://doi.org/10.1186/s41239-019-0171-0
- This article highlights the lack of critical examination from scholars on the impact of AI on higher education. The authors argue that most papers on AIEd come mainly from Computer Science and STEM fields, leaving a gap in the exploration of this issue from ethical and educational perspectives. The article presents four areas of AIEd applications in academic support, institutional, and administrative services: (1) profiling and prediction, (2) assessment and evaluation, (3) adaptive systems and personalization, and (4) intelligent tutoring systems.
- Zeide, E. (2017).* The structural consequences of big data-driven education. Big Data, 5(2), 164–72. https://doi.org/10.1089/big.2016.0061
- This article examines how data-driven tools change how schools make pedagogical decisions, fundamentally changing aspects of the education enterprise in the United States.
- Zhang, K., & Aslan, A. B. (2021). AI technologies for education: Recent research & future directions. Computers and Education: Artificial Intelligence, 2. https://doi.org/10.1016/j.caeai.2021.100025
- The article provides a broad overview of empirical studies on AI in education (AIEd) and summarizes the current state of AIEd research. The authors argue that the recent work on AI sufficiently proves the potential benefits for education and provides practical guides on creating AIEd technologies. However, they point out that AI ethics and privacy concerns still need development from interdisciplinary collaborations to be employed in practical settings.
- Zheng, L., et al. (2021). The effectiveness of artificial intelligence on learning achievement and learning perception: A meta-analysis. Interactive Learning Environments. https://doi.org/10.1080/10494820.2021.2015693
- The article conducts a quantitative meta-analysis examining the effectiveness of AI on learning achievement and learning perception. The authors find that AI has a large effect size on learning achievement and a small effect size on learning perception. They also find that sample size, sample level, learning domains, types of organization, roles of AI, and hardware moderate the effectiveness of AI in education; the pooling logic behind such an analysis is illustrated below.
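For readers unfamiliar with the machinery, a generic random-effects formulation (standard in meta-analysis, and not necessarily the authors' exact model) pools study-level standardized effects with inverse-variance weights:

```latex
% Generic random-effects pooling; notation is standard, not taken from the paper.
\[
  \hat{\mu} = \frac{\sum_{i=1}^{k} w_i\, g_i}{\sum_{i=1}^{k} w_i},
  \qquad
  w_i = \frac{1}{\hat{\sigma}_i^{2} + \hat{\tau}^{2}},
\]
% g_i: standardized mean difference (effect size) of study i
% sigma_i^2: within-study sampling variance
% tau^2: estimated between-study heterogeneity
```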
Chapter 43. Algorithms and the Social Organization of Work (Ifeoma Ajunwa and Rachel Schlund)
https://www.doi.org/10.1093/oxfordhb/9780190067397.013.52
- Afnan, T., et al. (2021). Asymmetries in online job-seeking: A case study of Muslim-American women. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-29.
- This article investigates modern hiring discrimination among Muslim-American women, who have historically faced greater challenges in securing employment. To analyze the effect of digital hiring tools, the authors conduct and analyze 20 interviews with Muslim-American women who have used online job platforms in the past two years. They identify three primary asymmetries (in process, information, and legacy) that these women face and then discuss solutions for a more equitable hiring process.
- AI Now Institute. (2018). Algorithmic Accountability Policy Toolkit. https://ainowinstitute.org/aap-toolkit.pdf
- This policy toolkit was created by the AI Now Institute to disseminate information on the use of algorithms by governments. It presents general information about what algorithms are, how they are created, and how they work. It also includes resources for advocates, literature reviews on relevant topics, and examples of areas where AI systems have been implemented.
- Ajunwa, I., et al. (2016). Health and big data: An ethical framework for health information collection by corporate wellness programs. The Journal of Law, Medicine & Ethics, 44(3), 474-480. https://doi.org/10.1177%2F1073110516667943
- This essay discusses the manner in which data collection is being used in wellness programs and its potential negative impacts on workers, regarding privacy and employment discrimination. The authors argue that these ethical issues can be addressed by committing to the ethical principles of informed consent, accountability, and fair use of personal data. Furthermore, innovative approaches to wellness are offered that might allow for healthcare cost reduction.
- Ajunwa, I. (2018).* Algorithms at work: Productivity monitoring applications and wearable technology as the new data-centric research agenda for employment and labor law. Saint Louis University Law Journal, 63(1), 21-54.
- This article argues that the emergence of productivity monitoring applications and wearable technologies will lead to new legal issues for employment and labor law. These issues include concerns over privacy, unlawful employment discrimination, worker safety, and workers’ compensation. The author argues that the emergence of productivity monitoring applications will result in a conflict between the employer’s pecuniary interests and the privacy interests of the employees. They end by discussing future research for privacy law scholars in dealing with employee privacy and the collection and use of employee data.
- Ajunwa, I. (2019).* Age discrimination by platforms. Berkeley Journal of Employment and Labor Law, 40(1), 1-28.
- This article examines how platforms in the workplace might enable, facilitate, or contribute to age discrimination in employment. The author discusses the legal difficulties in dealing with such practices, namely, meeting the burden of proof and assigning liability in cases where the platform acts as an intermediary. The author proceeds by offering a three-part proposal to combat the age discrimination that accompanies platform authoritarianism.
- Ajunwa, I. (2020).* The paradox of automation as anti-bias intervention. Cardozo Law Review, 41(5), 1671-1742.
- This article rejects the mistaken understanding of algorithmic bias as a technical issue. Instead, the author argues that the introduction of bias in the hiring process derives largely in part from an American legal tradition of deference to employers. The author discusses novel approaches that might be used to make employers and designers of algorithmic hiring systems liable for employment discrimination. In particular, the author offers the doctrine of discrimination per se, which interprets an employer’s failure to audit and correct automated hiring platforms for disparate impact as prima facie evidence of discriminatory intent.
- Ajunwa, I., & Greene, D. (2019).* Platforms at work: Automated hiring platforms and other new intermediaries in the organization of work. Research in the Sociology of Work, 33(1), 61-91.
- This chapter discusses how tools provided by the sociology of work might be used to study work platforms, such as automated hiring platforms. The authors highlight five core affordances that work platforms offer employers and discuss how they combine to create a managerial frame in which workers are viewed as fungible human capital. Focus is given to the coercive nature of work platforms and the asymmetrical flow of information that favors the interests of employers.
- Boulding, W., et al. (2005).* A customer relationship management roadmap: What is known, potential pitfalls, and where to go. Journal of Marketing, 69(4), 155-166. https://doi.org/10.1509%2Fjmkg.2005.69.4.155
- This article asserts that customer relationship management (CRM) is the result of the “continuing evolution and integration of marketing ideas and newly available data, technologies, and organizational forms…” The authors predict that CRM will continue to evolve as new ideas and technologies are incorporated into CRM activities. They discuss what is known about CRM, the potential pitfalls and unknowns faced by its implementation, and offer recommendations for further research.
- Brown, E. A. (2016). The Fitbit fault line: Two proposals to protect health and fitness data at work. Yale Journal of Health Policy, Law and Ethics, 16(1), 1-50.
- This article argues that federal law does not adequately protect employees’ health and fitness data from potential misuse; moreover, employers are incentivized to use such data when making significant decisions, such as hiring and promotions. The author offers two remedies for the improper use of health and fitness data. First, the enactment and enforcement by the Federal Trade Commission of a mandatory privacy labeling law for health-related devices and apps would improve employee control over their health data. Second, the Health Insurance Portability and Accountability Act of 1996 can extend its protections to the health-related data that employers may acquire about their employees.
- Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715622512
- The author considers the opacity of machine learning algorithms as a problem for consequential mechanisms of classification and ranking, e.g., spam filters and search engines. The author identifies three types of opacity: opacity resulting from intentional corporate or state secrecy, technical illiteracy, or the characteristics of machine learning algorithms. They conclude by arguing that identifying these types of opacity is necessary for effective technical and non-technical solutions to be introduced.
- Chen, L., et al. (2018). Investigating the impact of gender on rank in resume search engines. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-14). Association for Computing Machinery. https://doi.org/10.1145/3173574.3174225
- The authors examine gender-based inequalities in the context of resume search engines, understood as tools that allow recruiters to proactively search for candidates based on keywords and filters. They focus on the ranking algorithms used by three major hiring websites: Indeed, Monster, and CareerBuilder. They conclude that the ranking algorithms used by all three sites omit candidates’ inferred gender as a feature, yet demonstrate unfairness against female candidates in roughly a third of the job titles examined; a simplified audit of this kind is sketched below.
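An illustrative audit in the spirit of the study (the data layout is an assumption; inferred gender is an audit-side annotation, not a ranking feature):

```python
# Compare average rank position by inferred gender for a job-title query.
# Toy data; a persistent gap in mean rank across many job titles would
# signal the group-level unfairness the authors report for ~1/3 of titles.
import pandas as pd

results = pd.DataFrame({
    "job_title": ["welder"] * 6,
    "rank": [1, 2, 3, 4, 5, 6],                       # 1 = top of the list
    "inferred_gender": ["m", "f", "m", "m", "f", "f"],
})

print(results.groupby("inferred_gender")["rank"].mean())
```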
- Chung, C. F., et al. (2017). Finding the right fit: Understanding health tracking in workplace wellness programs. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 4875-4886). Association for Computing Machinery. https://doi.org/10.1145/3025453.3025510
- This paper uses empirical data to gain an understanding of employee experiences and attitudes towards health tracking in workplace health and wellness programs. The authors find that employees are concerned predominantly with program fit rather than privacy. The authors also highlight a gap between a holistic understanding of health and the easily measurable features with which workplace programs are concerned.
- Citron, D., & Pasquale, F. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89(1), 1-34.
- Predictive algorithms use data to rank and rate individuals. This article argues that overseeing such systems should be a critical aim of the legal system. Certain protections need to be implemented, such as allowing regulators to test scoring systems to ensure fairness and accuracy, and providing individuals an opportunity to challenge decisions based on scores that mischaracterize them. The authors argue that absent such protections, the adoption of predictive algorithms risks producing stigmatizing scores on the basis of biased data.
- Danna, A., & Gandy, O. H. (2002). All that glitters is not gold: Digging beneath the surface of data mining. Journal of Business Ethics, 40(4), 373-386. https://doi.org/10.1023/A:1020845814009
- This article examines the manner in which data mining technologies are applied in the market and the social concerns that arise in response to the application of such technologies in the public and private sectors. The authors argue that “at the very least, consumers should be informed of the ways in which information about them will be used to determine the opportunities, prices, and levels of service they can expect to enjoy in their future relations with a firm.” The authors offer the Kantian principle of “universal acceptability” and the Rawlsian principles of special regard for those who are least advantaged to guide the development of data mining and consumer profiles.
- Delfanti, A. (2021). The warehouse: Workers and robots at Amazon. Pluto Press.
- This book examines the Amazon warehouse as a site of labor that has been irrevocably shaped by novel technological advancements and the often oppressive managerial techniques they afford. By contrasting current technologies, such as robotics and algorithmic systems, with speculative developments (evidenced through Amazon’s patents), the author demonstrates the warehouse’s imperative to standardize, measure, and discipline human work rather than replace it. The author contrasts Amazon’s continuing reliance on this low-cost labor with attempts at unionization and resistance.
- De Stefano, V. (2020). Algorithmic bosses and how to tame them. C4eJournal: Perspectives on Ethics, The Future of Work in the Age of Automation and AI Symposium. [2020 C4eJ 52] [20 eAIj 12].
- The author traces the history of management in the workplace, from Taylorism to the arrival of algorithmic management. The author then surveys recent developments in the regulation of algorithmic management. They argue that the arrival of algorithmic management reveals that the current development of ethical principles for AI has not been appropriately focused on issues related to work and employment. The author suggests turning to current human rights frameworks, which already focus on the rights of workers, to inform the development of ethical principles and AI technologies.
- Fort, T. L., et al. (2016). The angel on your shoulder: Prompting employees to do the right thing through the use of wearables. Northwestern Journal of Technology and Intellectual Property, 14(2), 139-170.
- This article examines the use of wearables as personal information gathering devices that feed into larger data sets. The authors argue that cybersecurity and privacy guidelines, such as those offered by the European Data Protection Supervisor and the 2014 National Institute of Standards and Technology Cybersecurity Framework, should be implemented from the bottom-up in order to regulate the use of personal data.
- Georgiou, K., & Nikolaou, I. (2020). Are applicants in favor of traditional or gamified assessment methods? Exploring applicant reactions towards a gamified selection method. Computers in Human Behavior, 109, 106356.
- Gamification and data analysis are emerging trends in hiring decisions. This study gave job assessments to three hundred employees of information technology companies and surveyed their perceptions of gamified assessment methods on qualities such as overall satisfaction and fairness. The authors find that applicants report increased process satisfaction but an unchanged perception of predictive validity.
- Gilliom, J., & Monahan, T. (2012). Watching you work. In SuperVision: An introduction to the surveillance society (pp. 89-107). The University of Chicago Press.
- This chapter positions workplace surveillance as the new normal for most vocations. The authors provide a historical review to contextualize these practices of monitoring and disciplining employees, tracing a through line from the Taylorist and Fordist ideas of efficiency that rearranged manufacturing and industrial settings to the keystroke loggers and performance monitoring systems that color modern professional work. Specific case studies accompany this overview, drawing links among acts of surveillance in nursing administration, casino management, and white-collar cubicle work.
- Greenbaum, J. M. (2004).* Windows on the workplace: Technology, jobs and the organization of office work (2nd ed.). Monthly Review Press.
- This book discusses the changes that occurred from the 1950s to the present in management policies, work organization, and the design of office information systems. Focusing on the experiences of office workers, the author highlights the manner in which technologies have been used by employers to increase profits and gain control over workers.
- Greenbaum, D. (2016). Ethical, legal and social concerns relating to exoskeletons. ACM SIGCAS Computers and Society, 45(3), 234-239.
- This paper provides an overview of the issues surrounding the emergence of exoskeletons. The author aims to “provide anticipatory expert opinion that can provide regulatory and legal support for this technology, and perhaps even course-correction if necessary, before the technology becomes ingrained in society.”
- Hardy, K., & Barbagallo, C. (2021). Hustling the platform: Capitalist experiments and resistance in the digital sex industry. The South Atlantic Quarterly, 120(3), 533-551. https://doi.org/10.1215/00382876-9154898
- As an increasing share of sex work in the United Kingdom becomes digitally mediated, the authors note that the new forms of control platformization permits over sex workers have largely been neglected in the literature. The authors examine AdultWork, the dominant platform for sex work within this region, and how its algorithms and interfaces have driven down standards and prices and normalized high-risk behaviours. The authors use examples of resistance and collective organization by sex workers to suggest wider strategies of labor transformation in platform work.
- Hull, G., & Pasquale, F. (2018). Toward a critical theory of corporate wellness. BioSocieties, 13(1), 190-212. https://doi.org/10.1057/s41292-017-0064-1
- Employee wellness programs aim to incentivize and supervise healthy employee behaviors; however, there is little evidence that such programs increase productivity or profit. This article analyzes employee wellness programs as “providing an opportunity for employers to exercise increasing control over their employees.” The authors conclude by arguing that a renewed commitment to public health programs occluded by the private sector’s focus on wellness programs would constitute a better investment of resources.
- Jahanbakhsh, F., et al. (2020). An experimental study of bias in platform worker ratings: The role of performance quality and gender. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-13). Association for Computing Machinery. https://doi.org/10.1145/3313831.3376860
- This paper presents the results of a study on the use of performance ratings in online labor platforms. The authors use variables such as gender (for both the worker and the rater) to compare how workers are rated by different users, as well as how workers are rated in comparison to each other. The authors found that low-performing female workers were rated lower than their male counterparts, and that high-performing workers of all genders received significantly higher ratings than low-performing ones.
- Kellogg, K. C., et al. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366–410. https://doi.org/10.5465/annals.2018.0174
- The authors of this research article propose a “6 Rs” framework for studying mechanisms of control in the workplace. These Rs are restricting and recommending to direct workers, recording and rating to evaluate them, and replacing and rewarding to discipline them. The authors also provide a literature review of labor process theory, algorithmic capabilities, the impacts of algorithms in the workplace, and examples of worker resistance.
- Kim, P., & Scott, S. (2019). Discrimination in online employment recruiting. St. Louis University Law Journal, 63(1), 93-118.
- This article examines the question of when employers should be liable for discrimination based on their online recruiting strategies. The authors discuss the extent to which existing law can address concerns over discriminatory advertising, and they note the often-overlooked provisions forbidding discriminatory advertising practices found in Title VII of the Civil Rights Act of 1964 and the Age Discrimination in Employment Act. The authors conclude that existing doctrine is suited to address highly problematic advertising practices; however, the extent to which current law can address all practices with discriminatory effects remains uncertain.
- Mateescu, A., & Ticona, J. (2020). Invisible work, visible workers: Visibility regimes in online platforms for domestic work. In D. D. Acevedo (Ed.), Beyond the algorithm: Qualitative insights for gig work regulation (pp. 57-81). Cambridge University Press. https://doi.org/10.1017/9781108767910
- Although caregiving work has often been considered impenetrable to automation and technological optimization, this article examines the trend towards migrating such domestic work into the platform economy. With a focus on shifting demographics and labor practices in the United States over the past half-century, the authors document how nannies, house cleaners, and eldercare workers have increasingly relied on digital technologies to find work. The historic invisibility of these highly gendered and racialized forms of labor is put into conversation with the precarious visibility regimes inherent to these platforms.
- Nissenbaum, H., & Patterson, H. (2016).* Biosensing in context: Health privacy in a connected world. In D. Nafus (Ed.), Quantified: Biosensing technologies in everyday life (pp. 79-100). MIT Press.
- The emergence of novel information flows that accompany new health self-tracking practices creates vulnerabilities for individual users and society. The authors argue that such vulnerabilities implicate privacy and contend that these information flows “are best evaluated according to the ends, purposes, and values of the contexts in which they are embedded.”
- Pasquale, F. (2015).* The black box society: The secret algorithms that control money and information. Harvard University Press.
- This book discusses how corporations use large swaths of data to pursue profits. The use of such data is surrounded by secrecy, making it difficult to discern whether or not the interests of individuals are being protected. The author argues that the decisions made by firms using data should be fair, non-discriminatory, and open to criticism. This requires eliminating the secrecy surrounding current practices and increasing the accountability of those using such data to make important decisions.
- Raghavan, M., et al. (2020). Mitigating bias in algorithmic hiring: Evaluating claims and practices. In M. Hildebrandt & C. Castillo (Eds.), Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 469-481). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372828
- This work conducts an in-depth analysis of the bias-related practices of vendors of algorithmic pre-employment assessments by examining the vendors’ publicly available statements. The authors find that it is important to consider technical systems together with the context surrounding their use and deployment. They conclude by offering several policy recommendations intended to reduce the risk of bias in the systems under consideration.
- Rosenblat, A., & Stark, L. (2016). Algorithmic labor and information asymmetries: A case study of Uber’s drivers. International Journal of Communication, 10(27), 3758–3784. https://doi.org/10.2139/ssrn.2686227
- This paper presents findings from an eight-month ethnographic study on Uber drivers. The authors argue that the Uber service configuration places the company in a position of power, and the app and its algorithms are structured to control workers. They argue that these power differentials are made greater through the misclassification of workers as independent contractors.
- Srnicek, N. (2017).* Platform capitalism. John Wiley & Sons.
- This book critically examines the emergence of platform capitalism, understood as the rise of platform-based businesses. The author situates the growth of platform capitalism within the broader history of capitalism’s development, highlighting the manner in which a small number of platform-based businesses are transforming the contemporary economy and how such businesses will need to adapt in the future to remain sustainable.
- Steup, R., et al. (2019). Feeding the world with data: Visions of data-driven farming. In Proceedings of the 2019 ACM Designing Interactive Systems Conference (DIS ’19). Association for Computing Machinery. https://doi.org/10.1145/3322276.3322382
- Data-driven farming practices that employ sensors, algorithms, and networking technologies to guide decision making have seen increased investment in recent years. Using critical discourse analysis of 34 agritech startup websites, the authors discern four future visions of agriculture promoted by these companies. By engaging in this speculative design practice, the authors contemplate the repercussions these scenarios might have on power relations between the farmer and other stakeholders.
- Zuboff, S. (1988).* In the age of the smart machine: The future of work and power. Basic Books.
- This book discusses the computerization of the workplace and the manner in which it affects the work experience of labor and management. The author introduces the concept of “informating,” understood as a process unique to information technology through which digitalization translates activities, objects, and events into information.
- Toxtli, C., et al. (2021). Quantifying the invisible labor in crowd work. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-26.
- Crowdsourcing markets such as Amazon Mechanical Turk are increasingly popular, centralized venues where workers complete online tasks posted by requesters. The authors discuss the invisible labor costs associated with this work. One source of invisible labor is hypervigilance: workers must remain on call and vigilant in order to find good work. The authors also highlight the problem of digital labor platforms leveraging knowledge of high-return job offerings to manipulate workers into staying longer on their platforms.
- Williams, J. D., et al. (2019). Technological workforce and its impact on algorithmic justice in politics. Customer Needs and Solutions, 6(3), 84-91. https://doi.org/10.1007/s40547-019-00103-3
- The authors argue that diversifying the workforce in the tech industry and incorporating interdisciplinary education, such as principles of ethical coding, can help remedy the negative consequences of algorithmic bias. Allowing the diverse perspectives of tech employees to influence the development of algorithms will result in systems that incorporate a broad range of world views, and such systems are less likely to overlook the experiences of those belonging to groups that have been historically underrepresented.
- Wilson, C., et al. (2021). Building and auditing fair algorithms: A case study in candidate screening. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 666-677). Association for Computing Machinery.
- This paper outlines a framework for auditing algorithms through a case study of pymetrics, a service that screens job candidates for employers based on their performance in a suite of games grounded in psychological research. Unlike prior work, the authors conduct this audit in cooperation with the company being audited. They discuss the protocols used to keep such a cooperative audit fair and transparent, present the audit’s results, and offer recommendations for future cooperative audits.
- Wood, A. J., et al. (2018). Good gig, bad gig: Autonomy and algorithmic control in the global gig economy. Work, Employment and Society, 33(1), 56–75. https://doi.org/10.1177/0950017018785616
- The authors present the results of a study of online freelancing platforms. They argue that worker agency is shaped by a platform’s use of algorithmic control in remote work. While the use of these algorithms appears to offer workers more autonomy and flexibility, the authors point to other issues created by platform work, including low pay and long hours.
Chapter 44. Smart City Ethics: How “Smart” Challenges Democratic Governance (Ellen P. Goodman)
https://www.doi.org/10.1093/oxfordhb/9780190067397.013.53
- Ahvenniemi, H., et al. (2017). What are the differences between sustainable and smart cities? Cities, 60, 234–245. https://doi.org/10.1016/j.cities.2016.09.009
- The authors analyze 16 city assessment frameworks. They find that smart city frameworks lack important environmental indicators and focus mainly on social and economic sustainability. Based on this observation, the authors argue for developing smart city frameworks to include environmental sustainability. The authors suggest replacing the term “smart cities” with “smart sustainable cities” to further highlight the importance of environmental sustainability.
- Bina, O., et al. (2020). Beyond techno-utopia and its discontents: On the role of utopianism and speculative fiction in shaping alternatives to the smart city imaginary. Futures, 115. https://doi.org/10.1016/j.futures.2019.102475
- The authors draw on works of speculative fiction to add to the social-scientific discourse surrounding potential smart city futures. They argue that techno-utopian fantasies offer unique modes of knowledge creation that may be ignored in academic contexts. The authors specifically note the genre’s focus on constructing new futures, highlighting the values and practices these potential realities reflect, and illustrating additional warning signs that warrant academic consideration.
- Brauneis, R., & Goodman, E. P. (2018).* Algorithmic transparency for the smart city. Yale Journal of Law & Technology, 20, 103-176.
- This article examines the limits of transparency around governmental deployment of big data analytics. The authors critique the opacity of governmental predictive algorithms and analyze predictive algorithm programs in local and state governments. Their analysis tests how impenetrable resulting black boxes are, and they assess whether open records processes would enable citizens to discover the policy judgements embodied by algorithms. The authors propose a framework for sufficient algorithm transparency for governments and public agencies.
- Brooks, B. A., & Schrubbe, A. (2016). The need for a digitally inclusive smart city governance framework. University of Missouri-Kansas City Law Review, 85(4), 943-952.
- This article examines how smart cities in urban and rural areas create and deploy open data platforms for citizens, and it analyzes the differing considerations and governance mechanisms for rural cities compared to urban ones. The authors examine several cases of municipal smart technology adoption to explore policy options for distributing resources that address citizen needs in those areas.
- Caragliu, A., & Del Bo, C. F. (2021). Smart cities and urban inequality. Regional Studies. https://doi.org/10.1080/00343404.2021.1984421
- The authors seek to experimentally test the claim that smart cities are associated with increased rates of income inequality using data from European cities. They conduct a regression analysis to predict Gini income inequality given key city smartness indicators related to economic, environmental, and infrastructural health. The results suggest that smarter cities decrease urban income inequality, and that there need not be a trade-off between efficiency and equity.
- Cardullo, P., & Kitchin, R. (2019). Smart urbanism and smart citizenship: The neoliberal logic of ‘citizen-focused’ smart cities in Europe. Environment and Planning C: Politics and Space, 37(5), 813-830. https://doi.org/10.1177/0263774X18806508
- This article argues that models of smart cities in Europe endorse a market-focused view of urban development and citizenship. To support their claim, the authors perform a discourse analysis on policy documents and conduct interviews with key parties involved in facets of smart city development. They contend that future practices should center the needs of citizens and the community above the needs of the market.
- Cardullo, P., et al. (2018). Living labs and vacancy in the neoliberal city. Cities, 73, 44-50. http://dx.doi.org/10.1016/j.cities.2017.10.008
- This paper evaluates the role of living labs (LLs) – technologies that foster local digital innovation to “solve” local issues – in the context of smart cities. The authors outline various approaches to LLs and argue that they are actively used to bolster smart city discourse.
- Castelnovo, W., et al. (2016). Smart cities governance: The need for a holistic approach to assessing urban participatory policy making. Social Science Computer Review, 34(6), 724-739. https://doi.org/10.1177/0894439315611103
- This paper critically assesses the state of existing smart city success indicators, finding that work in the field typically analyzes each of the technical, governance, and human facets of smart city success separately. The authors propose a novel framework that captures all three dimensions to promote more holistic thinking in the smart cities discourse.
- Chamoso, P., et al. (2020). Smart city as a distributed platform: Toward a system for citizen-oriented management. Computer Communications, 152, 323-332. https://doi.org/10.1016/j.comcom.2020.01.059
- This work explicates the need for the principles of modularity and ease of reuse in the design of smart city services to facilitate efficient integration across domains and developers. The authors highlight shortcomings of existing smart city management platforms and lay out desiderata for improved systems. They propose a novel architecture that performs well at scale, is highly reusable, and prioritizes transparency and the potential for civic engagement.
- Charnock, G., et al. (2019). From smart to rebel city? Worlding, provincializing and the Barcelona model. Urban Studies, 58(3), 581-600. https://doi.org/10.1177/0042098019872119
- This article tracks the evolution of the so-called “Barcelona Model” of urban transformation. The authors trace this evolution from an originally dogmatic vision of the smart city presented by a centre-right city council to a radically repurposed smart city model following the successes of the citizens’ platform Barcelona en Comú in 2015. The authors highlight the new council’s focus on enhancing participative democracy and securing digital rights and sovereignty for Barcelona residents. Despite the progressive nature of these goals, they close by acknowledging some of the challenges that accompany the repurposing of smart technologies.
- Clark, J. (2020). Uneven innovation: The work of smart cities. Columbia University Press.
- This book links smart cities to wider trends in urban innovation and the production of markets. The author argues that smart cities should be understood primarily as an economic, rather than technological, issue. The smart city project is problematized, and the author shows the many ways in which it reinforces – rather than addresses – underlying patterns of inequality, precariousness, and powerlessness that characterize neoliberal city building more generally.
- Cugurullo, F. (2020). Urban artificial intelligence: From automation to autonomy in the smart city. Frontiers in Sustainable Cities, 2, 38. https://doi.org/10.3389/frsc.2020.00038
- This paper stresses the need to understand the development of artificial intelligence specifically in the context of urban development, a concept the author calls “urban AI.” The author argues that the creation of autonomous cities necessitates studying how AI and urban spaces co-evolve in order to better understand their interaction with smart city services and citizen empowerment. The author uses the case study of Masdar City to motivate a research agenda for future work on autonomous cities.
- Eckhoff, D., & Wagner, I. (2017). Privacy in the smart city—Applications, technologies, challenges, and solutions. IEEE Communications Surveys & Tutorials, 20(1), 489–516. https://doi.org/10.1109/COMST.2017.2748998
- This paper attempts to systemize application areas, technologies, privacy types, and data sources to bring structure to the fuzzy concept of a “smart city.” The authors also review existing privacy-enhancing technologies and discuss promising directions for future research. The paper is meant to serve as a reference guide for the development of privacy-friendly smart cities.
- Edwards, L. (2016). Privacy, security and data protection in smart cities: A critical EU law perspective. European Data Protection Law Review, 2(1), 28-58.
- This paper argues that smart cities combine the three greatest threats to personal privacy: the Internet of Things, Big Data, and the Cloud. Edwards notes that current regulatory frameworks fail to effectively address these threats and discusses whether and how EU data protection laws can control these possible threats to personal privacy.
- Evans, J., et al. (2019). Smart and sustainable cities? Pipedreams, practicalities and possibilities. Local Environment, 24, 557–564. https://doi.org/10.1080/13549839.2019.1624701
- This paper is concerned with the potential of smart cities to enhance social well-being and reduce environmental impact. The authors argue that social equity and environmental sustainability are neither a priori absent nor de facto present in current smart city initiatives but must be deliberately included and maintained as smart cities materialize.
- Goodspeed, R. (2015). Smart cities: Moving beyond urban cybernetics to tackle wicked problems. Cambridge Journal of Regions, Economy and Society, 8(1), 79-92.
- This paper aims to describe institutions for municipal innovation and IT-enabled collaborative planning to address “wicked”, or inherently political, problems. The author proposes that smart cities, which use IT to pursue efficient systems through real-time monitoring and control, are equivalent to the idea of urban cybernetics debated in the 1970s. Drawing on Rio de Janeiro’s Operations Center, the author argues that wicked urban problems require solutions that involve local innovation and stakeholder participation.
- Guma, P. K. (2020). Smart city making? The spread of ICT-driven plans and infrastructures in Nairobi. Urban Geography, 42(3), 360-381. https://doi.org/10.1080/02723638.2020.1715050
- This article draws on observations, interviews, and policy analysis to explore smart city development in the city of Nairobi, Kenya. The author contrasts the ambitious visions of city planners with the ordinary realities of life within the city, arguing that technocratic approaches and deterministic appeals remain highly deceptive. In contrast to top-down and universalizing agendas, the author concludes that smart city processes remain politicized, contested, and shaped by local and context-specific realities.
- Halpern, O., et al. (2013). Test-bed urbanism. Public Culture, 25(2), 272-306. https://doi.org/10.1215/08992363-2020602
- This essay interrogates how ubiquitous computing infrastructures produce new forms of experimentation with urban territory. These protocols of “test-bed urbanism” are new methods for spatial development that are changing the form, function, economy, and administration of urban life.
- Joss, S., et al. (2019). The smart city as global discourse: Storylines and critical junctures across 27 cities. Journal of Urban Technology, 26(1), 3–34. https://doi.org/10.1080/10630732.2018.1558387
- This paper employs a systematic, webometric analysis of key texts associated with 5,553 cities worldwide to clarify and highlight the practical importance of smart cities. The authors find that the discourse about smart cities is centred around 27 predominately capital cities, and they argue that city “smartness” is closely linked to cities’ global presence and positioning. The authors conclude with a discussion of the resulting implications for research, policy, and practice.
- Karvonen, A., et al. (Eds.). (2018).* Inside smart cities: Place, politics and urban innovation. Routledge.
- This chapter explores the tensions within second-generation smart city experiments such as Barcelona. It maps the shift from first-generation to second-generation policies developed by Barcelona’s liberal government and explores how concepts of technological sovereignty emerged. The authors reflect on the central tenets, potentialities, and limits of Barcelona’s Digital Plan and examine how the city’s new digital paradigm can address pressing urban challenges.
- Kitchin, R. (2014). The data revolution: Big data, open data, data infrastructures and their consequences. Sage Publications.
- This book analyzes contemporary advancements in the generation and analysis of data, arguing that reductions in cost and the robustness of available infrastructures have facilitated an emergent data revolution. The author argues that this data revolution is changing the way we understand knowledge, conduct business, and govern public and private spaces, while also raising important questions about surveillance, privacy, and more. The book provides a critical analysis of this data landscape, reviewing the technical and ethical dimensions of existing data infrastructures and data analytics.
- Kitchin, R. (2014). The real-time city? Big data and smart urbanism. GeoJournal, 79(1), 1-14. https://www.jstor.org/stable/24432611
- This article draws on various examples of pervasive and ubiquitous computing in smart cities to detail how urban spaces are being instrumented with Big Data-producing digital devices and infrastructure. While smart city advocates argue that Big Data can provide material for envisioning and enacting more efficient, sustainable, productive, and transparent cities, the author aims to critically reflect on the implications of big data and smart urbanism by analyzing five emerging concerns: the politics of big urban data, technocratic governance and city development, corporatization of city governance, hackable cities, and the panoptic city.
- Kitchin, R., et al. (2018).* Citizenship, justice and the right to the smart city. In P. Cardullo, C. Di Feliciantonio, & R. Kitchin (Eds.), The right to the smart city (pp. 1-24). Emerald Publishing Limited.
- This chapter engages the smart city in various practical, political, and normative questions relating to citizenship, social justice, and the public good. The authors detail some troubling ethical issues associated with smart city technologies and examine how citizens have been conceived and operationalized in the smart city, proposing that the “right to the smart city” should be a fundamental principle of smart city endeavors.
- Kitchin, R., & Dodge, M. (2019).* The (in)security of smart cities: Vulnerabilities, risks, mitigation, and prevention. Journal of Urban Technology, 26(2), 47-65. https://doi.org/10.1080/10630732.2017.1408002
- This article examines how smart city technologies designed to produce urban resilience and reduce risk paradoxically create new vulnerabilities in city infrastructure and threaten to open up extended forms of criminal activity. By identifying forms of smart city vulnerabilities and detailing several examples of urban cyberattacks, the authors analyze existing smart city risk mitigation strategies and propose a set of systemic interventions that extends beyond technical solutions.
- Krivý, M. (2018). Towards a critique of cybernetic urbanism: The smart city and the society of control. Planning Theory, 17(1), 8-30. https://doi.org/10.1177/1473095216645631
- This article engages with popular criticisms of the smart city. It highlights the limitations of these criticisms and advances an alternative critique of the smart city as the urban embodiment of Gilles Deleuze’s ‘society of control.’ The author argues that the smart city operates according to the modalities of second-order cybernetics, understanding urban subjectivity through the flow of data and understanding politics as a matter of environmental-behavioral control.
- Leitheiser, S., & Follmann, A. (2019). The social innovation-(re)politicisation nexus: Unlocking the political in actually existing smart city campaigns? The case of SmartCity Cologne, Germany. Urban Studies, 57(4), 894-915. https://doi.org/10.1177/0042098019869820
- This article reflects on the smart city as the latest iteration of a post-political and neoliberal vision of urban governance. It argues that, as smart city visions give way to ‘actually existing’ strategies at the local level, they intersect and negotiate with place-specific contexts. To understand this translation, the authors develop a Social Innovation-(Re)politicization Nexus (SIRN) for contesting and co-producing more transformative and locally contingent smart city visions. They explore this tension between top-down and bottom-up methods through a case study of SmartCity Cologne in Germany, arguing that innovative development must be accompanied by a re-politicizing of hegemonic logics and framings.
- Marvin, S., et al. (Eds.). (2015). Smart urbanism: Utopian vision or false dawn? Routledge.
- This book critically assesses “smart urbanism” – the rebuilding of cities through the integration of digital technologies with neighborhoods, infrastructures, and people – as a unique panacea to contemporary urban challenges. The authors explore what new capabilities are created by smart urbanism, by whom, and with what exclusions, as well as the material and social consequences of technological development and application. The book aims to identify and convene researchers, commentators, software developers, and users within and outside mainstream smart urbanism discourses to assess which urban problems can be addressed by smart technology.
- Masucci, M., et al. (2020). The smart city conundrum for social justice: Youth perspectives on digital technologies and urban transformations. Annals of the American Association of Geographers, 110(2), 476-484. https://doi.org/10.1080/24694452.2019.1617101
- This paper questions whether smart cities will benefit and empower citizens in an equitable fashion by centering the perspectives of young people of color. Interviews reveal that young people believe smart city technology will not address fundamental social issues including drug use and homelessness. Moreover, the authors find a predominant belief that wealthier individuals with greater access to such technology will benefit disproportionately more than those at the margins of society.
- McFarlane, C., & Söderström, O. (2017).* On alternative smart cities: From a technology-intensive to a knowledge-intensive smart urbanism. City, 21(3-4), 312-328. https://doi.org/10.1080/13604813.2017.1327166
- This article explores the influence of corporate-led urban development in the smart urbanism agenda. Drawing on critical urban scholarship and initiatives across the Global North and South, the authors examine steps towards an alternative smart urbanism in which urban priorities and justice drive the use or non-use of technology.
- Morozov, E., & Bria, F. (2018). Rethinking the smart city. Rosa Luxemburg Stiftung.
- This article provides a political-economic analysis of smart city development to critique the promises of cheap and effective smart city solutions to social and political problems. The authors propose that the smart city can only be understood within the context of neoliberalism as public city infrastructure and services are managed by private companies, thereby de-centralizing and de-personalizing the political sphere. In response, the authors offer alternative smart city models that rely on democratic data ownership regimes, grassroots innovation, and cooperative service provision models.
- O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
- This book aims to reveal how mathematical models used today are opaque, unregulated, uncontestable, and reinforce discrimination. The author shows how black box models shape individual and collective futures and undermine democracy by exacerbating existing inequalities, and calls on engineers and policymakers to develop and regulate the use of algorithms more responsibly.
- Sánchez-Corcuera, R., et al. (2019). Smart cities survey: Technologies, application domains and challenges for the cities of the future. International Journal of Distributed Sensor Networks, 15(6). https://doi.org/10.1177/1550147719853984
- This survey paper provides context as to how smart cities have traditionally been conceived, referring particularly to the concept’s technological, human, and institutional dimensions. The authors highlight the role information and communication technologies play in developing smart cities, both in theory and in practice. The paper closes by noting open challenges in the field and the technologies that have been implemented to actualize these visions.
- Shelton, T., et al. (2015).* The ‘actually existing smart city’. Cambridge Journal of Regions, Economy and Society, 8(1), 13-25. https://doi.org/10.1093/cjres/rsu026
- This paper aims to ground critiques of the smart city in a historical and geographic context. The authors closely focus on smart city policies in Louisville and Philadelphia (examples of “actually existing” smart cities rather than exceptional, paradigmatic centers such as Songdo or Masdar) to analyze how these policies arose and their unequal impact on the urban landscape. The authors argue that an uncritical, ahistorical, and aspatial understanding of data presents a problematic approach to data-driven governance and the smart city imaginary.
- Söderström, O., et al. (2014).* Smart cities as corporate storytelling. City, 18(3), 307-320. https://doi.org/10.1080/13604813.2014.906716
- This article examines corporate visibility and legitimacy in the smart city market. Drawing on actor-network theory and critical planning theory, this paper analyzes how IBM’s smarter city campaign tells a story aimed at making the company an obligatory passage point in the implementation of urban technologies and calls for the creation of alternative smart city stories.
- Stübinger, J., & Schneider, L. (2020). Understanding smart city—A data-driven literature review. Sustainability, 12(20). https://doi.org/10.3390/su12208460
- This paper systematically reviews the top 200 publications, according to Google Scholar, in the area of smart cities. Using methods from natural language processing (NLP) and time series forecasting, the authors identify the most relevant streams as smart infrastructure, smart economy & policy, smart technology, smart sustainability, and smart health. The authors provide a review of the literature in each stream, highlighting perceived strengths and weaknesses.
- Townsend, A. M. (2013). Smart cities: Big data, civic hackers, and the quest for a new utopia. W. W. Norton & Company.
- This book explores the history of urban information technologies to trace how cities have used and continue to use evolving technology to address increasingly complex policy challenges. The author analyzes the mass interconnected networks of contemporary metropolitan centers, drawing from examples of smart technology applications in cities around the world to document and examine emerging techno-urban landscapes. The author illuminates the motivations, aspirations, and shortcomings of various smart city stakeholders, including entrepreneurs, municipal government officials, and software developers, and investigates how these actors shape urban futures.
- Trencher, G. (2019). Towards the smart city 2.0: Empirical evidence of using smartness as a tool for tackling social challenges. Technological Forecasting and Social Change, 142, 117–128. https://doi.org/10.1016/j.techfore.2018.07.033
- This paper compares the dominant, techno-economic, and centralized approach of the “smart city 1.0” with the emergence of the so-called “smart city 2.0.” The smart city 2.0 is framed as a decentralized and people-centric approach in which smart technologies are employed as tools to tackle social problems. The paper examines Aizuwakamatsu Smart City in Fukushima, Japan, as a case study of the smart city 2.0.
- Trencher, G., & Karvonen, A. (2019). Stretching “smart”: Advancing health and well-being through the smart city agenda. Local Environment, 24(7), 610–627. https://doi.org/10.1080/13549839.2017.1360264
- This paper argues that contemporary smart cities focus primarily on stimulating economic activity and encouraging environmental protection, with less attention paid to social equity. The authors present a case study of Kashiwanoha Smart City in Japan, which they argue has stretched smart city activities beyond technological innovation to include the pursuit of greater health and well-being. Based on this case study, the authors contend that smart cities can tackle social problems, creating more equitable and liveable cities.
- Van Oers, L., et al. (2020). The politics of smart expectations: Interrogating the knowledge claims of smart mobility. Futures, 122(1). https://doi.org/10.1016/j.futures.2020.102604
- This article looks at the role of smart and ICT-enabled transport services in shaping the future of mobility in urban spaces. Drawing on two empirical case studies from Utrecht (the Netherlands) and Bordeaux (France), the authors explore the relationship between societal needs and developmental expectations. They argue that, as projects unfold, needs and vision become disentangled, leaving unachieved social benefits out of view and solutions deemed non-smart unexplored.
- Vanolo, A. (2014).* Smartmentality: The smart city as disciplinary strategy. Urban Studies, 51(5), 883-898. https://doi.org/10.1177/0042098013494427
- This article analyzes the power and knowledge implications of smart city policies that support new ways of imagining, organizing, and managing the city while impressing a new moral order to distinguish between the “good” and “bad” city. The author uses smart city politics in Italy as a case study to examine how smart city discourse has produced new visions of the “good city” and the role of private actors and citizens in urban management development.
- Wiig, A. (2018).* Secure the city, revitalize the zone: Smart urbanization in Camden, New Jersey. Environment and Planning C: Politics and Space, 36(3), 403-422. https://doi.org/10.1177/2399654417743767
- This paper analyzes the impacts of smart city agendas aligning with neoliberal urban revitalization efforts by examining redevelopment efforts in Camden, New Jersey. The author analyzes how Camden’s citywide multi-instrument surveillance network contributed to policing strategies that controlled the circulation of residents and prioritized the flow of capital into spatially bounded zones. The author underscores the crucial role of this surveillance-driven policing strategy in shifting the narrative of Camden from disenfranchised to economically and politically viable.
- Yigitcanlar, T., & Kamruzzaman, M. (2018). Does smart city policy lead to sustainability of cities? Land Use Policy, 73, 49–58. https://doi.org/10.1016/j.landusepol.2018.01.034
- This paper explores the connection between smart city policy and sustainability. Using data from 15 UK cities with differing “smartness” levels from 2005 to 2013, the authors find that the link between city smartness and carbon dioxide emissions is not linear. The authors call for increased scrutiny of existing smart cities and for smart city policy to better align itself with the goal of increased sustainability.
- Zandbergen, D., & Uitermark, J. (2020). In search of the smart citizen: Republican and cybernetic citizenship in the smart city. Urban Studies, 57(8), 1733-1748.
- This paper uses ethnographic research to understand the interplay between bottom-up notions of participatory democracy and top-down notions of surveillance in the political discourse surrounding smart cities. While these are typically seen as opposing and mutually exclusive views of smart citizenship, the authors argue that smart citizenship is better seen as the constant process of negotiating between these two forms of civic engagement.
An asterisk (*) after a reference indicates that it was included among the Further Readings listed at the end of the Handbook chapter by its author.