The Oxford Handbook of Ethics of AI: An Annotated Bibliography



I. Introduction & Overview

Chapter 1. The Artificial Intelligence of the Ethics of Artificial Intelligence: An Introductory Overview for Law and Regulation (Joanna J. Bryson)

  • Barocas, S., & Selbst, A. D. (2016). Big Data’s disparate impact. California Law Review, 104(3), 671–732.
    • This article argues that algorithms are likely to discriminate based on inherited human biases, and that American antidiscrimination law fails to recognize and protect against this source of discrimination. The article discusses how difficult this gap is to close—technically, legally, and politically—and proposes that doing so will require reconsidering the fundamental legal definitions of discrimination and fairness.
  • Boden, M., et al. (2010).* Principles of robotics. Engineering and Physical Sciences Research Council (EPSRC). https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/
    • This document proposes a set of five ethical rules to guide the designers, builders, and users of robots. The rules were formulated with the purpose of introducing robots in a manner that inspires public trust and confidence, maximizes potential benefits, and avoids unintended consequences. The document asserts that human designers and users—and not robots themselves—are the appropriate subjects of robotics regulation because robots are tools which are not ultimately responsible for their actions.
  • Brundage, M., et al. (2018).* The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. University of Oxford, Future of Humanity Institute, University of Cambridge, Centre for the Study of Existential Risk, Center for a New American Security, Electronic Frontier Foundation, OpenAI. https://arxiv.org/abs/1802.07228
    • This report surveys the landscape of potential digital, physical, and political security threats from malicious uses of AI and proposes ways to better forecast, prevent, and mitigate these threats. The report focuses on identifying the sorts of attacks that are likely to emerge if adequate defenses are not developed and recommends a broad spectrum of effective approaches to face them.
  • Brundage, M., et al. (2020). Toward trustworthy AI development: Mechanisms for supporting verifiable claims. arXiv preprint arXiv:2004.07213
    • This policy paper, authored by representatives of stakeholders from across academia, industry, and civil society, proposes a range of institutional and technical mechanisms to enable the verification of claims concerning AI development. These mechanisms and the accompanying policy recommendations aim to provide more reliable methods for ensuring safety, security, fairness, and privacy protection in AI systems.
  • Bryson, J. J. (2019).* The past decade and future of AI’s impact on society. In Towards a new enlightenment?: A transcendent decade (pp. 127–159). Openmind BBVA/Turner. https://www.bbvaopenmind.com/en/articles/the-past-decade-and-future-of-ais-impact-on-society/
    • This article reflects on the AI-induced social and economic changes that happened in the decade after smartphones were introduced in 2007, projects from this analysis to predict imminent political, economic, and personal challenges, and submits corresponding policy recommendations. The article argues that AI is less novel than is often assumed, and that its familiar challenges can be managed with appropriate regulation.
  • Bryson, J. J., et al. (2017).* Of, for, and by the people: The legal lacuna of synthetic persons. Artificial Intelligence and Law, 25(3), 273–291. https://doi.org/10.1007/s10506-017-9214-9
    • This article argues that conferring legal personhood on synthetic entities is morally unnecessary and legally problematic. It highlights the adverse consequences of certain noteworthy precedents and concludes that while giving AI legal personhood may have emotional or economic appeal, the difficulties of holding rights-violating synthetic entities to account outweigh these dubious considerations.
  • Cadwalladr, C. (2018, March 18).* ‘I made Steve Bannon’s psychological warfare tool’: Meet the data war whistleblower. The Guardian. https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump
    • This news article presents a profile of Christopher Wylie, the former research director of Cambridge Analytica who blew the whistle on the company’s illicit data-collection practices and influence campaign in the 2016 US presidential election.
  • Cath, C., et al. (2017). Artificial intelligence and the ‘good society’: The US, EU, and UK approach. Science and Engineering Ethics. https://doi.org/10.1007/s11948-017-9901-7
    • This article provides a comparative assessment of reports issued by the White House, the European Parliament, and the UK House of Commons to outline their respective visions on how to prepare society for the widespread use of artificial intelligence. To follow up its critiques, the article proposes two supplementary measures: first, the creation of an international advisory council, and second, a commitment to ground visions of a “good AI society” on human dignity.
  • Clark, J., & Hadfield, G. K. (2019). Regulatory markets for AI safety. arXiv preprint arXiv:2001.00078
    • This paper proposes a model of AI regulation whereby governments create a regulatory market requiring companies building or deploying AI to purchase private regulatory services. In this model, governments would directly license private regulators, with the goal of creating a market for regulatory services in which private regulators compete for the business of companies building and deploying AI.
  • Claxton, G. (2015).* Intelligence in the flesh: Why your mind needs your body much more than it thinks. Yale University Press.
    • This book argues—based on work in neuroscience, psychology, and philosophy—that human intelligence emanates from the body instead of the mind. With reference to examples like the endocrinal system, the book asserts that the body performs intelligent computations that people either overlook or falsely attribute to the brain. The book contends that the mind’s undeserved esteem has led to perverse social outcomes like the preference for white-collar over blue-collar labor. 
  • Cohen, J. E. (2013).* What privacy is for. Harvard Law Review, 126(7), 1904–1933. JSTOR. www.jstor.org/stable/23415061
    • This article argues that privacy—contrary to its image as an outdated, anti-progressive, and hence inessential ideal—is an essential precondition for people to be self-determining. The article asserts that competing imperatives like national security, efficiency, and entrepreneurship have been permitted to trample over privacy because it is perceived as an optional benefit to the inherent self-determining capacity of liberal agents. By contrast, the article asserts that the self is socially constructed, and that privacy is therefore an essential personal shield against the perverse influence tactics of commercial and government actors.
  • Dennett, D. C. (1978).* Why you can’t make a computer that feels pain. In Brainstorms: Philosophical essays on mind and psychology (1st ed., pp. 190–229). Bradford Books.
    • This essay argues that the ordinary concept of pain is incoherent and thus that any candidate for a pain-capable robot would be rejected by human judges because its experience would contradict at least one of our various intuitions about pain. The essay accepts that it is possible in principle for a robot to experience pain, and for humans to accept that it does, if a better physiological theory of pain is developed.
  • Dwivedi, Y. K., et al. (2021). Artificial intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57, 101994. https://doi.org/10.1016/j.ijinfomgt.2019.08.002
    • This article collects contributions from an interdisciplinary group of researchers on the social, economic, and intellectual impact of AI on various domains, including industry, government, science, the humanities, and law. Each perspective discusses the challenges, opportunities, and research agenda for AI in the relevant domain.
  • Guihot, M., et al. (2017). Nudging robots: Innovative solutions to regulate artificial intelligence. Vanderbilt Journal of Entertainment & Technology Law, 20(2), 385–456.
    • This article argues that public regulators can overcome the obstacles to their control of artificial intelligence (e.g., scarce public resources and the power of technology companies) and remedy the technology’s dangerous under-regulation by adopting a predictive two-step process: first, they would signal expectations to influence or “nudge” AI designers; and second, they would participate in and interact with relevant industries. These steps would permit regulators to gain expertise, competently assess risks, and develop appropriate regulatory priorities.
  • Gunkel, D. J. (2018).* Robot rights. MIT Press.
    • This book explores whether and to what extent robots can and should have rights. The book evaluates, analyzes, and ultimately rejects four key positions on this question before offering an alternative way of conceptualizing the social situation of robots and the implications they have for existing moral and legal systems. 
  • Hervey, M., & Lavy, M. (2021). The law of artificial intelligence. Sweet & Maxwell.
    • This book examines how existing civil and criminal law will apply to AI and explores the role of emerging laws designed specifically for regulating and harnessing AI. The topics covered in this book include liability arising in connection with the use of AI, the impact of AI on intellectual property, data protection, smart contracts, and the deployment of AI in legal services and the justice system.
  • Hüttermann, M. (2012).* DevOps for developers. Apress/Springer.
    • This book presents a practical introduction to “DevOps,” which is a set of practices that aim to streamline the software delivery process by fostering collaboration between software development and IT operations.
  • Katyal, S. K. (2019). Private accountability in the age of artificial intelligence. UCLA Law Review, 66(1), 54–141.
    • This article argues that artificial intelligence raises novel civil rights concerns whose resolution requires augmenting public regulation with private industry standards. The article contends that private industry standards, including codes of conduct, impact statements, and whistleblower protection, represent a new generation of accountability measures which have the potential to outperform ordinary regulation in civil rights enforcement.
  • Kearns, M., & Roth, A. (2019). The ethical algorithm: The science of socially aware algorithm design. Oxford University Press.
    • This book seeks to illustrate how human values can be integrated into algorithmic systems. It discusses issues concerning data privacy, algorithmic fairness, feedback loops, data-driven scientific research, and additional ethical issues, including transparency and accountability. The book highlights important trade-offs facing the development and governance of algorithmic systems and offers policy proposals in response.
  • Kroll, J. A., et al. (2017).* Accountable algorithms. University of Pennsylvania Law Review, 165(3), 633–705. https://scholarship.law.upenn.edu/penn_law_review/vol165/iss3/3
    • This article argues that foundational computer science techniques provide an optimal way of holding automated decision systems accountable, including in scenarios where outdated accountability mechanisms and legal standards fail to do so. According to the article, these techniques avoid the limitations of more popular proposals like source code and input transparency. The article suggests that using them may even improve the governance of decision-making in general.
  • List, C., & Pettit, P. (2011).* Group agency: The possibility, design, and status of corporate agents. Oxford University Press.
    • This book argues that group agents like companies, churches, and states are irreducible to the individual agents that constitute them, and that any legitimate approach to the social sciences, law, morality, and politics must take account of this fact. The book is grounded in ideas from social choice theory, economics, and philosophy.
  • Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007
    • This article aims to situate the emerging field of explainable AI and algorithmic transparency within the broader context of social science research on human explanation. The article argues that explainable AI should build on this existing research, particularly insights from philosophy, social psychology, and cognitive science.
  • Nemitz, P. (2018). Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180089. https://doi.org/10.1098/rsta.2018.0089
    • This article argues that the principles of democracy, rule of law, and human rights must be incorporated into AI by design and proposes a practical framework to guide this practice. According to the article, this practice is necessary to maintain the strength of constitutional democracy because (a) AI will eventually govern core functions of society and (b) the decoupling of technology from constitutional principles has already precipitated illegal and undemocratic behavior. The article considers which of AI’s challenges can be regulated by ethical norms and which demand the force of law.
  • OECD. (2019).* Recommendation of the council on artificial intelligence, OECD/LEGAL/0449 (No. 0449; OECD Legal Instruments). Organisation for Economic Co-operation and Development. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
    • This document presents the first intergovernmental standard on artificial intelligence. It aims to foster innovation and trust in AI by promoting responsible stewardship as well as human rights and democratic values. The document presents policy recommendations which include the “OECD Principles on AI”: (1) inclusive growth, sustainable development, and well-being; (2) human-centered values and fairness; (3) transparency and explainability; (4) robustness, security, and safety; and (5) accountability.
  • O’Neill, O. (2002).* A question of trust: The BBC Reith lectures 2002. Cambridge University Press.
    • This series of lectures explores whether modern democratic society’s debilitating “crisis of trust” can be solved by making people and institutions more accountable. Among other subjects, the lectures investigate whether the complex systems behind customary approaches to accountability improve or actually damage trust.
  • O’Reilly, T. (2017).* WTF?: What’s the future and why it’s up to us. Harper Business.
    • This book argues that humans, and not machines, control the ultimate outcomes of technological progress. According to the book, current concerns about AI are misplaced because they focus on futuristic hypotheticals instead of the currently pressing—and crucially, familiar—problems that perverse market incentives drive tech companies to instigate. For instance, the book contemplates how markets incentivize corporations to use technology for cost-cutting efficiency instead of meaningful innovation.
  • Pasquale, F. (2020). New laws of robotics: Defending human expertise in the age of AI. Harvard University Press.
    • Recalling Isaac Asimov’s Three Laws of Robotics, this book proposes four new laws for governing AI. First, AI should complement professionals, not replace them. Second, AI should not counterfeit humanity. Third, AI should not intensify zero-sum arms races. Fourth, an AI must always indicate the identity of its creator(s), controller(s), and owner(s). The book presents examples and case studies in healthcare, education, media, and other domains to support these new laws for the governance of AI.
  • Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x
    • This article contends that the current trend of attempting to explain the behavior and decisions of black box, meaning opaque, machine learning models is deeply flawed and potentially harmful. The article supports this contention by drawing on examples from healthcare, criminal justice, and computer vision, and proceeds to offer an alternative approach: building models that are not opaque, but inherently interpretable.
  • Santoni de Sio, F., & van den Hoven, J. (2018).* Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI, 5, 15. https://doi.org/10.3389/frobt.2018.00015
    • This article argues that there are two necessary conditions to implement “meaningful human control” over an autonomous system: (1) a “tracking” condition, under which the system must be responsive to the moral reasoning of its human designers and deployers and to morally relevant facts in its environment; and (2) a “tracing” condition, under which the system’s actions must always be attributable to at least one of its human designers or operators. The article notes that the principle of meaningful human control (and human moral responsibility) has gained traction as a solution to the “responsibility gap” created by autonomous systems.
  • Scherer, M. U. (2015). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology, 29(2), 353–400.
    • This article argues that although artificial intelligence presents both conceptual and practical challenges to the legal system, it can be effectively regulated using a legal framework which imposes limited or strict tort liability on manufacturers and operators based on their passage (or not) of an AI certification process. The article explores the public risks that AI poses, the regulatory challenges it raises, the competencies of government institutions in managing those risks, and the possibility of regulating AI using differential tort liability. 
  • Shanahan, M. (2015).* The technological singularity. MIT Press.
    • This book explores the idea and implications of the “singularity”: the hypothetical event in which humans will be overtaken by artificial intelligence or enhanced biological intelligence. The book imagines and interrogates a range of possible scenarios for the event, including the possibility of superintelligent machines which challenge the ordinary concepts of personhood, responsibility, rights, and identity.
  • Sipser, M. (2006).* Introduction to the theory of computation (2nd ed.). Thomson Course Technology.
    • This textbook provides a comprehensive and approachable introduction to the theory of computation. It conveys the fundamental mathematical properties of computer hardware, software, and applications with a blend of practical, philosophical, and mathematical discussion.
  • Wachter, S., et al. (2017). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005
    • This article argues that the European General Data Protection Regulation (GDPR) does not—contrary to popular interpretation—afford a “right to explanation” of automated decision-making, and that its regulatory force is therefore diminished. According to the article, the defect is attributable to the legislation’s imprecise language and lack of well-defined rights and safeguards. The article recommends a series of specific legislative steps to improve the GDPR’s adequacy in this area.
  • Wachter, S., et al. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31(2), 841–888.
    • This article argues that many of the significant limitations of algorithmic interpretability and accountability can be overcome by pursuing explanations which help data subjects act on, instead of understanding, automated decisions. The article proposes three aims for explanations which serve this purpose: (1) to convey the rationale of the decision, (2) to provide grounds to contest the decision, and (3) to suggest viable steps to achieving a more favorable future decision. The article asserts that counterfactuals are an ideal means of explaining automated decisions because they satisfy these aims.

Chapter 2. The Ethics of the Ethics of AI (Thomas M. Powers and Jean-Gabriel Ganascia)

  • Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine ethics. Cambridge University Press. https://doi.org/10.1017/CBO9780511978036
    • This edited volume presents essays which consider, among other subjects: why it is necessary to implement ethical capacities in autonomous machines, what is required to implement them, potential approaches to implementing them, as well as philosophical and practical challenges to the study of machine ethics.
  • Arkin, R. C. (2009).* Governing lethal behavior in autonomous robots. Chapman & Hall/CRC Press.
    • This book considers how to develop autonomous robots which use lethal force ethically. Contemplating the possibility of robots being more humane than humans on the battlefield, the author examines the philosophical basis, motivation, and theory of ethical control systems in robots, and presents related design recommendations. 
  • Awad, E., et al. (2018).* The Moral Machine experiment. Nature, 563(7729), 59–64. https://doi.org/10.1038/s41586-018-0637-6
    • This article describes the results of deploying an online experimental platform, the Moral Machine, to generate a large global dataset aggregating real human responses to the moral dilemmas faced by autonomous vehicles. It presents findings on global and regional moral preferences, as well as findings on demographic and culture-dependent variations in moral preferences. The authors discuss how these findings can contribute to developing global, socially acceptable principles for machine ethics.
  • Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
    • This book argues that if machines surpass humans in general intelligence, then superintelligent machines could replace humans as the dominant lifeform on Earth. The book imagines various paths along which this event transpires and considers how humans could anticipate and manage the existential threat it poses.
  • Brey, P. A. E. (2012).* Anticipatory ethics for emerging technologies. NanoEthics, 6(1), 1–13. https://doi.org/10.1007/s11569-012-0141-7
    • This article presents an original approach, “anticipatory technology ethics” (ATE), to the ethics of emerging technology. The article evaluates alternative approaches and formulates ATE in their context. The article argues that uncertainty is a central obstacle to the ethical analysis of emerging technology, and therefore that forecasting- and prediction-oriented approaches are necessary to reach useful ethical conclusions about emerging technology.
  • Brundage, M. (2014). Limitations and risks of machine ethics. Journal of Experimental & Theoretical Artificial Intelligence, 26(3), 355–372. https://doi.org/10.1080/0952813X.2014.895108
    • This article argues that “machine ethics” remains inadequate for achieving its intended social outcomes. The author contends that several inherent limitations prevent machine ethics from guaranteeing ethical machine behavior, including the computational limits of AI and the nature of ethical accountability and consequences in complex real-world settings. The article concludes that even if the technical challenges of machine ethics were resolved, the concept would likely remain inadequate.
  • Cave, S., et al. (2019). Motivations and risks of machine ethics. Proceedings of the IEEE, 107(3), 562–574. https://doi.org/10.1109/JPROC.2018.2865996
    • This article surveys reasons for and against pursuing machine ethics, here understood as research aiming to build “ethical machines.” Clarifying some of the philosophical issues surrounding the field and its goals, the authors ask several foundational questions about the opportunities and risks that machine ethics presents. For example, under what conditions is a given moral reasoning system likely to enhance the ethical alignment of machines and, more importantly, under what conditions are such systems likely to fail? How might machines deal adequately with value pluralism, especially in cases in which a single, definite answer is inappropriate? If conditions exist that would justify granting a form of moral status (e.g., moral patiency) to (suitably advanced) machines, how might automated ethical reasoning threaten to undermine human moral responsibility?
  • Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
    • This book characterizes AI as a technology of extraction, from the energy and minerals needed to build and sustain its infrastructure, to the exploited workers behind “automated” services, to the data AI collects from its human users. The author assesses AI in terms of a web of power relations that are dynamically reshaping how and why the questions raised by AI ethics are prioritized.
  • Dehaene, S., et al. (2017).* What is consciousness, and could machines have it? Science, 358(6362), 486–492. https://doi.org/10.1126/science.aan8871
    • This article argues that despite recent advances in artificial intelligence, current machines predominantly perform computations that reflect basic unconscious processing (“C0”) in the human brain. The article contends that the standard for synthetic consciousness must be the human brain, and that since machines do not perform computations comparable to conscious human processing (“C1” and “C2”), they cannot be called conscious.
  • Dennett, D. C. (1987).* The intentional stance. MIT Press.
    • This book argues that entities understand and anticipate one another’s behavior by adopting a predictive strategy of interpretation—the “intentional stance”—that treats the entity under examination as if it were a rational agent which makes choices based on its beliefs and desires. According to this argument, entities which adopt the intentional stance reason deductively from hypotheses about their subject’s beliefs and desires to conclude what they ought to decide in a given situation and from there predict what they will actually do in that situation.
  • Etzioni, A., & Etzioni, O. (2017).* Incorporating ethics into artificial intelligence. The Journal of Ethics, 21(4), 403–418. https://doi.org/10.1007/s10892-017-9252-2
    • This article argues that it is unnecessary to confer moral autonomy on artificially intelligent machines because we can readily guarantee ethical behavior from them by programming them with the existing instructions of law and their owners’ individual moral preferences. According to the article, many of the moral decisions facing AI machines are not discretionary and therefore easily automated because they are dictated by law. In cases where the decisions are discretionary, the article proposes that AI machines “read” and adhere to their owner’s moral preferences.
  • Greene, D., et al. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. Hawaii International Conference on System Sciences. https://doi.org/10.24251/HICSS.2019.258
    • Using frame analysis to examine recent high-profile value statements endorsing ethical design for artificial intelligence and machine learning, this conference paper draws two broad conclusions. The first is that these value statements assume a deterministic vision of artificial intelligence/machine learning (AI/ML), the ethics of which are best addressed through technical and design expertise; accordingly, there is no meaningful indication in these statements that AI/ML can be limited or constrained. The second is that, while the ethical design parameters suggested by these statements echo the processual elements and contextual framing of critical methodologies in science and technology studies (STS), the statements lack this critical scholarship’s explicit focus on normative ends devoted to social justice or equitable human flourishing. Rather, the “moral background” of these statements appears closer to a version of conventional business ethics than to the more radical traditions of social and political justice active today.
  • Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics. MIT Press.
    • Focusing on the question of whether machines warrant moral consideration, this book examines what selected decision criteria, categories, and framings reveal about the embedded intentions of moral status ascription. Arguments that consider machines as potential moral agents perform acts of exclusion or inclusion in attributing or denying moral agency or moral patiency. Such acts, the author states, involve assumptions of anthropocentricity that effectively designate the machine as an instrument of human use. This book presents a philosophical, Levinas-infused approach toward the construction of machine ethics and its associated categories.
  • Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8
    • This paper’s overview of the field of AI ethics points to the ongoing separation between the abstract values of ethics and the technical discourses of actual implementation. This disciplinary gap, the author observes, characterizes a field that ultimately lacks strong reinforcement mechanisms; AI ethics in some settings is pursued as an aspect of a marketing strategy rather than as a fundamental design principle. The author states that it is necessary to build tangible bridges between abstract values and technical implementations. However, a focus on technological phenomena should not dilute or preclude a focus on genuinely social aspects and on practitioner self-responsibility.
  • Hoffmann, C. H., & Hahn, B. (2020). Decentered ethics in the machine era and guidance for AI regulation. AI and Society, 35(3), 635–644. https://doi.org/10.1007/s00146-019-00920-z
    • This paper asserts that while a large number of AI ethics guidelines appear to propose concrete ideas, few proposals are philosophically sound. Guidelines drafted by government bodies, nonprofit communications teams, or marketing-focused departments may not have fully considered their philosophical or practical implications. To ground policy steps that take questions of AI ethics and AI moral status into account, the authors pursue a number of broad questions, including: What are ethical AI systems? What is the moral status of AI? To what extent is machine ethics necessary? What are the implications for AI regulation? Drawing on selected cases, the paper is meant to serve as a point of departure for the development of philosophically informed policies.
  • Horty, J. F. (2001).* Agency and deontic logic. Oxford University Press.
    • This book develops deontic logic—that is, the logic of ethical concepts like obligation and permission—against the background of a formal theory of agency. It rejects the common assumption that what an agent ought to do is the same as what it ought to be that the agent does. By drawing on elements of decision theory, the book presents an alternative and novel account of what agents and groups of agents ought to do under various conditions and over extended periods of time.
  • Jobin, A., et al. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399. https://doi.org/10.1038/s42256-019-0088-2
    • Conducting a review of recently issued grey literature and related soft-law documents, the authors map and analyze a current corpus of principles and guidelines on ethical AI. A global convergence emerges around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility, and privacy). However, the authors state, there is substantive divergence regarding how these principles are interpreted, why they are deemed important, which issues, domains or actors they pertain to, and how they should be implemented. This paper’s findings highlight the importance of integrating guideline-development efforts with both substantive ethical analysis and adequate implementation strategies.
  • Kurzweil, R. (2006).* The singularity is near: When humans transcend biology. Penguin Books.
    • This book envisions an event, the “singularity,” in which humans merge with machines, and portrays what life might be like afterwards. It speculates that, by overcoming biological limitations, the combination of human and machine abilities will solve exigent problems like the inevitability of death, environmental degradation, and world hunger. The book goes further to consider the broader social and philosophical consequences of this paradigm shift. 
  • Lin, P., et al. (Eds.). (2017).* Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford University Press.
    • This edited volume, aimed at academic audiences, policymakers, and the broader public, presents a global and interdisciplinary collection of essays on emerging issues in the field of “robot ethics,” which studies the effects of robotics on ethics, law, and policy. The volume is organized into four parts. The first concerns moral and legal responsibility and questions that arise in programming under moral uncertainty. The second addresses anthropomorphizing design and related issues of trust and deception within human-robot interactions. The third concerns applications ranging from love to war. The fourth speculates on the possible implications and dangers of artificial beings that exhibit superhuman mental capacities.
  • Lokhorst, G. J. C. (2011). Computational meta-ethics: Towards the meta-ethical robot. Minds & Machines, 21, 261–274. https://doi.org/10.1007/s11023-011-9229-z
    • Drawing from the same tools as have been used in computational metaphysics, this paper presents a proof of concept for a meta-ethical robot. This proof of concept serves as an opening to other pathways and themes that are mentioned as potential next steps in building a robot with the capacity to reason about its own reasoning. A robot with an extensive set of rules at its disposal can operate with great success within complex pattern matching tasks but will not exhibit meta-ethical capacities that the author argues could define the next phase of ethical robots. 
  • Mabaso, B. A. (2020). Computationally rational agents can be moral agents. Ethics and Information Technology. https://doi.org/10.1007/s10676-020-09527-1
    • This article advances an argument and model for artificial moral agency based on a framework of computational rationality. Asserting that the capacities required for artificial moral agency, as well as the aspects of functional consciousness that underpin them, are computable, the author proposes a conceptual model for a bounded-optimal, computationally rational artificial moral agent. The author states that computational rationality can be an integrative element that combines the scientific and philosophical elements of artificial moral agency in a logically consistent way.
  • Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507. https://doi.org/10.1038/s42256-019-0114-4
    • This article argues that the principled approach upon which AI ethics has converged is unlikely to succeed like its close analogue in medical ethics because, compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. The article cautions against validating any newly emerging consensus around principles of AI ethics.
  • Powers, T. M. (2006).* Prospects for a Kantian machine. IEEE Intelligent Systems, 21(4), 46–51. https://doi.org/10.1109/MIS.2006.77
    • This article discusses the potential of creating ethical machines based on rule-based ethical theories like Kantian ethics, with a focus on the challenges that this approach poses. According to the article, many view rule-based ethical theories as promising for machine ethics because their judgements exhibit a computational structure that might permit their computerization. The article explores and evaluates different approaches by which a rule-based ethical theory could be used as the basis for an ethical machine.
  • Segun, S. T. (2021). From machine ethics to computational ethics. AI & Society, 36, 263–276. https://doi.org/10.1007/s00146-020-01010-1
    • This paper argues that the appellation ‘machine ethics’ does not sufficiently capture the project of embedding ethics into the computational environment. The author analyzes the thematic distinction between robot ethics, machine ethics, and computational ethics, and offers a four-pronged justification as to why computational ethics presents a prima facie description of the project of embedding ethics into artificial intelligence systems. In making this case, the author categorizes attempts to program ethics into AI systems, attempts to build an artificial moral agent, and endeavors to simulate consciousness in machines as belonging under the purview of computational ethics.
  • Tonkens, R. (2009). A challenge for machine ethics. Minds and Machines, 19(3), 421–438. https://doi.org/10.1007/s11023-009-9159-1
    • Arguing that the development of Kantian artificial moral machines is itself anti-Kantian, this paper asserts that machine ethicists must look elsewhere for an ethic to implement in their machines. In making his case, the author addresses three main ideas of Kantian ethics: (1) the foundations of moral agency; (2) the role of the categorical imperative in moral decision making; and (3) the concept of duty. The author explains how Kantian machines would not possess free will and would arguably not be viewed as ends in themselves; this creates a volitional inconsistency. In order to be treated as an end in itself, a Kantian machine would need to possess dignity, be deserving of respect by all humans (i.e., all other moral agents), and be valued as an equal member of the moral community. Since these conditions are not met, the proposed Kantian ethic is problematic in the machine ethics case.
  • Wallach, W., & Allen, C. (2009).* Moral machines: Teaching robots right from wrong. Oxford University Press.
    • This book argues that as robots are given more and more responsibility, there is a corresponding imperative to make them capable of morally aware decision-making. It goes further to assert that while achieving full moral agency for machines is a distant goal, the imperative is already urgent enough to require measures which introduce basic moral considerations into robotic decision-making.
  • Wallach, W., et al. (2008).* Machine morality: Bottom-up and top-down approaches for modelling human moral faculties. AI & Society, 22(4), 565–582. https://doi.org/10.1007/s00146-007-0099-0
    • This article outlines the values and limitations of bottom-up and top-down approaches to constructing morally intelligent artificial agents. According to the article, bottom-up approaches are characterized by the combination of subsystems into a complex assemblage which models behavior that is consistent with ethical principles. By contrast, the article explains that top-down approaches involve the direct computerization of ethical principles as prescriptive rules.
  • Weinberger, D. (2011).* Too big to know: Rethinking knowledge now that the facts aren’t the facts, experts are everywhere, and the smartest person in the room is the room. Basic Books.
    • This book argues that Internet Era shifts in the production, exchange, and storage of knowledge—far from signalling a systemic collapse—present a fundamental epistemic breakthrough. The book contends that although the authority of ordinary facts, books, and experts has depreciated in the transition, “networked knowledge” permits knowledge-seekers to attain better understanding and make more informed decisions. 

Chapter 3. Ethical Issues in Our Relationship with Artificial Entities (Judith Donath)

  • Bankins, S., & Formosa, P. (2019).* When AI meets PC: Exploring the implications of workplace social robots and a human-robot psychological contract. European Journal of Work and Organizational Psychology, 29(2), 215–229. https://doi.org/10.1080/1359432x.2019.1620328
    • This article draws attention to the lack of research on employees’ increasing engagement with, and the psychological contracts they form with, emotionally sophisticated AI technologies. Through a thought experiment, the authors examine the potential impacts of psychological contracts between employees and AI technologies, including unequal receipt-benefit exchanges and the ‘deskilling’ of human employees.
  • Bisconti Lucidi, P., & Nardi, D. (2018). Companion robots: The hallucinatory danger of human-robot interactions. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 17–22). Association for Computing Machinery. https://doi.org/10.1145/3278721.3278741
    • Focusing mainly on robots caring for the elderly, this paper analyzes ethical concerns raised by the rise of companion robots to distinguish which concerns are directly ascribable to robotics, and which are instead pre-existent. The paper argues that one concern, the “deception objection,” namely the ethical unacceptability of deceiving the user about the simulated nature of the robot’s behavior, is inconsistently formulated. The paper’s central argument is that the main concern about companion robots is the simulation of a human-like interaction in the absence of an autonomous robotic horizon of meaning.
  • Bourne, C. (2019). AI cheerleaders: Public relations, neoliberalism and artificial intelligence. Public Relations Inquiry, 8(2), 109–125.
    • The article combines public relations (PR) theory, communications theory and political economy to consider the changing shape of neoliberal capitalism, as AI becomes naturalized as “common sense” and as a “public good.” The article explores how PR supports AI discourses, including promoting AI in national competitiveness and promoting “friendly” AI to consumers, while promoting Internet inequalities.
  • Broom, D. M. (2014).* Sentience and animal welfare. Centre for Agriculture and Biosciences International.      
    • This book focuses on sentience—the ability to feel, perceive, and experience—in order to answer questions raised by the animal welfare debate, such as whether animals experience suffering in life and death. The book defines aspects of sentience such as consciousness, memory, and emotions, and discusses brain complexity in detail. Looking at sentience from a developmental perspective, it analyses when during an individual’s growth sentience can be said to appear, and uses evidence from a range of studies investigating embryos, fetuses, and young animals to form an overview of the subject.
  • Calo, R. (2015).* Robotics and the lessons of cyberlaw. California Law Review, 103(3), 513–563.
    • This article examines the implications of the introduction of robotics for cyberlaw and policy. The article argues that robotics will prove exceptional in the sense of occasioning systemic changes to law, institutions, and the legal academy. However, the article also argues that many core insights and methods of cyberlaw will prove crucial in integrating robotics.
  • Carpenter, J. (2013). Just doesn’t look right: Exploring the impact of humanoid robot integration into explosive ordnance disposal teams. In R. Luppicini (Ed.), Handbook of research on technoself: Identity in a technological society (pp. 609–636). IGI Global.
    • This chapter analyzes the potential short- and long-term outcomes for Explosive Ordnance Disposal (EOD) specialists, who work closely with anthropomorphic robots on a daily basis. Carpenter argues that even if these robots are designed to be more human-like, this design may have ethical and emotional consequences for EOD specialists.
  • Coeckelbergh, M. (2010). Artificial companions: Empathy and vulnerability mirroring in human-robot relations. Studies in Ethics, Law, and Technology, 4(3), 1–17.
    • This article argues that the possibility and future of robots as companions depends, among other things, on the robots’ capacity to be a recipient of human empathy, and that one necessary condition for this to happen is that the robots mirror human vulnerabilities. The article considers the objection that vulnerability mirroring raises the ethical issue of deception. It refutes this objection by demonstrating that the underlying assumptions to the objection cannot be easily justified, given the importance of appearance in social relations, problems with the concept of deception, and contemporary technologies that question the artificial-natural distinction.
  • Chesterman, S. (2020). Artificial intelligence and the limits of legal personality. International and Comparative Law Quarterly, 69(4), 819–844. https://doi.org/10.1017/S0020589320000366
    • Many researchers argue that, given their rapid advancement, AI systems should be entitled to some form of legal status comparable to that of humans. The author finds that while many legal systems are able to create such statuses, there is a lack of theoretical and empirical evidence to support the claim that AI systems should be entitled to them.
  • Coghlan, S., et al. (2019). Could social robots make us kinder or crueller to humans and animals? International Journal of Social Robotics, 11(5), 741–751.
    • Concentrating on robot animals, this paper examines the strengths and weaknesses of the idea of a causal link between cruelty and kindness toward artificial beings and toward living beings, human or animal. The article finds that there is some basis for thinking that social robots may causally affect virtue, especially in terms of the moral development of children and responses to nonhuman animals.
  • Colledanchise, M. (2021). A new paradigm of threats in robotics behaviors. arXiv preprint arXiv:2103.13268
    • This article assesses the various security threats that arise from advances in robotics. Colledanchise emphasizes the potential for humans to use robots for malicious ends, to tamper with them, or to manipulate them into deliberately doing harmful or illegal things. The article finds that these risks can be mitigated through task planning, skilled operators, and rescue squads.
  • Damiano, L., & Dumouchel, P. (2018). Anthropomorphism in human–robot co-evolution. Frontiers in Psychology, 9, 468. https://doi.org/10.3389/fpsyg.2018.00468
    • This article proposes a theoretical perspective that characterizes anthropomorphism as a basic mechanism of interaction and rebuts arguments that condemn “anthropomorphism-based” social robots a priori. To address the relevant ethical issues, this article promotes an experimentally based ethical approach to social robotics, titled “synthetic ethics,” which aims at allowing humans to use social robots for two main goals: self-knowledge and moral growth.
  • DePaulo, B. M., et al. (1996).* Lying in everyday life. Journal of Personality and Social Psychology, 70(5), 979–995.
    • This article compares two diary studies of lying, where 77 college students reported telling two lies a day, and 70 community members told one. Consistent with the view of lying as an everyday social interaction process, participants said that they did not regard their lies as serious and did not plan them much or worry about being caught. Still, social interactions in which lies were told were less pleasant and less intimate than those in which no lies were told.
  • Donath, J. (2019).* The robot dog fetches for whom? In Z. Papacharissi (Ed.), A networked self and human augmentics, artificial intelligence, sentience (pp. 10–24). Routledge.
    • This article examines the landscape of social robots, including robot dogs, and their effect on human empathy and relationships. Particularly, this article questions whom robot companions will truly serve in a future where they are ubiquitous.
  • Eimler, S. C., et al. (2010). Prerequisites for human-agent- and human-robot interaction: Towards an integrated theory. University of Duisburg-Essen.
    • This article asserts that examining the long-term effects on relationships between humans and robots is extremely important. The authors argue that this type of study can be done, but only if the complexities of human-human interactions are first understood. The authors offer a robust framework to examine these interactions, including nonverbal and verbal behavior that can be applied to human-robot relationships. 
  • Godfrey-Smith, P. (2016).* Other minds: The octopus and the evolution of intelligent life. William Collins.    
    • This book explores the evolution and nature of consciousness, explaining that complex active bodies that enable and require a measure of intelligence have evolved three times, in arthropods, cephalopods, and vertebrates. The book reflects on the nature of cephalopod intelligence, in particular, constrained by their short lifespan, and embodied in large part in their partly autonomous arms which contain more nerve cells than their brains.
  • Johnson, D. G., & Verdicchio, M. (2018). Why robots should not be treated like animals. Ethics and Information Technology, 20(4), 291–301.
    • The animal-robot analogy is one of the most commonly used in attempting to frame interactions between humans and robots, and it also tends to push in the direction of blurring the distinction between humans and machines. This article argues that, despite some shared characteristics, analogies with animals are misleading when it comes to thinking about the moral status of humanoid robots, legal liability, and the impact of treatment of humanoid robots on how humans treat one another.
  • Kaplan, F. (2004).* Who is afraid of the humanoid? Investigating cultural differences in the acceptance of robots. International Journal of Humanoid Robotics, 1(3), 465–480.
    • This article presents a preliminary exploration of several aspects of the Japanese culture and a survey of the most important myths and novels involving artificial beings in Western literature. The article examines particular cultural features that may account for contemporary differences in our behavior towards humanoids.
  • Kappas, A., et al. (2020). Communicating with robots: What we do wrong and what we do right in artificial social intelligence, and what we need to do better. In R. J. Sternberg & A. Kostić (Eds.), Social intelligence and nonverbal communication (pp. 233–254). Palgrave Macmillan.
    • This chapter discusses the challenges and pitfalls of human-machine interaction with a view to (artificial) social intelligence at a time of challenging interdisciplinary research. The chapter presents concrete examples of such research and points out lacunae in empirical data.
  • Nyholm, S., & Smids, J. (2019). Can a robot be a good colleague? Science and Engineering Ethics. https://doi.org/10.1007/s11948-019-00172-6
    • This article compares the question of whether robots can be good colleagues to the more widely discussed questions of whether robots can be our friends or romantic partners. The paper argues that, on a behavioral level, robots can fulfil many of the criteria typically associated with being a good colleague. The paper further asserts that, in comparison with the more demanding ideals of being a good friend or a good romantic partner, it is comparatively easier for a robot to live up to the ideal of being a good colleague.
  • Remmers, P. (2019). The ethical significance of human likeness in robotics and AI. Ethics in Progress, 10(2), 52–67.
    • This article argues that there are no serious ethical issues involved in the theoretical aspects of technological human likeness, suggesting instead that although human likeness may not be ethically significant on the philosophical and conceptual levels, strategies to use anthropomorphism in the technological design of human-machine collaborations are ethically significant. This is because artificial agents are specifically designed to be treated in ways we usually treat humans.
  • Remmers, P. (2021). The artificial nature of social robots: A phenomenological interpretation of two conflicting tendencies in human-robot interaction. In M. Nørskov, J. Seibt, & O. S. Quick (Eds.), Culturally Sustainable Social Robotics: Proceedings of Robophilosophy 2020 (pp. 78–85). IOS Press.
    • This article argues that the effects of anthropomorphism or zoomorphism (designing robots as subjects of agency) motivate two opposing tendencies within the ethics of human-robot relations: a ‘rational’ tendency, which denounces anthropomorphism as mere illusion, and a ‘visionary’ tendency, which takes seriously the relational reality between humans and robots. Remmers claims this tension cannot be mediated through an analogy between the treatment of robots and the perception of objects based on a dominant theory of image perception.
  • Renzullo, D. (2019). Anthropomorphized AI as capitalist agents: The price we pay for familiarity. Montreal AI Ethics Institute. https://montrealethics.ai/anthropomorphized-ai-as-capitalist-agents-the-price-we-pay-for-familiarity/
    • This report argues that the anthropomorphic design of AI technology is used as a channel to perpetuate capitalism through a process of social acclimatization. AI is designed to elicit attachment so that the technology becomes integrated and relied upon in everyday life. 
  • Singer, P. (2011).* Practical ethics. Cambridge University Press.    
    • This book is a classic introduction to the study of practical ethics. The focus of the book is the application of ethics to difficult and controversial social questions: equality and discrimination by race, sex, ability, or species; abortion, euthanasia, and embryo experimentation; the moral status of animals; political violence and civil disobedience; overseas aid and the obligation to assist others; responsibility for the environment; the treatment of refugees. The book is structured to show how contemporary controversies often have deep philosophical roots, and presents a unique ethical theory that can be applied consistently to all the practical cases.
  • Sheridan, T. B. (2020). A review of recent research in social robotics. Current Opinion in Psychology. https://doi.org/10.1016/j.copsyc.2020.01.003
    • This review finds that both because of its newness and because of its narrower psychological rather than technological emphasis, research in social robotics tends currently to be concentrated in a single journal and single annual conference. This review categorizes such research into three areas: (1) Affect, Personality, and Adaptation; (2) Sensing and Control for Action; and (3) Assistance to the Elderly and Handicapped.
  • Turing, A. (1950).* Computing machinery and intelligence. Mind, 59(236), 433–460.
    • This is a seminal paper on the topic of artificial intelligence, the first to introduce Alan Turing’s concept of what is now known as the Turing Test to the general public. The paper proposes the “imitation game” as a practical substitute for the question “Can machines think?” and responds to a series of objections to the possibility of machine intelligence.
  • Turkle, S. (2007).* Authenticity in the age of digital companions. Interaction Studies, 8(3), 501–517.
    • This paper examines watershed moments in the history of human–machine interaction, focusing on the pertinence of relational artifacts to our collective perception of aliveness, life’s purposes, and the implications of relational artifacts for relationships. The paper argues that the exploration of human–robot encounters leads to questions about the morality of creating believable digital companions that are evocative but not authentic.
  • Vanman, E. J., & Kappas, A. (2019). “Danger, Will Robinson!” The challenges of social robots for intergroup relations. Social and Personality Psychology Compass, 13(8). https://doi.org/10.1111/spc3.12489
    • This article explores the paradox created by human-like robots, as they simultaneously generate greater empathy than traditional robots while also eliciting greater suspicion, particularly about their ability to deceive. Discussing these findings from an intergroup relations perspective, this article proposes three research questions that the authors believe social psychologists are ideally suited to address.
  • Wallkötter, S., et al. (2020). A robot by any other frame: Framing and behaviour influence mind perception in virtual but not real-world environments. In T. Belpaeme & J. Young (Eds.), Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 609-618). Association for Computing Machinery.
    • In a series of three experiments on mind perception in human-robot interaction, the authors identify two factors that could influence mind perception and moral concern for robots: how the robot is introduced (framing) and how the robot acts (social behavior). They find that framing and behavior each independently influence participants’ mind perception. However, when both variables were combined in a real-world experiment, these effects failed to replicate, pointing to a third factor: the online versus real-world nature of the interaction.
  • Weizenbaum, J. (1967).* Contextual understanding by computers. Communications of the ACM, 10(8), 474-480.
    • This paper discusses a further development of a computer program, ELIZA, capable of conversing in natural language, stressing the importance of context to both human and machine understanding. The paper argues that the adequacy of the level of understanding achieved in a particular conversation depends on the purpose of that conversation, and that absolute understanding on the part of either humans or machines is impossible.
  • Weizenbaum, J. (1976).* Computer power and human reason. W. H. Freeman and Company.
    • This book examines the sources of the computer’s powers and offers evaluative explorations of what computers can do, cannot do, and should not be employed to do. The author argues that while artificial intelligence may be possible, we should never allow computers to make important decisions, because computers will always lack human qualities, such as compassion and wisdom, that are necessary for genuine choice to take place.

II. Frameworks & Modes

Chapter 4. AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing (Karen Yeung, Andrew Howes and Ganna Pogrebna)⬆︎

  • Adams, R., & Loideáin, N. N. (2019). Addressing indirect discrimination and gender stereotypes in AI virtual personal assistants: The role of international human rights law. Cambridge International Law Journal, 8(2), 241-257. https://doi.org/10.4337/cilj.2019.02.04
    • This article explores how the obligation to protect women from discrimination under international human rights law applies to AI virtual assistants. In particular, the article focuses on gender stereotyping associated with AI virtual assistants, including systems that use female names, voices, and characters.
  • Algorithm Watch. (2019).* AI Ethics Guidelines Global Inventory. https://algorithmwatch.org/en/project/ai-ethics-guidelines-global-inventory/
    • This is a global inventory of ethical guidelines for Artificial Intelligence (AI). The authors find that the absence of internal enforcement or governance mechanisms shows that many companies are merely “virtue signaling” with their guidelines. However, others can still try to hold the companies to account, be it the companies’ own employees, outside institutions like advocacy organizations, or academics.
  • Bietti, E. (2020). From ethics washing to ethics bashing: A view on tech ethics from within moral philosophy. In M. Hildebrandt & C. Castillo (Eds.), Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 210-219). Association for Computing Machinery.
    • This article addresses two related phenomena concerning AI ethics: (a) “ethics washing,” i.e., the self-serving exploitation of ethics discourse by technology companies; and (b) “ethics bashing,” i.e., the criticism and trivialization of ethical discourse by social scientists. The article rejects both of these approaches and contends that ethics and moral philosophy have important roles to play in shaping AI policy.
  • Burkell, J., & Bailey, J. (2018). Unlawful distinctions? Canadian human rights law and algorithmic bias. Canadian Yearbook of Human Rights, 2, 217-230.
    • This article examines the relationship between algorithmic discrimination and Canadian human rights law. Highlighting the potential discriminatory impact of AI in employment contexts, the provision of public services, and elsewhere, the paper illustrates how harms arising primarily from statistical correlations pose challenges for the application of human rights law.
  • Casanovas, P., et al. (2019). The middle-out approach: Assessing models of legal governance in data protection, artificial intelligence and the Web of Data. The Theory and Practice of Legislation, 7(1), 1-25. 
    • This paper focuses on what lies between top-down and bottom-up approaches to governance and regulation, namely the middle-out interface typically associated with forms of co-regulation. From a methodological viewpoint, the paper examines the middle-out approach in order to shed light on three kinds of issues: (i) how to strike a balance between multiple regulatory systems; (ii) how to align primary and secondary rules of the law; and (iii) how to properly coordinate bottom-up and top-down policy choices. The paper argues that the increasing complexity of technological regulation recommends new models of governance that revolve around this middle-out analytical ground.
  • Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0080
    • This paper introduces the special issue ‘Governing artificial intelligence: ethical, legal and technical opportunities and challenges’. The issue addresses how AI can be designed and governed so as to be accountable, fair, and transparent, with eight authors presenting in-depth analyses of the ethical, legal-regulatory, and technical challenges posed by developing governance regimes for AI systems.
  • Council of Europe Consultative Committee on the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data. (2019).* Guidelines on artificial intelligence and data protection. https://rm.coe.int/guidelines-on-artificial-intelligence-and-data-protection/168091f9d8
    • These guidelines, created by the Council of Europe, provide a set of baseline measures that governments, AI developers, manufacturers, and service providers should follow to ensure that AI applications do not undermine the human dignity, human rights, and fundamental freedoms of every individual. These guidelines have a particular focus on the right to data protection.
  • Donahoe, E., & Metzger, M. (2019). Artificial intelligence and human rights. Journal of Democracy, 30(2), 115-126.
    • This article argues for a global governance framework to address the wide range of societal challenges associated with AI, including threats to privacy, information access, and the right to equal protection and nondiscrimination. Rather than working to develop new frameworks from scratch, the authors argue that the challenges associated with AI can best be confronted by drawing on the existing international human-rights framework.
  • Ghallab, M. (2019). Responsible AI: Requirements and challenges. AI Perspectives, 1(1), 1-7. 
    • This paper discusses the requirements and challenges for responsible AI with respect to two interdependent objectives: (1) how to foster research and development efforts toward socially beneficial applications, and (2) how to take into account and mitigate the human and social risks of AI systems.
  • Hildebrandt, M. (2015).* Smart technologies and the end(s) of law. Edward Elgar.
    • This book highlights how the pervasive employment of machine-learning technologies that inform so-called ‘data-driven agency’ threaten privacy, identity, autonomy, non-discrimination, due process and the presumption of innocence. The author argues that smart technologies undermine, reconfigure and overrule the ends of the law in a constitutional democracy, jeopardizing law as an instrument of justice, legal certainty and the public good. However, the author calls on lawyers, computer scientists and civil society not to reject smart technologies, arguing that further engaging with these technologies may help to reinvent the effective protection of the rule of law.
  • Hoffmann-Riem, W. (2020). Artificial intelligence as a challenge for law and regulation. In Regulating artificial intelligence (pp. 1-29). Springer.
    • This chapter of Regulating Artificial Intelligence explores the types of rules and regulations that are currently available to regulate AI, while emphasizing that it is not enough to trust that companies that use AI will adhere to ethical principles. Rather, supplementary legal rules are needed, as company self-regulation is insufficient to promote ethical use of AI. The chapter concludes by stressing the need for transnational agreements and institutions in this area.
  • Hopkins, A. (2012).* Explaining “Safety Case.” (Regulatory Institutions Network Working Paper 87). https://www.csb.gov/assets/1/7/workingpaper_87.pdf
    • This paper emphasizes features of safety case regimes that are sometimes taken for granted in the jurisdictions where they operate and sets out a model of what might be described as a mature safety case regime. There are five basic features of safety case regimes that are highlighted in this paper: a risk- or hazard-management framework, a requirement to make the case to the regulator, a competent and independent regulator, workforce involvement, and a general duty of care imposed on the operator. 
  • Jobin, A., et al. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. https://doi.org/10.1038/s42256-019-0088-2
    • This paper analyzes the content of a collection of 84 documents containing AI ethics principles and guidelines. The results of the analysis suggest a global convergence around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility, and privacy), yet divergence in the interpretation, importance, and implementation of these principles.
  • Kloza, D., et al. (2017).* Data protection impact assessments in the European Union: Complementing the new legal framework towards a more robust protection of individuals. Brussels Laboratory for Data Protection & Privacy Impact Assessments.
    • This policy brief provides recommendations for the European Union (EU) to complement the requirement for data protection impact assessment (DPIA), as set forth in the General Data Protection Regulation (GDPR), with a view of achieving a more robust protection of personal data. The policy brief attempts to draft a best practice for a generic type of impact assessment to remedy weak points in the DPIA requirement. The brief also provides background information on impact assessments as such: definition, historical overview, and their merits and drawbacks, and concludes by offering recommendations for complementing the DPIA requirement in the GDPR.
  • Latonero, M. (2018). Governing artificial intelligence: Upholding human rights & dignity. Data & Society. https://datasociety.net/library/governing-artificial-intelligence/
    • This report considers how human rights can guide the development of AI technologies and policy. The report frames the potential risks and harms of AI within human rights norms, including nondiscrimination, equality, political participation, privacy, freedom of expression, and accessibility. It also explores the intersection of AI and human rights among stakeholders in business, government, intergovernmental organizations, civil society, and academia.
  • Mantelero, A. (2018).* AI and data protection, challenges and possible remedies. Council of Europe. https://rm.coe.int/artificial-intelligence-and-data-protection-challenges-and-possible-re/168091f8a6
    • This report examines the current landscape of AI regulation and data protection, and argues that European regulatory leadership in the field of data protection should be extended to a value-oriented regulation of AI based on three precepts: a values-based approach (encompassing social and ethical values), risk assessment and management, and participation.
  • McGregor, L. (2018). Accountability for governance choices in artificial intelligence: Afterword to Eyal Benvenisti’s foreword. European Journal of International Law, 29(4), 1079-1085.
    • This paper argues that if the ‘culture of accountability’ is to adapt to the challenges posed by new and emerging technologies, the focus cannot only be technology-led. It further argues that a culture of accountability must also be interrogative of the governance choices that are made within organizations, particularly those vested with public functions at the international and national level. 
  • Molnar, P. (2019). Technology on the margins: AI and global migration management from a human rights perspective. Cambridge International Law Journal, 8(2), 305-330. https://doi.org/10.4337/cilj.2019.02.07
    • This article describes the ways in which AI deployed in migration contexts can violate human rights. The article contends that the lack of applicable regulation is deliberate, as states seek to use migration as a testing ground for high-risk technologies. In light of these observations, the article concludes that a global accountability framework is necessary to mitigate these harms.
  • Murray, D. (2020). Using human rights law to inform states’ decisions to deploy AI. AJIL Unbound, 114, 158-162. https://doi.org/10.1017/aju.2020.30
    • This essay explores the challenges involved in applying human rights law to states’ decisions to deploy AI technologies. Using the case study of live facial recognition, the essay identifies several steps that states should take when deciding whether or not to deploy AI tools in order to ensure compliance with human rights law and norms. 
  • Nemitz, P. (2018).* Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180089.
    • This paper describes four core elements of today’s digital power concentration which, taken together, threaten both democracy and functioning markets. It then recalls the experience of the largely lawless Internet, the relationship between technology and law as it developed in the Internet economy, and the experience with the GDPR, before turning to the key question for AI in democracy: which challenges of AI can safely and in good conscience be left to ethics, and which need to be addressed by enforceable rules carrying the legitimacy of the democratic process, that is, by law.
  • Raso, F. A., et al. (2018).* Artificial intelligence & human rights: Opportunities & risks. Berkman Klein Center for Internet & Society Research Publication. http://nrs.harvard.edu/urn-3:HUL.InstRepos:38021439
    • This report advances the emerging conversation on AI and human rights by evaluating the human rights impacts of six current uses of AI. The report’s framework recognizes that AI systems are not being deployed against a blank slate, but rather against the backdrop of social conditions that have complex pre-existing human rights impacts of their own.
  • Rieke, A., et al. (2018).* Public scrutiny of automated decisions: Early lessons and emerging methods. Upturn and Omidyar Network. https://www.omidyar.com/insights/public-scrutiny-automated-decisions-early-lessons-and-emerging-methods
    • This report maps out the landscape of public scrutiny of automated decision-making, both in terms of what civil society was or was not doing in this nascent sector and what laws and regulations were or were not in place to help regulate it. The report is based on extensive review of computer and social science literature, a broad array of real-world attempts to study automated systems, and dozens of conversations with global digital rights advocates, regulators, technologists, and industry representatives. 
  • Risse, M. (2019). Human rights and artificial intelligence: An urgently needed agenda. Human Rights Quarterly, 41(1), 1-16.
    • This article reviews short-, medium-, and long-term challenges that AI poses for human rights. Among the short-term challenges are ways in which technology engages nearly every right in the UDHR, as exemplified by effectively discriminatory algorithms. Medium-term challenges include changes in the nature of work that could call into question many people’s status as participants in society. In the long term, more speculatively, humans may have to live with machines that are intellectually, and possibly morally, superior.
  • Shackelford, S., et al. (2021). Should we trust a black box to safeguard human rights? A comparative analysis of AI governance. UCLA Journal of International Law and Foreign Affairs (forthcoming).
    • This article analyzes more than 40 AI strategy documents of national governments. The findings suggest that states’ AI practices are converging around several specific principles, including human-centered design and public benefit. The article contends that such convergence signals the possibility of deepening international engagement in developing and promoting AI policy.
  • Smuha, N. A. (2020). Beyond a human rights-based approach to AI governance: Promise, pitfalls, plea. http://dx.doi.org/10.2139/ssrn.3543112
    • This paper argues that, without elucidating the applicability and enforceability of human rights in the context of AI; adopting legal rules that concretize those rights where appropriate; enhancing existing enforcement mechanisms; and securing an underlying societal infrastructure that enables human rights in the first place, any human rights-based governance framework for AI risks falling short of its purpose.
  • Truby, J. (2020). Governing artificial intelligence to benefit the UN Sustainable Development Goals. Sustainable Development. https://doi.org/10.1002/sd.2048
    • This article proposes effective preemptive regulatory options to minimize scenarios of Artificial Intelligence (AI) damaging the U.N.’s Sustainable Development Goals. It explores internationally accepted principles of AI governance, and argues for their implementation as regulatory requirements governing AI developers and coders, with compliance verified through algorithmic auditing. The article argues that proactively predicting such problems can enable continued AI innovation through well‐designed regulations adhering to international principles. 
  • Vestby, A., & Vestby, J. (2019). Machine learning and the police: Asking the right questions. Policing: A Journal of Policy and Practice. https://doi.org/10.1093/police/paz035
    • The article argues that important issues concerning machine learning (ML) decision models can be unveiled without detailed knowledge of the learning algorithm, empowering non-ML experts and stakeholders in debates over whether, and how, to adopt such models, for example in predictive policing. Non-ML experts can, and should, review ML models. The authors provide a ‘toolbox’ of questions about three elements of a decision model that can be fruitfully scrutinized by non-ML experts: the learning data, the learning goal, and constructivism.
  • Yeung, K. (2018).* A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework. Council of Europe. https://ssrn.com/abstract=3286027
    • This report examines the implications of digital technologies for the concept of responsibility, investigating where responsibility should lie for their adverse consequences. The study explores (a) how human rights and fundamental freedoms protected under the European Convention on Human Rights may be adversely affected by the development of AI technologies and (b) how responsibility for those risks and consequences should be allocated. 

Chapter 5. The Incompatible Incentives of Private Sector AI (Tom Slee)⬆︎ 

  • Bakir, V., & McStay, A. (2018). Fake news and the economy of emotions: Problems, causes, solutions. Digital Journalism, 6(2), 154-175. https://doi.org/10.1080/21670811.2017.1345645
    • This article conducts a qualitative analysis of Facebook posts submitted by the Breitbart organization, supplemented by interviews with technologists, journalists, and firms during the 2017 South by Southwest event. It argues that fake news is a symptom of the rise of empathic media, or media designed to manipulate emotions, through algorithmic journalism. The authors recommend that the digital advertising industry be scrutinized for enabling misinformation through these techniques.
  • Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review 104(3), 671-732. https://dx.doi.org/10.2139/ssrn.2477899
    • This seminal article uses American antidiscrimination law to argue the importance of disparate impact doctrine when considering the effects of big data algorithms. It advocates for a paradigm shift in antidiscrimination law as the nature of these algorithms calls into question what “fairness” and “discrimination” mean in the digital age. The ideas conveyed in this article reflect the growing movement around fairness, accountability, and transparency in the machine learning community. 
  • Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. John Wiley & Sons.
    • This book examines the modern-day relevance of the “Jim Crow” laws that enforced racial segregation in the Southern United States. It argues that emerging technologies such as artificial intelligence can deepen inequities by “explicitly amplifying racial hierarchies,” even when they may seem neutral or benevolent at first glance. 
  • Blasimme, A., et al. (2019). Big data, precision medicine and private insurance: A delicate balancing act. Big Data & Society, 6(1). https://doi.org/10.1177%2F2053951719830111
    • Using national precision medicine initiatives as a case study, this article explores the tension between private insurers leveraging repositories of genetic and phenotypic data for economic gain and the utility of these databases as a public, scientific resource. Although the authors admit that information asymmetry between insurance companies and their policyholders still creates risks of reduced research participation, adverse selection, and discrimination, they argue that a governance model underpinned by trustworthiness, openness, and evidence can balance these competing interests.
  • Bowker, G. C., & Star, S. L. (2000).* Sorting things out: Classification and its consequences. MIT Press.
    • Classification, the process of grouping things according to shared qualities or characteristics, is also a foundational class of machine learning problems. This book examines how classification, as an information infrastructure, has shaped human society from social, moral, and political standpoints. The authors draw numerous examples from health and medicine (e.g., the International Classification of Diseases and the classification of viruses) but also dedicate a chapter to racial classification under Apartheid.
  • Bucher, T. (2018). If… Then: Algorithmic power and politics. Oxford University Press. http://dx.doi.org/10.1093/oso/9780190493028.001.0001
    • This book outlines how algorithms enter our social fabric and then act as political agents to “shape social and cultural life.” The author articulates her key contributions as: (1) offering a new ontology for algorithms, (2) identifying various forms of algorithm power and politics, and (3) providing a theoretical framework for the actions of algorithms. 
  • Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 81, 77-91. http://proceedings.mlr.press/v81/buolamwini18a.html
    • This paper finds that existing benchmarks used for facial recognition and AI research are composed of a majority of lighter-skinned subjects. The authors propose an alternative benchmark with a balanced sample of skin tones and audit three commercial gender classifiers against it. Performance is shown to be significantly worse for darker-skinned females, whereas all classifiers performed best for lighter-skinned males. These results illustrate the substantial racial disparities in algorithms that are actively deployed for automatic classification; the core evaluation technique is sketched below.
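The disaggregated evaluation at the core of this audit, scoring a classifier separately for each intersectional subgroup rather than reporting one aggregate number, can be illustrated with a minimal sketch. The code below is not the authors’ pipeline; the subgroup labels, records, and function name are hypothetical placeholders.

```python
# Minimal sketch of a disaggregated accuracy audit: compute accuracy
# per intersectional subgroup instead of a single aggregate score.
# All records below are hypothetical illustrations.
from collections import defaultdict

def disaggregated_accuracy(records):
    """records: iterable of (subgroup, true_label, predicted_label)."""
    correct, total = defaultdict(int), defaultdict(int)
    for subgroup, truth, prediction in records:
        total[subgroup] += 1
        correct[subgroup] += int(truth == prediction)
    return {group: correct[group] / total[group] for group in total}

# Hypothetical audit records: (skin type x gender, true, predicted).
records = [
    ("darker_female", "female", "male"),
    ("darker_female", "female", "female"),
    ("darker_male", "male", "male"),
    ("lighter_female", "female", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
]

for group, accuracy in sorted(disaggregated_accuracy(records).items()):
    print(f"{group:>15}: {accuracy:.0%}")
```

An aggregate score over these hypothetical records would report roughly 83% accuracy and mask the 50% rate for the darker_female subgroup, which is precisely the kind of gap the per-subgroup breakdown exposes.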
  • Calo, R., & Rosenblat, A. (2017). The taking economy: Uber, information, and power. Columbia Law Review, 117(6), 1623-1690.
    • Technology companies such as Uber and Airbnb have popularized the “sharing economy,” in which goods and services are shared between private individuals over the internet. This article argues that asymmetries of information and power are fundamental to understanding and critiquing the sharing economy. For an effective legal response that prevents these companies from abusing their users, the authors claim that regulators must gain insight into how digital data is manipulated and remove the incentives for exploiting these asymmetries.
  • Crawford, K. et al. (2019). AI Now 2019 Report. AI Now Institute at New York University. https://ainowinstitute.org
    • This report is part of an annual collaboration between researchers in both academia and industry on the implications of AI. It describes key recommendations for safeguarding against the deployment of high-risk, potentially harmful algorithms through government regulation and industry initiatives. It also summarizes the most significant publications in the areas of AI fairness, accountability, and transparency, and contextualizes this work against contemporary issues like climate change.
  • Espeland, W. N., & Sauder, M. (2016).* Engines of anxiety: Academic rankings, reputation, and accountability. Russell Sage Foundation.
    • Goodhart’s Law states: “When a measure becomes a target, it ceases to be a good measure.” This book explores how the ranking of United States law schools has profoundly shaped legal education through the creation of an all-defining hierarchy. Through the analysis of observational data and interviews with members of the legal profession, the authors reveal that, in the pursuit of maximizing their rankings, law schools have negatively impacted their students, educators, and administrators.
  • Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
    • This book investigates how big data algorithms are systematically used to oppress the poor in the United States. The author’s approach is that of a storyteller, taking readers into the lives of individuals as they are “profiled, policed, and punished.” Social justice is central to the book’s argument, as it advocates not for the feckless application of technology, but rather for a deep, humane commitment to the eradication of poverty.
  • Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
    • This book provides a cultural analysis of major events on the Internet, revealing how Big Tech moderates online content. It argues that, instead of the devolved, democratic space for social participation it was originally envisioned to be, the Web has become consumed by corporate agendas that shape online discourse. The author proposes that the debate over online platforms move towards a more critical discussion of the Web’s structural issues, as opposed to focusing on individual controversies. 
  • Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Eamon Dolan Books.
    • This book explores the origins and ramifications of the “ghost work” employed by Big Tech corporations. In order to support the operation of their vast online platforms and services, these corporations use a hidden labor force to perform crowdsourced microtasks such as data labeling, content moderation, and service fine-tuning. Employment through ghost work, the authors argue, arises paradoxically out of the development of AI-based automation that otherwise threatens traditional labor. In turn, growing concerns about this new underclass of workers need to be addressed, such as accountability, trust, and insufficient regulation of on-demand work.
  • Harcourt, B. E. (2008).* Against prediction: Profiling, policing, and punishing in an actuarial age. University of Chicago Press.
    • Actuarial science applies mathematics and statistics to assess and manage risk. This book challenges the success attributed to actuarial methods in criminal justice, arguing that they have instead warped the notion of “just punishment” and made life more difficult for the poor and marginalized.
  • Holstein, K. et al. (2019). Improving fairness in machine learning systems: What do industry practitioners need? In S. Brewster & G. Fitzpatrick (Eds.), Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-16). Association for Computing Machinery. https://doi.org/10.1145/3290605.3300830
    • Seeking to evaluate how private-sector AI practitioners address fairness in machine learning, this paper conducts surveys and qualitative studies of experts in the industry. It finds that the real-world demands of developing fair AI algorithms are often misaligned with the research on fairness in machine learning. These difficulties are exacerbated by the multiple technical and organizational barriers impeding fairer sociotechnical systems. For example, the fairness literature focuses largely on designing unbiased algorithms in artificial, isolated tasks, whereas practitioners need support in curating datasets for use in rich, nuanced contexts. 
  • Jacobs, J. (1961).* The death and life of great American cities. Random House.
    • This book is a critique of urban planning in the 1950s, arguing that problematic policy was to blame for the decline of neighborhoods across the United States. In the author’s view, a city takes on a life akin to that of a biological organism: a healthy city is characterized by diversity, a sense of community, and thriving streets that draw inhabitants into cafes, restaurants, and other places of gathering. The author contrasts the healthy city with government housing projects to demonstrate the separation of the haves and have-nots, a sorting that is now being automated with big data and machine learning algorithms.
  • Khan, L. M. (2016). Amazon’s antitrust paradox. The Yale Law Journal, 126(3), 710-805.
    • Antitrust laws exist to protect consumers from predatory or monopolistic business practices. The author argues current antitrust laws fail to capture the reality of Amazon’s position as a digital platform because Amazon: (1) is incentivized to pursue growth over profit and (2) controls the infrastructure that enables its rivals to function. 
  • Kramer, A. D. et al. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788-8790. https://doi.org/10.1073/pnas.1320040111
    • In this article, researchers affiliated with Facebook detail a large-scale online experiment conducted on hundreds of thousands of users on the platform. By manipulating information available to users via the Facebook News Feed, they find evidence of emotional contagion through changes in users’ sharing behaviors. This paper received international publicity due to its controversial methodology, and its revelation of large-scale experiments run by private-sector online platforms.
  • MacKenzie, D. (2007).* An engine, not a camera: How financial models shape markets. MIT Press.
    • This book combines concepts from finance, sociology, and economics to argue that economic models do not merely capture trends in markets but actively shape them. The author contextualizes his argument through the financial crises of 1987 and 1998, although parallels can also be drawn to the 2007 subprime mortgage crisis. These ideas about economic models extend naturally to algorithms.
  • Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346. https://doi.org/10.1177/1461444815608807
    • This article discusses toxicity in online communities through an ethnographic study of the Reddit platform. Specifically, it considers two instances of misogynist, anti-social behavior in Reddit subgroups that resulted in the systematic harassment of women. The author argues that the platform’s algorithmic content ranking and hands-off moderation rules come together to provide fertile ground for toxic cultures to flourish.
  • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
    • This book critiques the notion that search engines equally promote “all ideas, identities, and activities” and argues that they instead serve as a platform for racism and sexism. It stresses that results provided by Google, Bing, or other engines are not neutral but rather “reflect the political, social, and cultural values of the society [they] operate within.” In later chapters, the author extends her argument to the broader work conducted by professionals in library and information science.
  • Obermeyer, Z, et al. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. https://doi.org/10.1126/science.aax2342
    • This article empirically analyzes a widely adopted algorithm used for predicting patient risk and allocating care in hospitals. It finds that the algorithm is systematically biased against Black patients and suggests that one source of this flaw is the algorithm’s use of health care costs as a proxy for patient health. Due to latent biases in health data like reduced spending on care for Black patients, this article demonstrates that algorithms are especially prone to racial disparities when used in high-impact scenarios.
  • O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
    • This book takes a wide survey of how big data algorithms affect society, drawing examples from education, advertising, criminal justice, employment, and finance. The author places special emphasis on areas of society where it is not immediately clear that algorithms are making decisions. The three characteristics of a “Weapon of Math Destruction” are: (1) scale, (2) secrecy, and (3) destructiveness.
  • Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
    • This book draws attention to the secrecy and complexity of algorithms being used on Wall Street and in Silicon Valley. The author also argues that demanding transparency is only part of the solution, and that the decisions of these algorithms must be held to the standards of fairness, non-discrimination, and openness to criticism.
  • Raghavan, M., et al. (2020). Mitigating bias in algorithmic hiring: Evaluating claims and practices. In M. Hildebrandt & C. Castillo (Eds.), Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 469-481). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372828
    • In response to the increasing amount of public scrutiny on the use of algorithmic tools in private sector hiring, this paper conducts a qualitative survey of vendors providing algorithmic solutions for employee assessment. It identifies the features analyzed by the vendors such as video recordings, how the vendors claim to have validated their results, and whether fairness is considered. The authors conclude with policy and technical recommendations for ensuring more effective, appropriate, and fair algorithmic hiring practices.
  • Rosenblat, A. (2018). Uberland: How algorithms are rewriting the rules of work. University of California Press.
    • This book takes an ethnographic approach to unveil how Uber asserts control over its drivers and has also shaped the dialogue in areas such as sexual harassment and racial equity. Through interviews with drivers across the United States and Canada, the author grapples with ideas such as freedom, independence, and flexibility touted by the company while also illuminating its pervasive surveillance and information asymmetries.
  • Schelling, T. C. (1978).* Micromotives and macrobehavior. WW Norton & Company. 
    • This book expands on the idea of the “tipping point” first proposed by Morton Grodzins: the point at which a group rapidly adopts a previously rare, seemingly unimportant practice and undergoes significant change as a result. A major theme of the book is “social sorting,” as when neighborhoods cluster by race because inhabitants prefer to live around people who look like themselves; a minimal simulation of this dynamic is sketched below.
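Schelling’s sorting dynamic is concrete enough to simulate in a few lines. The sketch below is a simplified one-dimensional variant in the spirit of his model, not code from the book; the grid size, neighborhood radius, contentment threshold, and sweep count are arbitrary illustrative choices.

```python
# Minimal sketch of a Schelling-style sorting model on a ring of cells.
# Agents of two types are content if at least half of their occupied
# neighbors (two on each side) share their type; discontented agents
# relocate to a random empty cell. Parameters are illustrative only.
import random

random.seed(0)
N, THRESHOLD, SWEEPS = 200, 0.5, 100
cells = ["A"] * 90 + ["B"] * 90 + [None] * 20  # None marks an empty cell
random.shuffle(cells)

def content(i):
    me = cells[i]
    neighbors = [cells[(i + d) % N] for d in (-2, -1, 1, 2)]
    occupied = [n for n in neighbors if n is not None]
    return not occupied or sum(n == me for n in occupied) / len(occupied) >= THRESHOLD

def same_type_share():
    pairs = [(cells[i], cells[(i + 1) % N]) for i in range(N)]
    same = [a == b for a, b in pairs if a is not None and b is not None]
    return sum(same) / len(same)

print(f"before: {same_type_share():.2f} of adjacent pairs are same-type")
for _ in range(SWEEPS):
    for i in random.sample(range(N), N):  # visit cells in random order
        if cells[i] is not None and not content(i):
            j = random.choice([k for k, c in enumerate(cells) if c is None])
            cells[j], cells[i] = cells[i], None
print(f"after:  {same_type_share():.2f} of adjacent pairs are same-type")
```

No agent here asks for a local majority, only for parity, yet the repeated individual moves push the share of same-type neighbors well above the roughly even split of the initial random arrangement: mild micromotives producing strongly sorted macrobehavior.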
  • Scott, J. C. (1998).* Seeing like a state: How certain schemes to improve the human condition have failed. Yale University Press.
    • This book offers a critique of top-down social planning by states around the world and insight into why such schemes fail. The four conditions common to failed social planning initiatives are: (1) an attempt to impose order on society and nature, (2) a belief that science can improve all aspects of life, (3) a willingness to resort to authoritarianism, and (4) a helpless civil society.
  • Sunstein, C. R. (2018). #Republic: Divided democracy in the age of social media. Princeton University Press.
    • In this book, a founding scholar of nudge theory analyzes the risks of large, pervasive online platforms driven by personalization algorithms. He argues that the major social dangers of the Internet lie in its enabling of self-insulation through filter bubbles and echo chambers, which in turn threatens democratic institutions. He proposes regulatory and design changes to reduce polarization and improve online deliberation: for example, platforms could help users explore opposing viewpoints by implementing randomization features like a serendipity button, in contrast to highly tailored recommendations.
  • Sweeney, L. (2013). Discrimination in online ad delivery. Communications of the ACM, 56(5), 44-54. http://dx.doi.org/10.2139/ssrn.2208240
    • In this study, Sweeney presents a quantitative investigation of the online advertisements recommended by Google’s AdSense when searching for different racially associated names in 2012. She finds that searches for names associated with Black babies, including her own, almost always yielded ads suggestive of an arrest, regardless of whether the individual name was attached to an actual arrest record. In contrast, far fewer ads generated for White-identifying names suggested criminality or arrest.
  • Wachter, R. M., & Cassel, C. K. (2020). Sharing health care data with digital giants: Overcoming obstacles and reaping benefits while protecting patients. JAMA, 323(6), 507-508.
    • In response to the steady stream of news updates around the entry and involvement of the major technology companies (e.g. Google, Apple, Amazon) into healthcare, this commentary proposes ideals for a collaborative path forward. It emphasizes transparency (especially around financial disclosures and conflicts of interest), direct consultation with patients/patient advocacy groups, and data security.
  • Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
    • This book draws a common thread between digital technology companies by arguing that they engage in “surveillance capitalism.” Surveillance capitalists provide free services in exchange for behavioral data, which are then used to create “prediction products” of future consumer behavior. These products are traded in “behavioral futures markets,” generating large amounts of wealth for surveillance capitalists. The author argues that surveillance capitalism is becoming a dominating force not just in economics, but in society as a whole.

Chapter 6. Normative Modes: Codes and Standards (Paula Boddington)⬆︎ 

  • Arrieta, A. B., et al. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115.
    • This paper aims to shed light on explainable AI (XAI), a field gaining momentum through its focus on the practical deployment of artificial intelligence (AI) models. The authors summarize explainability in the realm of machine learning and the benefits it offers across AI activity sectors and their different normative tasks. They propose a taxonomy of contributions related to the explainability of different machine learning models and discuss the challenges XAI still faces, with the goal of helping non-specialists understand likely future research directions in AI and XAI.
  • Atkinson, P. (2009).* Ethics and ethnography. Twenty-First Century Society, 4(1), 17-30. http://doi.org/10.1080/17450140802648439
    • This paper, drawing on previous work, concentrates on how ethnographic research is conducted. Atkinson argues that the field lacks development, specifically as it relates to sociology and anthropology. Field research in ethnography poses practical challenges for regulation, exposing an insufficient understanding of social life embedded in today’s regulatory regimes.
  • Balfour, D., et al. (2014).* Unmasking administrative evil. Routledge.
    • This book argues that a deep-seated administrative evil is present in the conduct of public affairs, resulting in crimes against humanity such as genocide. By performing duties in line with their occupation, agents can not only disregard their participation in this administrative evil but also suffer from moral inversion: participating in evil while believing what they are doing to be morally good.
  • Banja, J., et al. (2021). Sharing and selling images: Ethical and regulatory considerations for radiologists. Journal of the American College of Radiology, 18(2), 298-304. 
    • This article covers the regulatory standards and ethical perspectives relevant to current data agreements, specifically how data holders uphold ethical and regulatory standards. The authors discuss four ways to address data sharing or selling arrangements specific to radiology, examine “big data” systems, and present the ethical and regulatory implications of sharing and selling radiological images.
  • Baumer, D. L., et al. (2004).* Internet privacy law: A comparison between the United States and the European Union. Computers & Security, 23(5), 400-412. https://doi.org/10.1016/j.cose.2003.11.001
    • This article compares privacy law in the United States to privacy law in the European Union, examining these laws as they relate to the regulation of websites and online service providers. A central issue for regulation is that privacy laws and practices vary by region, whereas the Internet is worldwide.
  • Benkler, Y. (2019). Don’t let industry write the rules for AI. Nature, 569(7754), 161-162. https://doi.org/10.1038/d41586-019-01413-1
    • This article argues that technology companies seek to influence AI regulation for the benefit of their companies. To combat this, Benkler argues that governments need to use leverage to limit company influence on policy.
  • Boddington, P. (2017).* Towards a code of ethics for artificial intelligence. Springer.
    • This book works toward understanding the task of producing ethical codes and regulations in the rapidly advancing field of artificial intelligence, examining ethical and practical issues in the development of these codes. Boddington’s book creates a resource for those who wish to address the ethical challenges of artificial intelligence research. 
  • Castelvecchi, D. (2021). Prestigious AI meeting takes steps to improve ethics of research. Nature, 589(7840), 12-13.
    • This article notes how artificial intelligence (AI) research is coming under greater ethical scrutiny. The Neural Information Processing Systems (NeurIPS) conference demonstrates an increasing focus on harmful uses of AI technologies and the AI community’s growing awareness of these consequences. Importantly, these meetings included conversations about policing AI and about making ethical thinking foundational to machine learning technologies.
  • Cath, C., et al. (2018). Artificial Intelligence and the ‘Good Society’: the US, EU, and UK approach. Science and Engineering Ethics, 24, 505–528. https://doi.org/10.1007/s11948-017-9901-7
    • This article compares three reports published in October 2016 by the White House, the European Parliament, and the United Kingdom House of Commons on how to prepare society for the emergence of AI. The article uses these reports to provide a framework for developing good AI policy. The authors argue that these reports fail to express a long-term strategy for developing a good AI society, and conclude with a two-pronged solution to fill this gap.
  • Gunning, D. (2017).* Explainable Artificial Intelligence (XAI). Defense Advanced Research Projects Agency, DARPA/I20.
    • This presentation outlines the need for user-friendly artificial intelligence, wherein users can understand, trust, and effectively manage AI systems. Current AI systems, while extremely useful, have greatly diminished effectiveness because the machines often cannot explain their actions to users.
  • Gunning, D., & Aha, D. (2019). DARPA’s Explainable Artificial Intelligence (XAI) Program. AI Magazine, 40(2), 44-58. https://doi.org/10.1609/aimag.v40i2.2850
    • This article provides a detailed look into DARPA’s four-year explainable artificial intelligence (XAI) program. The XAI program aimed to develop AI systems whose operations can be understood and trusted by the user. 
  • Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds & Machines, 30, 99–120. https://doi.org/10.1007/s11023-020-09517-8
    • This article performs a semi-systematic analysis and comparison of 22 ethical AI guidelines, highlighting omissions as well as commonalities. Hagendorff also examines how these ethical principles are implemented in the research and creation of AI systems, and how this application can be improved.
  • House of Lords Select Committee on Artificial Intelligence. (2018).* AI in the UK: Ready, willing and able? Report of First Session 2017-19. https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf 
    • This report considers the ethical, societal, and economic implications of the development of AI, concluding the United Kingdom has the potential to be a global leader in the field. The Select Committee on Artificial Intelligence finds that AI can potentially solve complex problems and improve productivity, and that potential risks can be mitigated.  
  • Jobin, A., et al. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. https://doi.org/10.1038/s42256-019-0088-2
    • In recent years, private companies, academic institutions, and governments have created principles and ethical codes for artificial intelligence. Despite consensus that AI must be ethical, there is no widespread agreement about the requirements of ethical AI. This article maps and analyzes current ethical principles and codes as they relate to AI.
  • Kroll, J. A. (2021). Outlining traceability: A principle for operationalizing accountability in computing systems. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 758-771). Association for Computing Machinery. 
    • This paper examines how accountability can be operationalized in computing systems, looking at what standards are needed to make artificial intelligence (AI) governable and traceable. It examines how the principles of traceability and accountability could be better articulated in AI standards and principles so that software systems can be governed systematically. In sum, the paper explains how traceability can be preserved in AI systems in ways that keep systems and processes faithful to the norms that govern them.
  • Lee, S. S. (2021). Philosophical evaluation of the conceptualisation of trust in the NHS’ code of conduct for artificial intelligence-driven technology. Journal of Medical Ethics, 1–6. https://doi.org/10.1136/medethics-2020-106905
    • This pre-print version of a journal article focuses on the United Kingdom Government’s code of conduct for data-driven technologies in health care, specifically artificial intelligence (AI) technologies. The article evaluates the notion of trust in these AI technologies and the ethical implications of their use for health care systems. The author urges that the code of conduct emphasize value-based trust in these technologies.
  • London, A. J. (2019). Artificial intelligence and black‐box medical decisions: Accuracy versus explainability. Hastings Center Report, 49(1), 15–21. https://doi.org/10.1002/hast.973
    • This article investigates algorithmic decision-making in medicine, weighing the accuracy of black-box machine learning models against demands for explainability. It explains how breakthroughs in machine learning are accelerating the development of artificial intelligence (AI) technologies, and considers how standards and codes should address some of the most powerful machine learning techniques.
  • Martin, A., & Freeland, S. (2021). The advent of artificial intelligence in space activities: New legal challenges. Space Policy, 55(3). https://doi.org/10.1016/j.spacepol.2020.101408
    • This paper explores the development of artificial intelligence (AI) autonomous systems and presents the ethical and legal challenges posed by these technologies. The authors discuss how AI has implications for important social, economic, technological, legal, and ethical issues that need to be addressed. They analyze AI in the context of space systems and showcase the legal issues raised by the deployment of AI-based autonomous systems.
  • Metzinger, T. (2018). Towards a global artificial intelligence charter. In European Parliament Research Service (Ed.), Should we fear artificial intelligence? (pp. 27–33).
    • Metzinger argues that the public debate on artificial intelligence must move into political institutions. These institutions must produce a set of ethical and legal constraints on the development and use of AI that is sufficient while remaining minimally intrusive. Metzinger lists the five most important problem domains in the field of AI ethics and gives recommendations for each.
  • Nissenbaum, H. (2004). Privacy as contextual integrity. Washington Law Review, 79, 119-158.
    • This article argues that conventional theoretical approaches to privacy employed for common privacy concerns are not sufficient to yield appropriate conclusions in light of the development of public surveillance. Nissenbaum argues for a new construct, contextual integrity, that will act as a replacement for traditional theories of privacy. 
  • The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2019).* Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems. IEEE. https://ethicsinaction.ieee.org
    • This treatise is a globally crowdsourced, collaborative source based on a previous call for input and two hundred pages of feedback. The treatise aims to provide practical insights, and to act as a reference work for professionals involved in the ethics of artificial intelligence. Included in the treatise are policy recommendations. 
  • Vinuesa, R., et al. (2020). The role of artificial intelligence in achieving the sustainable development goals. Nature Communications, 11(1), 233. https://doi.org/10.1038/s41467-019-14108-y
    • This article implements a consensus-based expert elicitation process to evaluate artificial intelligence (AI) technologies and their achievements in terms of sustainable development goals. The authors highlight how current research overlooks important factors associated with these technologies. Overall, they argue that AI systems need to be supported by regulatory schemes to ensure that these technologies are meeting all ethical, transparency, and safety standards.
  • Weller, A. (2017). Challenges for transparency. In W. Samek, G. Montavon, A. Vedaldi, L. Hansen, & K. R. Müller (Eds.), Explainable AI: Interpreting, Explaining and Visualizing Deep Learning (pp. 23-40). Springer.
    • This chapter provides an overview of the concept of transparency, which comes in varying types and whose definition varies by context, making it difficult to determine objective criteria for measuring it. Weller also examines contexts in which transparency can cause harm.
  • Whittlestone, J., et al. (2019).* Ethical and societal implications of algorithms, data, and artificial intelligence: A roadmap for research. Nuffield Foundation. https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf
    • This report acts as a roadmap for published work on the implications that algorithms, data, and AI (ADA) have for ethics and society. There is no agreed-upon ethical core or framework for issues relating to ADA; even well-established issues such as bias, transparency, and consent carry different interpretations depending on context.
  • Whittlestone, J., et al. (2019). The role and limits of principles in AI ethics: Towards a focus on tensions. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 195-200). https://doi.org/10.1145/3306618.3314289
    • This article draws on comparisons within the field of bioethics to highlight limitations of principles applied to AI ethical guidelines, such as fairness, privacy, and autonomy. The authors argue that the field of AI ethics needs to progress to exploring tensions that exist within these established principles. They offer potential solutions to these tensions. 
  • Winfield, A. F., & Jirotka, M. (2018). Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0085
    • This paper examines ethical governance for artificial intelligence systems and robots. The authors argue that ethical governance is needed in order to create public trust in these new technologies. They conclude by proposing five pillars of effective ethical governance. 
  • Winfield, A. F., et al. (2019). Machine ethics: The design and governance of ethical AI and autonomous systems. Proceedings of the IEEE, 107(3), 509-517.
    • This paper focuses on the fourth industrial revolution, which includes AI and machine learning systems, as discussed at the 2016 World Economic Forum in Davos. It argues that the economic and societal implications of the fourth industrial revolution are no longer of concern only to academics, but are important matters for politics and public debate.
  • Zeng, Y., Lu, E., & Huangfu, C. (2018). Linking artificial intelligence principles. In Proceedings of the AAAI Workshop on Artificial Intelligence Safety (AAAI-Safe AI 2019).
    • In this article, the authors propose Linking Artificial Intelligence Principles (LAIP) as a framework for analyzing various sets of AI principles. Rather than adopting one pre-developed set of AI principles, the authors propose linking existing frameworks so that they can interact.

Chapter 7. The Role of Professional Norms in the Governance of Artificial Intelligence (Urs Gasser and Carolyn Schmitt)⬆︎

  • Abbott, A. (1983).* Professional ethics. American Journal of Sociology, 88(5), 855-885. https://doi.org/10.1086/227762
    • Through comparative analysis, this paper establishes five basic properties of professional ethics codes: universal distribution, correlation with intra-professional status, enforcement dependent on visibility, individualism, and emphasis on colleague obligations. After weighing two competing perspectives on these properties, the paper adds a third, relating ethics directly to intra- and extra-professional status. Finally, the author analyzes developments in professional ethics in America since 1900, specifying the interplay of the three processes hypothesized in the competing perspectives.
  • Anthony, K. H. (2001).* Designing for diversity: Gender, race, and ethnicity in the architectural profession. University of Illinois Press.
    • This book argues that the traditional mismatch between diverse consumers and the predominantly white, male producers of the built environment, combined with a population balance shifting toward communities of color, has left the architectural profession lacking true diversity, at its own peril.
  • Bender, E. M., et al. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922
    • This paper discusses recent technical norms in natural language processing that favor the development and deployment of ever-larger models, and points to their disproportionate impact on marginalized communities. The paper provides recommendations and a framework for approaching research and development goals that considers environmental, energy, and financial costs. Prior to publication, Google terminated the employment of authors Timnit Gebru and Margaret Mitchell.
  • Boddington, P. (2017). Towards a code of ethics for artificial intelligence. Springer International Publishing. https://doi.org/10.1007/978-3-319-60648-4
    • This book discusses the challenges of developing codes of ethics for artificial intelligence. It introduces ethics in the context of AI, describes the distinctive ethical questions raised by these systems, and outlines the features of professional practice that make professional ethics codes necessary. The author reviews various formats for codes of conduct, regulation, and guidance, describes the institutional context in which professional codes of ethics historically developed, and proposes ways of extending existing professional codes to AI. The book closes with an approach for understanding and overcoming the challenges AI poses to the development of professional ethics.
  • Bynum, T. W., & Simon, R. (2004).* Computer ethics and professional responsibility. Wiley Blackwell. 
    • This book discusses topics such as the history of computing; the social context of computing; methods of ethical analysis; professional responsibility and codes of ethics; computer security, risks, and liabilities; computer crime, viruses, and hacking; data protection and privacy; intellectual property and the “open source” movement; and global ethics and the internet.
  • Dasgupta, N. (2011). Ingroup experts and peers as social vaccines who inoculate the self-concept: The stereotype inoculation model. Psychological Inquiry, 22(4), 231-246. https://doi.org/10.1080/1047840X.2011.607313
    • This paper argues that an individual’s career choice can be subtly influenced by cues in the academic environment that lead to their inclusion in, or exclusion from, a professional path. The paper uses the ‘stereotype inoculation model’ to explain this phenomenon.
  • Davis, M. (2015).* Engineering as profession: Some methodological problems in its study. In Engineering Identities, Epistemologies and Values (pp. 65-79). Springer.
    • This text considers engineering practice, including contextual analyses of engineering identity, epistemologies, and values. It examines such issues as engineering identity, engineering self-understandings enacted in the professional world, the distinctive character of engineering knowledge, and how engineering science and engineering design interact in practice.
  • Evetts, J. (2003).* The sociological analysis of professionalism: Occupational change in the modern world. International Sociology, 18(2), 395-415. https://doi.org/10.1177/0268580903018002005
    • The paper explores the appeal of the concepts of profession and professionalism and the increased use of these concepts in different occupational groups, work contexts and social systems. It also considers how the balance between the normative and ideological elements of professionalism is played out differently in occupational groups in different employment situations.
  • Frankel, M. S. (1989).* Professional codes: Why, how, and with what impact? Journal of Business Ethics, 8(2-3), 109-115. https://doi.org/10.1007/BF00382575
    • This paper argues that a tension between a profession’s pursuit of autonomy and the public’s demand for accountability has led to the development of codes of ethics, which act as both foundations and guides for professional conduct in the face of morally ambiguous situations. The paper identifies three types of codes: aspirational, educational, and regulatory.
  • Greene, D., et al. (2019).* Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In Proceedings of the 52nd Hawaii International Conference on System Sciences (pp. 2122-2131). https://hdl.handle.net/10125/59651
    • This paper argues that vision statements for ethical artificial intelligence and machine learning (AI/ML) co-opt the language of some critics, folding them into a limited, technologically deterministic, expert-driven view of what ethical AI/ML means and how it might work. This argument is developed using frame analysis to examine recent high-profile values statements endorsing ethical design for AI/ML.
  • Husted, B. W., & Allen, D. B. (2000). Is it ethical to use ethics as strategy? In J. Sójka & J. Wempe (Eds.), Business challenging business ethics: New instruments for coping with diversity in international business (pp. 21-31). Springer.
    • This article seeks to define a strategy concept in order to situate the different approaches to the strategic use of ethics and social responsibility found in the current literature. The authors then analyze the ethics of such approaches using both utilitarianism and deontology and end by defining limits to the strategic use of ethics.
  • IEEE Global Initiative (2018).* Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems.
    • This document is an earlier version of the IEEE Global Initiative’s Ethically Aligned Design (see the 2019 entry later in this chapter), presenting crowdsourced principles and practical recommendations for prioritizing human well-being in the design and deployment of autonomous and intelligent systems.
  • Johnson, A. M., Jr. (1997).* The underrepresentation of minorities in the legal profession: A critical race theorist’s perspective. Michigan Law Review, 95(4), 1005-1062. https://doi.org/10.2307/1290052
    • This article discusses the import of the development of Critical Race Theory for the legal profession and for larger society, and explores whether Critical Race Theory can have a positive effect, or indeed any effect, for those outside legal academia.
  • Johnson, A. M., Jr. (2006). The destruction of the holistic approach to admissions: The pernicious effects of rankings. Indiana Law Journal, 81(1), 309-358.
    • This article argues that achieving racial and ethnic diversity in the student body of a law school is a laudable and productive end which all law schools and institutions of higher education should seek to achieve. It is written from the perspective that achieving a diverse student body is a positive goal and one that can and should be accomplished through the use of affirmative action.
  • Leslie, D., & Catungal, J. P. (2012). Social justice and the creative city: Class, gender and racial inequalities. Geography Compass, 6(3), 111-122. https://doi.org/10.1111/j.1749-8198.2011.00472.x
    • This paper argues that gender and racial equality are at stake in the creative city, and that continuing class inequality is maintained and exacerbated by creativity-led urban economic development policies.
  • Mattingly-Jordan, S., et al. (2019). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems glossary (1st ed. draft). Glossary Committee of The IEEE Global Initiative.
    • This glossary provides reference definitions for terms appearing in the IEEE Ethically Aligned Design document. Its goal is to serve as a shared resource for interdisciplinary teams to understand common terms that carry discipline-specific meanings. The glossary gives six definitions per term by referencing usage in six discipline categories: ordinary language; computational disciplines; economics and social science; engineering disciplines; philosophy and ethics; and international law and policy.
  • National Science and Technology Council Committee on Technology. (2016). White House report on the future of artificial intelligence. Executive Office of the President.
    • This report, prepared for the Obama White House, surveys technical developments in artificial intelligence and makes specific recommendations, proposing directives to Federal government agencies and other bodies. It discusses the role the government plays in developing the workforce and gives additional policy recommendations for monitoring and supporting AI research. It identifies fairness, safety, accountability, and governance as primary concerns.
  • Noordegraaf, M. (2007).* From “pure” to “hybrid” professionalism: Present-day professionalism in ambiguous public domains. Administration & Society, 39(6), 761-785. https://doi.org/10.1177/0095399707304434
    • This paper aims to answer the following questions: What is professionalism? What is professional control in ambiguous occupational domains? What happens when different types of occupational control get mixed up? It argues that the solution lies in portraying classic professionalism as “controlled content,” transitioning from “pure” to “hybrid” professionalism, and portraying present-day professionalism as “content of control” instead of controlled content.
  • Oz, E. (1993).* Ethical standards for computer professionals: A comparative analysis of four major codes. Journal of Business Ethics, 12(9), 709-726. https://doi.org/10.1007/BF00881385
    • This paper compares and evaluates the ethical codes of four major organizations of computer professionals in America. The author analyzes these codes in the context of the obligations every professional has: to society, to the employer, to clients, to colleagues, to the professional organization, and to the profession.
  • Panteli, A., et al. (1999).* Gender and professional ethics in the IT industry. Journal of Business Ethics, 22(1), 51-61. https://doi.org/10.1023/A:1006156102624
    • This paper discusses the ethical responsibility of the Information Technology (IT) industry towards its female workforce, particularly the representation of women. The paper presents evidence that the IT industry is not gender-neutral and that it does little to promote or retain its female workforce. Therefore, the authors urge that professional codes of ethics in IT should be revised to take into account the diverse needs of its staff.
  • Rhode, D. L. (1994).* Gender and professional roles. Fordham Law Review, 63(1), 39-72.
    • This article, informed by contemporary feminist jurisprudence, discusses two issues: first, challenges to professional roles, relationships, and the delivery of services; and second, gender bias in the workplace and women’s underrepresentation in positions of the greatest power, status, and reward. Both discussions build on values traditionally associated with women that are undervalued in traditionally male-dominated professions.
  • Rhode, D. L. (1997). The professionalism problem. William & Mary Law Review, 39(2), 283-326.
    • This article argues that, given increasing discontent with the legal profession, particularly criticism directed at ethical practices that have widened the gap between professional ideals and professional work, the competing values at stake must be acknowledged and solutions to them developed, as these issues are too significant to continue unmediated.
  • Shapiro, S. P. (1987). The social control of impersonal trust. American Journal of Sociology, 93(3), 623-658. https://doi.org/10.1086/228791
    • This paper discusses the ‘guardians of impersonal trust’ and finds that they create new problems: the resulting collection of procedural norms, structural constraints, entry restrictions, policing mechanisms, social-control specialists, and insurance-like arrangements increases the opportunities for abuse while encouraging less acceptable trustee performance.
  • Standing, G. (2010). Work after globalization: Building occupational citizenship. Edward Elgar Publishing. 
    • In this book, the author seeks to shift emphasis from the role of capital to the creativity of labour in the creation of value in the real economy. A central role is accorded to all of the skills and occupations that contribute to the construction of an economy and a civic culture governed by the public interest.
  • Stevens, B. (1994). An analysis of corporate ethical code studies: “Where do we go from here?” Journal of Business Ethics, 13(1), 63-69. https://doi.org/10.1007/BF00877156
    • This article seeks to differentiate between ethical codes, professional codes, and mission statements. Ethical code studies are then reviewed in terms of how codes are communicated to employees and whether the implications of violating codes are discussed. Finally, the author discusses how such codes are communicated and accepted, and their impact on employees.
  • Susskind, R. E., & Susskind, D. (2015). The future of the professions: How technology will transform the work of human experts. Oxford University Press.
    • This book discusses professions in the context of transformative technology. It gives historical examples of the development of ideas about professions. It details and relates eight professional domains: health; education; divinity; law; journalism; management consulting; tax and audit; and architecture. The book considers the possible impacts of technology on the practices of these professions.
  • The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2019). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems. Institute of Electrical and Electronics Engineers. https://standards.ieee.org/industry-connections/ec/ead-v1.html
    • This document aims to establish societal and policy guidelines for autonomous and intelligent systems to promote ethical and human-centric development. It provides a reference of pragmatic and directional recommendations for technologists, educators, and policymakers. The discussion includes scientific analysis, description of resources and tools, conceptual principles, and actionable advice. Specific guidance is outlined for standards, certification, regulation, legislation, design, manufacture, and use of these systems in professional organizations.
  • West, S. M., et al. (2019). Discriminating systems: Gender, race and power in AI. AI Now Institute. https://ainowinstitute.org/discriminatingsystems.html
    • This paper shows that there is a diversity crisis in the AI sector across gender and race. The authors argue that the AI industry must acknowledge the gravity of its diversity problem, admit that existing methods have failed to contend with the uneven distribution of power, and recognize that AI can reinforce such inequality.
  • Wilkins, D. B. (1998). Identities and roles: Race, recognition, and professional responsibility. Maryland Law Review, 57(4), 1502-1594.
    • This article argues that issues relating to a lawyer’s non-professional identity (for example, gender, race, or religion) are omitted as motivations for lawyers to uphold the profession’s norms. The article also discusses narratives created in the legal profession about the nature of the lawyer’s role, particularly the claim that a lawyer’s non-professional identity is (or at least ought to be) irrelevant to their professional role.

III. Concepts & Issues

Chapter 8. We’re Missing a Moral Framework of Justice in Artificial Intelligence: On the Limits, Failings, and Ethics of Fairness (Matthew Le Bui and Safiya Umoja Noble)⬆︎

  • Abdalla, M., & Abdalla, M. (2020). The grey hoodie project: Big Tobacco, Big Tech, and the threat on academic integrity. arXiv:2009.13676
    • In this paper, the authors compare the power of Big Tech to influence academic research to that of Big Tobacco. The authors argue that, much like Big Tobacco in the past, Big Tech increasingly funds academic research, to the point where a majority of members of computer science departments in four top universities have received some form of funding from major technology companies. The authors argue that this may have implications for academic freedom and the continued development of ethical AI systems.
  • Benjamin, R. (2019).* Race after technology: Abolitionist tools for the New Jim Code. Polity. https://www.ruhabenjamin.com/race-after-technology
    • Using critical race theory, this book analyzes how current technologies can and have reinforced White supremacy and increased social inequalities. The concept of “The New Jim Code” is introduced to describe how a wide range of discriminatory designs can encode inequity by amplifying racial hierarchies, ignore and replicate social divisions, and inadvertently reinforce racial biases while intending to ‘fix’ them. The book concludes with an overview of conceptual strategies, including tech activism and abolitionist tools, that might be used to disrupt and rectify current and future technological design.
  • Benjamin, R. (2016). Catching our breath: Critical race STS and the carceral imagination. Engaging Science, Technology, and Society, 2, 145-156. https://doi.org/10.17351/ests2016.70  
    • This article brings together science and technology studies (STS) scholarship with critical race theory to examine carceral approaches to governing human life. The author argues for an expanded understanding of ‘the carceral’ which includes diverse forms of containment in health and medicine, education and employment, border policies, data practices, and virtual reality. The article concludes with a call for the adoption of abolitionist strategies to foster human agency in relation to science, technology, and innovation.  
  • Binns, R. (2018).* Fairness in machine learning: Lessons from political philosophy. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (pp. 149-159). PMLR. http://proceedings.mlr.press/v81/binns18a.html
    • This article discusses contemporary issues of fairness and ethics in machine learning and artificial intelligence, arguing that these disciplines have been increasingly formalized around Enlightenment-era philosophies concerning discrimination, egalitarianism, and justice as parts of moral and political philosophy. The author concludes that the historical study of such frameworks can illuminate contemporary framings and assumptions. 
  • Birhane, A., & Van Dijk, J. (2020). Robot rights? Let’s talk about human welfare instead. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 207–213). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375855
    • This paper presents a review of current literature advancing the argument for robot rights. The authors turn away from the question of whether robots should be conferred or denied rights and instead focus on whether robots can have rights in the first place. The authors argue that robots are artifacts emerging from human mediation and, therefore, their rights should be considered in the context of power relations in global societies. They further argue that there are more pressing ethical and social issues relating to new machines, and the debate for robot rights draws necessary attention away from these important discussions.
  • Browne, S. (2015). Dark matters: On the surveillance of blackness. Duke University Press. https://www.dukeupress.edu/dark-matters
    • This book investigates surveillance practices through the conditions of blackness, showing how contemporary surveillance technologies are informed by historical racial formations, such as the policing of black lives through slavery, branding, runaway slave notices, and lantern laws. The author draws from black feminist theory, sociology, and cultural studies, to describe surveillance as a normalized material and discursive practice that reifies boundaries, bodies, and borders, using racial lines. 
  • Bucher, T. (2018). If… Then: Algorithmic power and politics. Oxford University Press. http://dx.doi.org/10.1093/oso/9780190493028.001.0001
    • This book investigates the political economy of algorithms and other recently developed informational infrastructures, such as search engines and social media. Arguing that we ‘live algorithmic lives,’ the author describes how society is shaped by the political and commercial institutions that design technology. Using case studies to explore the material-discursive and cultural dimensions of software, the book argues that the most important aspects of algorithms are not in the details, but rather in how they are used to define social and political practices.
  • Calo, R. (2017). Artificial intelligence policy: A roadmap. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3015350
    • The author of this paper provides a literature review of recent ethical principles for artificial intelligence, arguing that these principles should be supplemented by state regulation of AI technologies. The author lays out five critical principles for AI policy, including justice and equity through inclusivity, transparent enforcement, certifications, guaranteeing respect for the privacy of those involved in the creation and development of AI, and taxation for the redistribution of wealth created by AI.
  • Chun, W. H. K. (2008).* Control and freedom: Power and paranoia in the age of fiber optics. MIT Press. https://mitpress.mit.edu/books/control-and-freedom
    • This book uses media archeology and visual culture studies to study the current political and technological coupling of freedom and control, by tracing the emergence of the Internet as a mass medium of communication. Deleuze and Foucault are used to ground the analysis of contemporary technologies such as webcams and facial recognition software. The author argues that the relationship between control and power on the Internet is a network, driven by sexuality and race, tracing the origins of governmental regulation online to cyberporn, and concluding that the Internet’s potential for democracy is found in the mutual exposure to others we cannot control. 
  • Clark, J., & Hadfield, G. K. (2019). Regulatory markets for AI safety. arXiv:2001.00078
    • In this policy recommendation, Clark and Hadfield provide a review of different regulatory frameworks for AI. The authors argue that policymakers have had a slow and challenging job regulating this market because of corporate influence and a lack of technical expertise. As an alternative, they propose regulatory markets, in which independent third-party regulators audit companies against principles set by governments and certify compliant corporations.
  • Daniels, J. (2009).* Cyber racism: White supremacy online and the new attack on civil rights. Rowman & Littlefield Publishers.
    • This book explores white supremacy on the Internet, tracing its origins from print to the online era. The author describes ‘open’ and ‘cloaked’ sites in which white supremacist organizations have translated their publications online, interviewing small groups of teenagers as they navigate and attempt to comprehend the content. The author provides a discussion of cyber racism that addresses common assumptions about the inherent democratic nature of the Internet and its capacity as a recruitment tool for white supremacist groups. The book concludes with an analysis challenging conventional understandings of racial equity, civil rights, and the Internet.
  • Daniels, J., et al. (2019). Advancing racial literacy in tech. Data & Society. https://datasociety.net/library/advancing-racial-literacy-in-tech/ 
    • In response to growing concerns about a lack of diversity training in the tech industry, this paper presents an overview of racial literacy practices designed for adoption by organizations. The authors discuss the role that tech products, company culture, and supply chain practices play in perpetuating structural racism, as well as strategies for capacity building grounded in intellectual understanding, emotional intelligence, and action. 
  • Eubanks, V. (2018).* Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press. https://virginia-eubanks.com/books/
    • Considering the historic context of austerity, this book documents the use of digital technologies for distributional decision-making in social service delivery to poor and disadvantaged populations in the United States. Using ethnographic and interview methods, the author investigates the impact of automated systems such as Medicaid and Temporary Assistance for Needy Families, and electronic benefit transfer cards, stating that such systems, while expensive, are often less effective, and regularly reproduce and aggravate bias, equity disparities, and state surveillance of the poor. The author speaks to legacy system prejudice and the ‘social specs’ that underlie our decision-systems and data-sifting algorithms and offers a number of participatory design solutions including empathy through co-design, transparency, access, and control of information. 
  • Floridi, L., et al. (2020). How to design AI for social good: Seven essential factors. Science and Engineering Ethics, 26, 1771-1796. https://doi.org/10.1007/s11948-020-00213-5
    • In this paper, the authors discuss seven essential factors for what they call “AI for Social Good” or “AI4SG.” These factors are: (1) the falsifiability and incremental deployment of algorithms, (2) creating safeguards against their manipulation, (3) respect for the autonomy of users, (4) transparency and explainability, (5) consent and privacy protections, (6) fairness, and (7) providing users with the capacity to make sense of what they are interacting with.
  • Gandy, O. H. (1993).* The panoptic sort: A political economy of personal information. Westview Press. https://doi.org/10.1002/9781444395402.ch20   
    • In this book, the author describes the political economy of personal information (PI), documenting the various ways in which PI is classified, sorted, stored, and capitalized upon by institutions of power. The author discusses personal privacy in the context of individual autonomy, collective agency, and bureaucratic control, describing these operations as panoptic sorting processes.
  • Greene, D., et al. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In Proceedings of the 52nd Hawaii International Conference on System Sciences. https://scholarspace.manoa.hawaii.edu/bitstream/10125/59651/0211.pdf   
    • This paper uses frame analysis to analyze recent high-profile value statements endorsing ethical design for artificial intelligence and machine learning (AI/ML). The authors conclude that vision statements for ethical AI/ML, in their adoption of specific language drawn from critics of the field, have become limited, expert-driven, and technologically deterministic.
  • Hoffmann, A. L. (2019).* Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7), 900-915. https://doi.org/10.1080/1369118X.2019.1573912
    • This article critiques fairness and antidiscrimination efforts in AI, discussing how technical attempts to isolate and remove ‘bad’ data and algorithms tend to overemphasize ‘bad actors’ and ignore intersectional or broader sociotechnical contributions. The author describes how this leads to reactionary technical solutions that fail to displace the underlying logic that produces unjust hierarchies, thus failing to address justice concerns.
  • Hoffmann, A. L. (2017). Data, technology, and gender: Thinking about (and from) trans lives. In Spaces for the Future. Routledge. https://doi.org/10.4324/9780203735657-1
    • This book chapter discusses how data practices have situated and defined gender, with a particular focus on transgender identity and online discrimination perpetuated by harmful design. The author describes how data-driven platforms are used by many transgender activists to bring attention to the concerns of minority populations; however, these platforms have also been used to promote sexism and gender inequality.
  • Krafft, P. M., et al. (2020). Defining AI in policy versus practice. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 72–78). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375835
    • This research paper focuses on the many different definitions of artificial intelligence in the policy realm. The authors argue that definitional ambiguity in AI prevents effective regulation since law and policy require consensus around practical coordination definitions. The paper presents a review of policy reports and interviews with AI practitioners about their definitions of artificial intelligence and adjacent subjects. The authors find that AI practitioners are concerned about the technology’s functionalities, while policymakers are concerned with their future applications. They conclude that this latter approach may overlook essential issues related to AI’s present conditions and its current impacts on society.
  • Lewis, T., et al. (2018).* Digital defense playbook: Community power tools for reclaiming data. Our Data Bodies.
    • Our Data Bodies is a collaborative project that combines community-based organizing, capacity-building, and academic research focused on how marginalized communities are impacted by data-based technologies. This workbook presents research findings concerning data, surveillance, and community safety, and includes education activities using co-creation methods and tools towards data justice and data access for equity. 
  • McIlwain, C. (2017). Racial formation, inequality and the political economy of web traffic. Information, Communication & Society, 20(7), 1073–1089. https://doi.org/10.1080/1369118X.2016.1206137 
    • Using racial formation theory, this article reviews how race is represented and systematically reproduced on the Internet. The author uses an original dataset and network graph to document the architecture of web traffic, including traffic patterns among and between race-based websites. The study finds that web producers create hyperlinked networks that guide users to websites without consideration of racial or nonracial content, indicating the presence of race-based hierarchies of weighted values, influence and power. 
  • Mills, C. W. (2017).* Black rights/white wrongs: The critique of racial liberalism. Oxford University Press. 
    • This book of essays focuses on racial liberalism from a historical perspective, reconceptualizing justice and fairness in ways that reimagine social structures without being limited to individualistic moral virtuosity. The author remarks on the centrality of exclusion in many of liberalism’s documents and declarations, and supplants liberalism’s classical individualistic social ontology with one that includes class, gender, and race.
  • Noble, S. U. (2018).* Algorithms of oppression: How search engines reinforce racism. New York University Press. https://nyupress.org/9781479837243/algorithms-of-oppression/
    • This book discusses how search engines, such as Google, are embedded with racial and sexist bias, challenging the notion that they are neutral algorithms acting outside of influence from their human engineers, and emphasizing the greater social impacts created through their design. Through an analysis of text and media searches, and research on paid advertising, the author argues that the monopoly status of a small group of companies alongside vested private interests in promoting some sites over others, has led to biased search algorithms that privilege whiteness and exhibit bias against people of colour, particularly women.
  • Pasquale, F. (2016).* The black box society: The secret algorithms behind money and information. Harvard University Press. 
    • This book explores the social and economic impacts of developing information practices, namely the influx of ‘big data’. The author discusses how these practices have benefited society through innovations in health care while also causing significant disruptions to social equity, e.g., the subprime mortgage crisis. The author attributes these negative impacts to the improper use of algorithms and concludes the book with several recommendations for how they might be corrected.
  • Posada, J. (2020). The future of work is here: Toward a comprehensive approach to artificial intelligence and labour. C4eJournal: Perspectives on Ethics, The Future of Work in the Age of Automation and AI Symposium. [2020 C4eJ 56] [20 eAIj 16].
    • This commentary presents a literature review of the different modes of work that shape AI algorithms. It argues that, while developing ethical principles to guide the use of this technology is essential, such principles would not translate into enforcement mechanisms even if they considered workers. The commentary argues that current human rights frameworks already cover these types of work better than recent AI ethics principles.
  • Schiff, D., et al. (2020). What’s next for AI ethics, policy, and governance? A global overview. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 153–158). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375804
    • This paper presents three topics of importance found in a review of eighty AI ethics documents from private companies, NGOs, and the public sector. The authors observe that these documents are driven by a motivation to gain competitive advantage, are used for strategic planning and intervention, and signal social responsibility and leadership. In assessing these documents, the authors argue that the most successful ones engage with law and governance, are specific and enforceable, and are intended to be amended and updated.
  • Vaidhyanathan, S. (2018).* Antisocial media: How Facebook disconnects us and undermines democracy. Oxford University Press.
    • This book focuses on the rise and socio-political impacts of the contemporary social media platform Facebook. The author discusses the consequences of Facebook’s dominance, including the ways in which user behaviour is tracked and shaped through the platform’s multifaceted operations, addressing how these practices have impacted global democratic processes such as national elections. 
  • Zook, M., et al. (2017). Ten simple rules for responsible big data research. PLOS Computational Biology, 13(3). https://doi.org/10.1371/journal.pcbi.1005399
    • Acknowledging the growing size and availability of big data to researchers, the authors of this paper stress the importance of adopting ethical principles when working with large datasets, particularly as research agendas move beyond the typical computational and natural sciences to include human behaviour, interaction, and health. The paper outlines ten basic principles that focus on recognizing the human participants and complex systems contained within the datasets, making ethical questioning a part of the standard workflow.

Chapter 9. Accountability in Computer Systems (Joshua A. Kroll)⬆︎

  • Andrews, L. (2019). Public administration, public leadership and the construction of public value in the age of the algorithm and ‘big data.’ Public Administration, 97(2), 296-310.
    • This paper examines the implications of algorithms and ‘big data’ for public administration and public leadership, and how public value can be constructed in this context. It argues that public leaders have a distinctive role to play in the governance of these technologies.
  • Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671-732. http://dx.doi.org/10.15779/Z38BG31
    • Barocas and Selbst argue that algorithmic techniques such as data mining are only as effective as the data fed into the system, and that blind reliance on these systems may perpetuate discrimination. Further, because these biases are not intentionally incorporated into the machine, the source of discrimination is difficult to present to a court. The article examines these concerns in light of American anti-discrimination law.
  • Breaux, T. D., et al. (2006).* Towards regulatory compliance: Extracting rights and obligations to align requirements with regulations. In 14th IEEE International Requirements Engineering Conference (RE’06) (pp. 49-58). IEEE.
    • This article argues that current regulations that prescribe stakeholder rights and obligations that must be satisfied by software systems are inadequate because they are extremely ambiguous. Fields such as healthcare that are typically highly regulated require a more sophisticated system. The article presents a model for extracting and prioritizing rights and obligations and applies it to the U.S. Health Insurance Portability and Accountability Act. 
  • Desai, D. R., & Kroll, J. A. (2017).* Trust but verify: A guide to algorithms and the law. Harvard Journal of Law & Technology, 31(1), 1-64.
    • This article examines the potential for algorithms to be designed to produce outcomes that are incompatible with what society prohibits, yet remain undetectable because of the complexity of their design. The authors challenge the solution commonly proposed for this problem, algorithmic transparency, arguing that calls for transparency are not compatible with computer science. Instead, the article presents an alternative by providing recommendations on the regulation of public and private sector use of software.
  • Du, M., Liu, N., & Hu, X. (2019). Techniques for interpretable machine learning. Communications of the ACM, 63(1), 68-77. http://dx.doi.org/10.1145/3359786
    • This report argues that concerns about the black box nature of algorithmic systems have limited their use in society. It provides key insights into interpretability and argues that interpretable machine learning will address this barrier to wider application.
  • Edwards, L., & Veale, M. (2017). Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for. Duke Law & Technology Review, 16, 18-84. https://doi.org/10.31228/osf.io/97upg
    • This article argues that the right to an explanation, as present in the EU General Data Protection Regulation, is unlikely to remedy problems of unfairness in machine learning algorithms. The article proposes that a solution to algorithmic bias might be found in other parts of the GDPR, such as the right to erasure. 
  • Ehsan, U., et al. (2019). Automated rationale generation: A technique for explainable AI and its effects on human perceptions. In W.-T. Fu & S. Pan (Eds.), Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 263-274). Association for Computing Machinery. https://doi.org/10.1145/3301275.3302316
    • This paper proposes generating real-time explanations of the behavior of autonomous agents by employing a computational model that learns to translate an autonomous agent’s internal state and action data representations into natural language. Using the case study of an agent playing a video game, the paper examines different types of explanations and the corresponding user perceptions.
  • Feigenbaum, J., et al. (2012).* Systematizing “accountability” in computer science. Technical Report YALEU/DCS/TR-1452, Yale University.
    • This report provides a systematization of the approaches to accountability that have been taken in computer science research. The report categorizes these approaches along the axes of time, information, and action, and identifies multiple questions of interest within each axis. The systematization contributes an articulation of the definitions that have been used in computer science (sometimes only implicitly), as well as a perspective on how these different approaches are related.
  • Hong, S. R., et al. (2020). Human factors in model interpretability: Industry practices, challenges, and needs. Proceedings of the ACM on Human-Computer Interaction, 4, 1-26. https://doi.org/10.1145/3392878
    • This paper presents the findings from 22 semi-structured interviews with machine learning practitioners focusing on how they conceive of, and design for, interpretability in the models they develop and deploy. The findings suggest that model interpretability frequently involves cooperation and mental model comparison between people in different roles, as well as building trust between people and models and between different people within an organization.
  • Kaur, H., et al. (2020). Interpreting interpretability: Understanding data scientists’ use of interpretability tools for machine learning. In R. Bernhaupt, F. Mueller, & D. Verweij (Eds.), Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-14). Association for Computing Machinery. https://doi.org/10.1145/3313831.3376219
    • This paper uses a contextual inquiry and survey to study how data scientists use interpretability tools to uncover issues that arise when building and evaluating machine learning models in practice. The results suggest that data scientists over-trust and misuse interpretability tools; few study participants were able to accurately describe the output of these tools.
  • Kroll, J. A. (2021). Outlining traceability: A principle for operationalizing accountability in computing systems. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 758-771). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445937
    • This paper aims to reframe the discourse on accountability and transparency by proposing a new principle: traceability. Traceability entails establishing not only how a system works but how it was created and for what purpose. The paper shows how traceability explains why a system has particular dynamics or behaviors and examines how the principle has been articulated in existing AI principles and policy statements.
  • Kroll, J. A., et al. (2016).* Accountable algorithms. University of Pennsylvania Law Review, 165(3), 633-706.
    • This article challenges the dominant position in the legal literature that transparency will solve the problems of incorrect, unjustified, or unfair results of algorithmic decision-making. The article argues that technology is creating new opportunities, subtler and more flexible than total transparency, to design algorithms so that they better align with legal and policy objectives.
  • Kroll, J. A. (2018).* The fallacy of inscrutability. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0084
    • This paper argues that, contrary to the criticism that mysterious, unaccountable black-box software systems threaten to make the logic of critical decisions inscrutable, algorithms are fundamentally understandable pieces of technology. The paper investigates the contours of inscrutability and opacity, the way they arise from power dynamics surrounding software systems, and the value of proposed remedies from disparate disciplines, especially computer ethics and privacy by design. It concludes that policy should not accede to the idea that some systems are of necessity inscrutable. 
  • Lakkaraju, H., & Bastani, O. (2020). “How do I fool you?” Manipulating user trust via misleading black box explanations. In A. Markham, J. Powles, T. Walsh, & A. Washington (Eds.), Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 79-85). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375833
    • This paper explores how explanations of black box machine learning models can mislead users. To this end, the paper proposes a theoretical framework for understanding when misleading explanations can exist, demonstrates an approach for generating potentially misleading explanations, and conducts a user study with experts from law and criminal justice to understand how misleading explanations impact user trust.
  • Miller, T. (2019).* Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38.
    • This paper argues that the field of explainable artificial intelligence can build on existing research in the social sciences, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology that study explanation. It draws out some important findings and discusses ways that these can be infused into work on explainable artificial intelligence.
  • Mittelstadt, B., et al. (2019). Explaining explanations in AI. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 279-288).
    • This article analyzes the increased focus on building simplified models that help explain how artificial intelligence systems make decisions. It then compares how models and their explanations are distinguished in the fields of sociology and philosophy. Finally, the authors argue that creating such models may not be necessary, and that a broader approach could be utilized instead.
  • Molnar, C. (2019). Interpretable machine learning. Leanpub. 
    • This book provides a guide for making black box models explainable to the average person. It provides an overview of the concept of interpretability and outlines simple interpretable models. Then, the book discusses methods for interpreting black box models.  
  • Nissenbaum, H. (1996).* Accountability in a computerized society. Science and Engineering Ethics, 2(1), 25-42.
    • This essay warns of eroding accountability in computerized societies and argues that assumptions about computing and features of situations in which computers are produced create barriers to accountability. Drawing on philosophical analyses of moral blame and responsibility, the article identifies four barriers in particular: 1) the problem of many hands, 2) the problem of bugs, 3) blaming the computer, and 4) software ownership without liability. The paper concludes with ideas on how to reverse this trend.
  • Pasquale, F. (2019). The second wave of algorithmic accountability. Law and Political Economy Project. https://lpeproject.org/blog/the-second-wave-of-algorithmic-accountability/
    • This article describes two distinct waves in algorithmic accountability discourse. The first wave involves accountability research and activism that target existing systems, such as demonstrating that facial recognition tools contain racial biases. The second wave aims to address more structural concerns and query whether certain systems, especially those that have harmful social and economic consequences, should be used at all.
  • Pearson, S. (2011). Toward accountability in the cloud. IEEE Internet Computing, 15(4), 64-69.
    • This article suggests that accountability will become a central concept in the cloud and in new mechanisms meant to increase trust in cloud computing. The article then argues that a contextual approach must be applied, and a one-size-fits-all system avoided.
  • Reisman, D., et al. (2018).* Algorithmic impact assessments: A practical framework for public agency accountability. AI Now Institute.
    • This report proposes an Algorithmic Impact Assessment (AIA) framework designed to support affected communities and stakeholders as they seek to assess the claims made about these systems and determine where and if their use is acceptable. The report outlines the five key elements of the framework and argues that implementing this framework will help public agencies achieve four key policy goals.
  • Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206-215. https://doi.org/10.1038/s42256-019-0048-x
    • This article contends that the current trend of attempting to explain the behavior and decisions of black box, meaning opaque, machine learning models is deeply flawed and potentially harmful. The article supports this contention by drawing on examples from healthcare, criminal justice, and computer vision, and proceeds to offer an alternative approach: building models that are not opaque, but inherently interpretable.
  • Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data & Society, 4(2). https://doi.org/10.1177/2053951717736335
    • This paper argues that just as a conception of justice is needed to underpin the rule of law, a conception of data justice must be established. Data justice would require fairness in the way people are represented as a result of digital data production. Taylor proposes three pillars of international data justice.
  • Wachter, S., & Mittelstadt, B. (2019).* A right to reasonable inferences: re-thinking data protection law in the age of big data and AI. Columbia Business Law Review, 494.
    • This article argues that Big Data analytics and artificial intelligence tend to make non-intuitive and unverifiable inferences about individual people. Big Data and AI rely on data of questionable value, which creates new opportunities for discrimination, and the legal status of these decisions remains contested. Wachter and Mittelstadt propose a new legal right to address this problem: a data protection right to reasonable inferences.
  • Weitzner, D. J., et al. (2007).* Information accountability. Technical Report MIT-CSAIL-TR-2007-034, MIT.
    • This paper argues that debates over online privacy, copyright, and information policy questions have been overly dominated by the access restriction perspective. The paper proposes an alternative to the “hide it or lose it” approach that currently characterizes policy compliance on the Web. The alternative proposed is to design systems that are oriented toward information accountability and appropriate use, rather than information security and access restriction.
  • Zhou, Y., & Danks, D. (2020). Different “intelligibility” for different folks. In A. Markham, J. Powles, T. Walsh, & A. Washington (Eds.), Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 194-199). https://doi.org/10.1145/3375627.3375810
    • This paper argues that model intelligibility (often called interpretability or explainability) is neither a one-size-fits-all nor an intrinsic property of a system; instead, it depends on individuals’ characteristics, preferences, and needs. The paper proposes a taxonomy of different types of intelligibility, each of which requires the provision of different types of information to users.

Chapter 10. Transparency (Nicholas Diakopoulos)⬆︎

  • Alloa, E. (2018). Transparency: A magic concept of morality. In E. Alloa & D. Thomä (Eds.), Transparency, Society and Subjectivity: Critical Perspectives (pp. 31–32). Palgrave Macmillan.
    • This book critically engages with the idea of transparency, the ubiquitous demand for which stands in stark contrast to its lack of conceptual clarity. The book carefully examines the notion in its own right, traces its emergence in Early Modernity, and analyzes its omnipresence in contemporary rhetoric.
  • Ananny, M. (2016).* Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Science, Technology, & Human Values, 41(1), 93-117.
    • This paper develops a definition of networked information algorithms (NIAs) as assemblages of institutionally situated code, practices, and norms with the power to create, sustain, and signify relationships among people and data through minimally observable, semi-autonomous action. Starting from Merrill’s prompt to see ethics as the study of “what we ought to do,” the paper examines the ethical dimensions of contemporary NIAs. Specifically, it develops an empirically grounded, pragmatic ethics of algorithms by tracing an algorithmic assemblage’s power to convene constituents, suggest actions based on perceived similarity and probability, and govern the timing and timeframes of ethical action.
  • Ananny, M., & Crawford, K. (2018).* Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973-989.
    • This article critically interrogates the ideal of transparency, tracing some of its roots in scientific and sociotechnical epistemological cultures, and presents ten limitations to its application. The article argues that transparency is inadequate for understanding and governing algorithmic systems and sketches an alternative typology of algorithmic accountability grounded in constructive engagements with the limitations of transparency ideals.
  • Blacklaws, C. (2018). Algorithms: Transparency and accountability. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128). https://doi.org/10.1098/rsta.2017.0351
    • This opinion piece explores the issues of accountability and transparency in relation to the growing use of machine learning algorithms. Citing the recent work of the Royal Society and the British Academy, it looks at the legal protections for individuals afforded by the EU General Data Protection Regulation and asks whether the legal system will be able to adapt to rapid technological change. It concludes by calling for continuing debate that is itself accountable, transparent and public.
  • Brkan, M. (2019). Do algorithms rule the world? Algorithmic decision-making and data protection in the framework of the GDPR and beyond. International Journal of Law and Information Technology, 27(2), 91-121.
    • This article analyzes the rules of the General Data Protection Regulation (GDPR) and the Directive on Data Protection in Criminal Matters on automated decision-making, and explores how to ensure the transparency of such decisions, in particular those taken with the help of algorithms. While the Directive does not seem to give the data subject the possibility of familiarizing herself with the reasons for such a decision, the GDPR obliges the controller to provide the data subject with ‘meaningful information about the logic involved’ (Articles 13(2)(f), 14(2)(g), and 15(1)(h)), raising the much-debated question of whether the data subject should be granted a ‘right to explanation’ of the automated decision. The article goes beyond the semantic question of whether this right should be designated the ‘right to explanation’ and argues that the GDPR obliges the controller to inform the data subject of the reasons why an automated decision was taken.
  • Cath, C. (2018).* Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0080
    • This paper is the introduction to the special issue entitled “Governing artificial intelligence: ethical, legal and technical opportunities and challenges.” The issue addresses how AI can be designed and governed to be accountable, fair and transparent. Eight authors present in-depth analyses of the ethical, legal-regulatory and technical challenges posed by developing governance regimes for AI systems.
  • Citron, D. K., & Pasquale, F. (2014).* The scored society: Due process for automated predictions. Washington Law Review, 89(1), 1–35.
    • This paper argues that procedural regularity is essential for those stigmatized by artificially intelligent scoring systems and that the American due process tradition should inform basic safeguards in this regard. It argues that regulators should be able to test scoring systems to ensure their fairness and accuracy and that individuals should be given meaningful opportunities to challenge adverse decisions based on scoring systems. 
  • Coglianese, C., & Lehr, D. (2019). Transparency and algorithmic governance. Administrative Law Review, 71(1), 1–56.
    • This paper argues that the black-box nature of some machine learning algorithms does not pose a legal obstacle to their use by government authorities. Legal standards of transparency are weaker than users might expect, and there is an important distinction between predictions that are determinative of final actions and those that are not. Most government applications of machine learning are not determinative: they help inform decisions but do not dictate the final outcome. This supporting role minimizes the risk of harm.
  • D’Amour, A., et al. (2020). Fairness is not static: Deeper understanding of long term fairness via simulation studies. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 525–534). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372878
    • This paper highlights the shortfalls of typical approaches to ensuring algorithmic fairness across different populations. The main result provides evidence that policies which initially achieve fairness in a short-term static setting fail to do so in the long term. The authors design a new software package to simulate dynamic interactions between a machine learning model’s predictions and the populations its decisions affect (a minimal sketch of such a feedback loop follows this chapter’s list). The overall message is that the real-world deployment of machine learning models differs greatly from the static supervised learning settings in which their performance is typically evaluated for the sake of convenience, and this discrepancy must be addressed to avoid unintended consequences.
  • de Fine Licht, J. (2014). Magic wand or Pandora’s Box? How transparency in decision making affects public perceptions of legitimacy. University of Gothenburg.
    • This dissertation identifies four main mechanisms that might explain positive effects of transparency on public acceptance and trust: that transparency enhances policy decisions, which indirectly makes people more trusting; that transparency is generally perceived to be fairer than secrecy; that transparency increases public understanding of decisions and decision makers; and that transparency increases public feelings of accountability. The dissertation builds on five scenario-based experiments, each manipulating different degrees and versions of transparency for individual policy-level decisions. It concludes that transparency might have the power to increase public perceptions of legitimacy, but also that the effect is more complex than often presumed.
  • de Fine Licht, K., & de Fine Licht, J. (2020). Artificial intelligence, transparency, and public decision-making. AI & Society. https://doi.org/10.1007/s00146-020-00960-w
    • This paper discusses how transparency in and about AI decision-making can affect the public’s perception of the legitimacy of decisions and decision-makers, and develops a framework for analyzing these questions. The paper argues that a limited form of transparency that focuses on providing justifications for decisions has the potential to provide sufficient ground for perceived legitimacy without producing the harms full transparency would bring.
  • De Laat, P. B. (2018). Algorithmic decision-making based on machine learning from Big Data: Can transparency restore accountability? Philosophy & Technology, 31(4), 525-541. https://doi.org/10.1007/s13347-017-0293-z
    • The author of this paper takes a comprehensive approach to understanding both the limitations of transparency, and reasons for its potential impracticality. Full transparency implies exposing sensitive data and creating a potential route for users to exploit the system; for example, loan applicants modifying their features to achieve more favorable credit ratings. The author argues that there is a tradeoff between accuracy and interpretability, and that reasonable decreases in accuracy are justified when achieving interpretability. The paper concludes that only oversight bodies should have access to full algorithmic transparency in order to avoid privacy concerns and to protect competition in the private sector.
  • Diakopoulos, N., & Koliska, M. (2017).* Algorithmic transparency in the news media. Digital Journalism, 5(7), 809–828.
    • This research presents a focus group study that engaged 50 participants across the news media and academia to discuss case studies of algorithms in news production and elucidate factors that are amenable to disclosure. The results indicate numerous opportunities to disclose information about an algorithmic system across layers such as the data, model, inference, and interface. The authors argue that the findings underscore the deeply entwined roles of human actors in such systems, as well as challenges to the adoption of algorithmic transparency, including the dearth of incentives for organizations and the concern about overwhelming end-users with a surfeit of transparency information.
  • Diakopoulos, N. (2015).* Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398–415.
    • This paper studies the notion of algorithmic accountability reporting as a mechanism for elucidating and articulating the power structures, biases, and influences that computational artifacts exercise in society. The paper proffers a framework for algorithmic power based on autonomous decision-making and motivates specific questions about algorithmic influence. The article analyzes five cases of algorithmic accountability reporting involving the use of reverse engineering methods in journalism to provide insight into the method and its application in a journalism context. 
  • Eslami, M., et al. (2019). User attitudes towards algorithmic opacity and transparency in online reviewing platforms. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–14). Association for Computing Machinery.
    • This paper focuses on a case study in which the algorithmic opacity of the Yelp review filtering mechanism was revealed to users writing reviews. Reactions split into two camps, challengers and defenders, with users questioning the existence and operation of the algorithm outnumbering those who defended it. Users’ defense of the algorithm tracked their level of engagement and the impact the algorithm had on their lives. As users were made aware of the algorithm’s existence and its inner workings, some wanted to leave the platform altogether because of perceived deception.
  • Fenster, M. (2015). Transparency in search of a theory. European Journal of Social Theory, 18(2), 150–167.
    • This article argues that transparency is best understood as a theory of communication that excessively simplifies, and is thus blind to, the complexities of the contemporary state, government information, and the public. Taking these fully into account, the article argues, should lead us to question the state’s ability to control information, and in turn to recognize the improbability not only of the state making itself fully visible but also of the state keeping itself fully secret.
  • Flyverbom, M. (2019). The Digital Prism. Cambridge University Press.
    • This book shows how the management of our digital footprints, visibilities, and attention is a central force in the digital transformation of societies and politics. Seen through the prism of digital technologies and data, the lives of people and the workings of organizations take new shapes in our understanding. To make sense of these, the book argues, we must push beyond common ways of thinking about transparency and surveillance and examine how managing visibility is a central but overlooked phenomenon that influences how people live, how organizations work, and how societies and politics operate.
  • Fox, J. (2007).* The uncertain relationship between transparency and accountability. Development in Practice, 17(4–5), 663–671.
    • This article questions the widely held assumption that transparency generates accountability. It argues that transparency mobilizes the power of shame, yet the shameless may not be vulnerable to public exposure; truth often fails to lead to justice. After exploring different definitions and dimensions of the two ideas, the article focuses on the question of what kinds of transparency lead to what kinds of accountability, and under what conditions. It concludes by proposing that each concept can be unpacked into two distinct variants: transparency can be either ‘clear’ or ‘opaque’, while accountability can be either ‘soft’ or ‘hard’.
  • Fung, A., et al. (2007).* Full disclosure: The perils and promise of transparency. Cambridge University Press.
    • Based on a comparative analysis of eighteen major targeted transparency policies, the authors suggest that transparency policies often produce information that is incomplete, incomprehensible, or irrelevant to the consumers, investors, workers, and community residents who could benefit from them. The authors show that transparency sometimes fails because those who are threatened by it form political coalitions to limit or distort information. They argue that to be successful, transparency policies must place the needs of ordinary citizens at center stage and produce information that informs their everyday choices.
  • Garfinkel, S., et al. (2017). Toward algorithmic transparency and accountability. Communications of the ACM, 60(9), 5. https://doi.org/10.1145/3125780
    • This letter lays out seven principles for ensuring fairness in an evolving ecosystem where decisions are increasingly outsourced to algorithms. It aims to enable both the self-regulation of organizations and outside regulation by policymakers by setting a standard for deployed automated decision systems. It also serves as a guideline for engineers designing new systems to ensure they are explainable and auditable.
  • Hansen, H. (2015). Numerical operations, transparency illusions and the datafication of governance. European Journal of Social Theory, 18(2), 203–220.
    • This article analyzes the forms of transparency produced by the use of numbers in social life. It examines what it is about numbers that often makes their ‘truth claims’ so powerful, investigates the role that numerical operations play in the production of retrospective, real-time, and anticipatory forms of transparency in contemporary politics and economic transactions, and discusses some of the implications of the increasingly abstract and machine-driven use of numbers. It argues that the forms of transparency generated by machine-driven numerical operations shape individual and collective practices in ways intimately linked to the precautionary and pre-emptive aspirations and interventions characteristic of contemporary governance.
  • Hood, C. (2010). Accountability and transparency: Siamese twins, matching parts, awkward couple? West European Politics, 33, 989–1009.
    • This paper contrasts three possible ways of thinking about the relationship between accountability and transparency as principles of governance: as ‘Siamese twins’, not really distinguishable; as ‘matching parts’ that are separable but nevertheless complement one another smoothly to produce good governance; and as ‘awkward couple’, involving elements that are potentially or actually in tension with one another. It then identifies three possible ways in which we could establish the accuracy or plausibility of each of those three characterizations. 
  • Kizilcec, R. (2016). How much information? Effects of transparency on trust in an algorithmic interface. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 2390–2395). Association for Computing Machinery. https://doi.org/10.1145/2858036.2858402
    • This paper reports a study of the relationship between user trust in an algorithmic interface and three different levels of transparency. In the peer-assessment task studied, trust in the system was reduced when a user’s received score fell short of their expectations, and trust was recovered as the review process and score justifications were made more transparent. There were, however, diminishing returns: too much justification resulted in lower trust. Lastly, user trust was unaffected when expectations were met, suggesting a confirmation bias and a need for transparency only when there is a discrepancy between expectations and reality.
  • Koene, A., et al. (2019). A governance framework for algorithmic accountability and transparency. European Parliamentary Research Service. https://doi.org/10.2861/59990
    • This report recognizes the role that algorithms play in enabling high-throughput and fast decisions, as well as their ability to process quantities of data that are beyond human comprehension. It also raises awareness that, in high-stakes settings such as the deployment of autonomous vehicles, auditing and accountability are crucial for limiting significant health and safety risks. To address the concern that machine learning systems are designed without the consequences of their predictions in mind, the authors review the current literature and propose four policy options designed to comprehensively address the need for transparency.
  • Meijer, A., et al. (2014). Transparency. In M. Bovens, R. E. Goodin, & T. Schillemans (Eds.), The Oxford Handbook of Public Accountability. Oxford University Press.
    • This chapter opens up the “black box” of the relation between transparency and accountability by examining the expanding body of literature on government transparency. Three theoretical relations between transparency and accountability are identified: transparency facilitates horizontal accountability; transparency strengthens vertical accountability; and transparency reduces the need for accountability. Reviewing studies into the relation between transparency and accountability, this chapter argues that under certain conditions and in certain situations, transparency may contribute to accountability: transparency facilitates accountability when it actually presents a significant increase in the available information, when there are actors capable of processing the information, and when exposure has a direct or indirect impact on the government or public agency.
  • Mittelstadt, B. D., et al. (2016).* The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679
    • This paper makes three contributions to clarify the ethical importance of algorithmic mediation: it provides a prescriptive map to organise the debate, reviews the current discussion of ethical aspects of algorithms, and assesses the available literature to identify areas requiring further work to develop the ethics of algorithms.
  • Springer, A., & Whittaker, S. (2020). Progressive disclosure: When, why, and how do users want algorithmic transparency information? ACM Transactions on Interactive Intelligent Systems, 10(4), 1–32. https://doi.org/10.1145/3374218
    • This article investigates the effects of making algorithmic decisions more transparent, to determine how users react to them. The authors demonstrate that complete transparency is not always beneficial, particularly when users are made aware of errors in a way that undermines their positive perception of the system’s accuracy. Additionally, the experiments demonstrate that user perceptions of a system that provides detailed feedback and one that does not can be quite different, even if the two systems are functionally the same. 
  • Turilli, M., & Floridi, L. (2009).* The ethics of information transparency. Ethics and Information Technology, 11(2), 105–112.
    • The paper argues that transparency is not an ethical principle in itself but a pro-ethical condition that enables or impairs other ethical practices or principles, and it offers a new definition of transparency that takes into account the dynamics of information production and the differences between data and information. The paper further defines the concepts of “heterogeneous organization” and “autonomous computational artefact” in order to clarify the ethical implications of the technology used in implementing information transparency. It argues that explicit ethical designs, which describe how ethical principles are embedded into the practice of software design, would represent valuable information that organisations could disclose in order to support their ethical standing.
  • Watson, H., & Nations, C. (2019). Addressing the growing need for algorithmic transparency. Communications of the Association for Information Systems, 45, 488–510. https://doi.org/10.17705/1CAIS.04526
    • This paper examines the privacy/convenience tradeoff that has arisen from the collection of personal data used to train algorithms that make personalized recommendations. The authors differentiate between three types of recommendations based on their level of perceived “creepiness,” with recommendations such as movie suggestions deemed helpful and attempts to influence users’ worldviews deemed ethically wrong. The paper also references other important works showing that, although algorithms can streamline decision making, they can also increase inequality and even threaten democracy.
  • Webb, H., et al. (2019). ‘It would be pretty immoral to choose a random algorithm’: Opening up algorithmic interpretability and transparency. Journal of Information, Communication & Ethics in Society, 17(2), 210–228. https://doi.org/10.1108/JICES-11-2018-0092
    • This study revolves around the task of matching students to preferred courses based on utility values they provide for each course. Algorithms were designed using different utility-maximization criteria, and students were asked to choose a least and a most preferred algorithm and to explain their choices. Two variations of the experiment were run: one in which the explanations given for each algorithm were just numerical summaries of the utilities it attained, and another in which additional written explanations of each algorithm’s optimization criteria were provided. There was no consensus among the participants regarding the best and worst algorithms, and participants sometimes changed their answers between the two versions even though nothing about the underlying algorithms had changed.
  • Westbrook, L., et al. (2019). Real-time data-driven technologies: Transparency and fairness of automated decision-making processes governed by intricate algorithms. Contemporary Readings in Law and Social Justice, 11(1), 45–50.
    • Drawing on recent research on real-time data-driven technologies, this paper estimates the share of Facebook users who say they have no, a little, or a lot of control over the content that appears in their newsfeed, and the share of social media users who say it is acceptable for social media sites to use data about them and their online activities to recommend events in their area, recommend someone they might want to know, show them ads for products and services, or show them messages from political campaigns (broken down by age group). The paper uses structural equation modeling to analyze the collected data.
  • Zerilli, J., et al. (2019). Transparency in algorithmic and human decision-making: Is there a double standard? Philosophy & Technology, 32, 661–683. https://doi.org/10.1007/s13347-018-0330-6
    • This paper reviews evidence demonstrating that much human decision-making is fraught with transparency problems, shows in what respects AI fares little worse or better, and argues that at least some regulatory proposals for explainable AI could end up setting the bar higher than is necessary or indeed helpful. The article asserts that demands of practical reason require the justification of action to be pitched at the level of practical reason, and decision tools that support or supplant practical reasoning should not be expected to aim higher than this. This paper casts this desideratum in terms of Daniel Dennett’s theory of the “intentional stance” and argues that since the justification of action for human purposes takes the form of intentional stance explanation, the justification of algorithmic decisions should take the same form.
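
As a companion to the D’Amour et al. entry above, here is a minimal sketch of the kind of feedback loop their simulation studies examine. It is not the authors’ simulation package; the lending scenario, score distributions, and parameters (`repay_boost`, `default_drop`) are invented for illustration.

```python
# Hypothetical feedback-loop simulation: a single approval threshold that looks
# "fair" at the start can produce shifting group outcomes over time, because
# repayment and default feed back into each group's score distribution.
import random

def simulate(rounds=20, threshold=0.5, repay_boost=0.05, default_drop=0.10):
    # Two groups with different (invented) starting score distributions.
    groups = {"A": [random.gauss(0.55, 0.1) for _ in range(1000)],
              "B": [random.gauss(0.45, 0.1) for _ in range(1000)]}
    for t in range(rounds):
        for name, scores in groups.items():
            for i, s in enumerate(scores):
                if s >= threshold:                  # applicant is approved
                    repaid = random.random() < s    # higher score, likelier repayment
                    scores[i] = (min(1.0, s + repay_boost) if repaid
                                 else max(0.0, s - default_drop))
        approval = {n: sum(s >= threshold for s in sc) / len(sc)
                    for n, sc in groups.items()}
        print(f"round {t:2d} approval rates: {approval}")

simulate()
```

Running this shows the per-group approval rates drifting round over round even though the threshold never changes, which is the dynamic effect the paper argues a one-shot static evaluation misses.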

Chapter 11. Responsibility and Artificial Intelligence (Virginia Dignum)⬆︎

  • Ashrafian, H. (2015). Artificial intelligence and robot responsibilities: Innovating beyond rights. Science and Engineering Ethics, 21(2), 317–326. https://doi.org/10.1007/s11948-014-9541-0
    • This paper aims to examine AI rights beyond the context of commensurate responsibilities and duties, using philosophical perspectives. Comparisons are made to arguments surrounding the moral rights of animals, and AI rights are also analyzed in regard to legal principles. Ashrafian argues that core tenets of humanity should be promoted in the development of AI rights.
  • Boden, M., et al. (2017).* Principles of robotics: Regulating robots in the real world. Connection Science, 29(2), 124–129. https://doi.org/10.1080/09540091.2016.1271400
    • This article outlines a framework of five ethical principles and seven high-level messages for responsible robotics.
  • Brożek, B., & Jakubiec, M. (2017). On the legal responsibility of autonomous machines. Artificial Intelligence and Law, 25(3), 293–304. https://doi.org/10.1007/s10506-017-9207-8
    • This article examines the question of whether autonomous machines can be seen as agents who bear legal responsibility. The authors argue that, although conceptually possible, such machines should not be granted the status of legal agents, at least at their current stage of development.
  • Chockler, H., & Halpern, J. Y. (2004). Responsibility and blame: A structural-model approach. Journal of Artificial Intelligence Research, 22(1), 93–115. https://www.aaai.org/Papers/JAIR/Vol22/JAIR-2204.pdf
    • This article argues for extending the definition of causality to include a notion of degree of responsibility (see the sketch after this chapter’s list). The authors also outline the concept of degree of blame, which accounts for the epistemic state of a given agent in a causal chain, and argue that degree of responsibility can act as a rough indicator of degree of blame.
  • Cranefield, S., Oren, N., & Vasconcelos, W. W. (2018). Accountability for practical reasoning agents. In International Conference on Agreement Technologies (pp. 33-48). Springer. https://doi.org/10.1007/978-3-030-17294-7_3
    • This article begins by discussing the concept of “accountable autonomy” in light of the rise of practical-reasoning AI, drawing on research from a range of fields, including public policy, health, and management, to clarify the term. It then lists requirements for accountable autonomous agents and poses research questions that follow from those requirements. The authors conclude by proposing responsibility as a new core feature of accountability.
  • Dignum, V. (2017).* Responsible autonomy. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI’2017) (pp. 4698–4704). https://doi.org/10.24963/ijcai.2017/655
    • This article discusses leading ethical theories for ensuring ethical behavior by artificial intelligence systems and proposes alternatives to the traditional methods. Dignum argues that methodologies must be employed to uncover the values of both designers and stakeholders in order to create understanding of, and trust in, AI systems.
  • Dignum, V. (2018).* Ethics in artificial intelligence: introduction to the special issue. Ethics and Information Technology, 20, 1–3. https://doi.org/10.1007/s10676-018-9450-z
    • This introduction provides an overview on the ethical impact of artificial intelligence, briefly summarizing the aims of the papers contained in the special issue.
  • Dignum, V. (2019). Responsible Artificial Intelligence: How to develop and use AI in a responsible way. Springer International Publishing.
    • Dignum considers the implications of AI’s rise for traditional social structures, including issues of integrity surrounding those who build and operate AI. Dignum also provides an overview of related work and further reading in the field of ethical issues in modern algorithmic systems.
  • Dodig-Crnkovic, G., & Persson, D. (2008). Sharing moral responsibility with robots: A pragmatic approach. In P. K. Holst & P. Funk (Eds.), Frontiers in Artificial Intelligence and Applications Volume 173. IOS Press Books. https://doi.org/10.3233/978-1-58603-867-0-165
    • This article outlines an approach to roboethics that argues for moral responsibility of AI as a pragmatic, social regulatory mechanism. Because individual artificial intelligences perform tasks differently, they can in some sense be responsible for outcomes. The authors argue that the development of this social regulatory mechanism requires ethical training for engineers as well as democratic debate on what is best for society.
  • Eisenhardt, K. M. (1989).* Agency theory: An assessment and review. The Academy of Management Review, 14(1), 57–74. http://www.jstor.org/stable/258191?origin=JSTOR-pdf
    • This paper provides a definition and analysis of agency theory. Eisenhardt draws two conclusions: first, that agency theory provides insight into information systems, outcome uncertainty, incentives, and risk; and second, that agency theory has empirical value, especially when used with complementary perspectives. Eisenhardt recommends that agency theory be used to combat problems stemming from cooperative structures.
  • Floridi, L. (2016).* Should we be afraid of AI? Aeon Essays.
    • This essay addresses concerns, expressed by tech CEOs and consumers alike, that the development of super-intelligent AI could spell disaster for the human race. The current reality is much more trivial, with AI merely absorbing what humans put into it. Floridi argues that we need to focus on concrete problems with AI rather than sci-fi scenarios.
  • Floridi, L., & Sanders, J. (2004).* On the morality of artificial agents. Minds and Machines, 14(3), 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d
    • This article offers a definition of the term agent, and highlights the concerns and responsibilities attributed to different types of agents, particularly artificial agents. The authors conclude by arguing that there is room in computer ethics for the concept of a moral agent that lacks free will, mental states, and/or responsibility.
  • Floridi, L., et al. (2018).* AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28, 689-707. https://doi.org/10.1007/s11023-018-9482-5
    • This article discusses the findings of AI4People, a study which aimed to lay the foundations for a Good AI Society. The authors introduce core opportunities and drawbacks of AI for society, laying out five ethical principles that should be considered in AI development. They also offer 20 recommendations for assessing, developing, and incentivizing the creation of good AI.
  • Gotterbarn, D.W., et al. (2018).* ACM code of ethics: A guide for positive action. Communications of the ACM, 61(1), 121-128.
    • This article presents the first update to the Association for Computing Machinery’s code of ethics since 2003, incorporating feedback from email, focus groups, and workshops. The update is significant: some principles from the earlier version were removed entirely, and new principles were added.
  • Leikas, J., et al. (2019). Ethical framework for designing autonomous intelligent systems. Journal of Open Innovation: Technology, Market, and Complexity, 5(1), 18. https://doi.org/10.3390/joitmc5010018
    • This article reviews existing ethical principles and analyzes them in terms of their application to artificial intelligence. It then presents an original ethical framework for AI design.
  • Pelea, C. I. (2019). The relationship between artificial intelligence, human communication and ethics. A futuristic perspective: Utopia or dystopia? Media Literacy and Academic Research2(1), 38-48.
    • This article examines the question of whether and to what extent our social parameters of communication will need to be re-drawn because of the rise of artificial intelligence. Pelea first discusses how humans and AI communicate on an individual level. Second, she investigates the collective social anxiety surrounding the rise of AI and the ethical dilemmas this creates. Pelea argues that it is vital that we undertake the challenge of creating a culture of social responsibility surrounding AI.
  • Russell, S., & Norvig, P. (2009).* Artificial intelligence: A modern approach (3rd ed.). Pearson Education.
    • This textbook provides an introduction to the theory and practice of artificial intelligence that is comprehensive and up to date.
  • Stone, P., et al. (2016).* Artificial intelligence and life in 2030: Report of the 2015–2016 Study Panel. Stanford University.
    • Launched in 2014, the One Hundred Year Study on Artificial Intelligence aims to provide a long-term investigation of AI and its effects on social groups and society at large. This first study to come out of the project discusses ways to frame the project in light of recent advances in AI technology, specifically in the public sector.
  • Saariluoma, P., & Leikas, J. (2019). Ethics in designing intelligent systems. International Conference on Human Interaction and Emerging Technologies, 1018, 47-52. Springer. https://doi.org/10.1007/978-3-030-25629-6_8
    • Hume’s guillotine, the thesis that one can never derive values from facts, suggests that artificial intelligence systems can never be ethical, since they operate on facts. The authors argue that Hume’s distinction between facts and values is not well founded, as ethical systems are composed of rules meant to guide actions, which combine both facts and values. While machines can be built to process ethical information, the authors argue that human input remains vital at this point in time.
  • Turiel, E. (2002).* The culture of morality: Social development, context, and conflict. Cambridge University Press.
    • Turiel challenges the common view that extreme individualism and a subsequent lack of community involvement are responsible for the moral crisis in American society, drawing on research from developmental psychology, anthropology, and sociology. Turiel argues that each subsequent generation has attributed decline in society to the actions of young people.
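
The degree-of-responsibility measure in the Chockler and Halpern entry above has a crisp form in their majority-voting example: a voter’s degree of responsibility for an outcome is 1/(k+1), where k is the minimal number of other votes that would have to change before that voter’s own vote became pivotal. A minimal sketch (the function name and encoding are ours):

```python
# Degree of responsibility (Chockler & Halpern 2004) for majority voting with
# an odd number of voters: 1 / (k + 1), where k is the minimal number of other
# votes that must flip before this voter's vote becomes critical to the outcome.
def degree_of_responsibility(votes, voter):
    """votes: list of bools, True = voted for the winning side."""
    if not votes[voter]:
        return 0.0                           # voted against the outcome
    margin = 2 * sum(votes) - len(votes)     # winner votes minus loser votes
    k = (margin - 1) // 2                    # each flipped vote shrinks the margin by 2
    return 1 / (k + 1)

# 6-5 vote: every majority voter is already pivotal, so responsibility is 1.
print(degree_of_responsibility([True] * 6 + [False] * 5, 0))   # 1.0
# 11-0 vote: five other votes must flip first, so responsibility is 1/6.
print(degree_of_responsibility([True] * 11, 0))                # 0.1666...
```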

Chapter 12. The Concept of Handoff as a Model for Ethical Analysis and Design (Deirdre K. Mulligan and Helen Nissenbaum)⬆︎

  • Akrich, M., & Latour, B. (1992).* A summary of a convenient vocabulary for the semiotics of human and nonhuman assemblies. In W. Bijker & J. Law (Eds.), Shaping technology/building society: Studies in sociotechnical change (pp. 259–264). MIT Press.
    • Structured as a dictionary list illuminated by examples, this article provides a comprehensive semiotic vocabulary for engagement with the topic of human and non-human assemblies. The authors explore the continuum between human and non-human through the description of all as actants, placed into specific categories by framing paradigms. Particular emphasis is placed on the role of observer, context, and perspective in subjective understandings of object, relation, interaction, function, and purpose.  
  • Bansal, K., et al. (2019). HOList: An environment for machine learning of higher order logic theorem proving. In International Conference on Machine Learning (pp. 454-463). PMLR. http://proceedings.mlr.press/v97/bansal19a.html
    • This paper presents a machine-learning-oriented, open-source environment for higher-order theorem proving, as well as a neural-network-based automated prover trained with a large-scale reinforcement learning system. The authors also propose a benchmark for machine reasoning in higher-order logic; the benchmark includes purely neural-network-based baselines that demonstrate strong automated reasoning capabilities, including premise selection from a relatively large and practically relevant corpus of theorems of varying complexity.
  • Barr, N., et al. (2015). The brain in your pocket: Evidence that smartphones are used to supplant thinking. Computers in Human Behavior, 48, 473–480. https://doi.org/10.1016/j.chb.2015.02.029
    • Examining a familiar but perhaps not fully understood example of task handoff, this paper discusses findings that people offload some thinking to technology. To adequately characterize human experience and cognition in the modern era, psychology must understand the meshing of mind and media. The authors report three studies finding that those who think more intuitively and less analytically when given reasoning problems were more likely to rely on their smartphones (i.e., the extended mind) for information in their everyday lives.
  • Borenstein, J., & Arkin, R. (2016). Robotic nudges: The ethics of engineering a more socially just human being. Science and Engineering Ethics, 22(1), 31–46. https://doi.org/10.1007/s11948-015-9636-2
    • This paper engages with the ethics of “nudge” interactions between human actors and autonomous agents, asking whether it is permissible to design such machines to promote “socially just” tendencies in humans. Employing a Rawlsian “principles of justice” framework, the authors explore arguments for and against nudges more broadly and specifically analyze whether robotic nudges are morally or practically different from other kinds of decision architecture. They also put forth ethical principles for those seeking to design such systems.
  • Brownsword, R. (2011).* Lost in translation: Legality, regulatory margins, and technological management. Berkeley Technology Law Journal, 26(3), 1321–1365. https://www.jstor.org/stable/24118672
    • This article discusses the role of regulation and the law in the translation from a traditional legal order (wherein participants can act in a multitude of ways but are normatively constrained by legal rules) to a “technologically managed” order (wherein individuals are restricted to certain actions by the nature of the technology used to carry out those actions). The topic is explored through the lenses of a shift on the part of the regulated party from “moral” to “prudential” motivations for action, and further a shift on the part of the regulation from normative to non-normative purpose. 
  • Calo, R. (2015). Robotics and the lessons of cyberlaw. California Law Review, 103(3), 513–563. https://digitalcommons.law.uw.edu/faculty-articles/23
    • This article explores what the discipline of cyberlaw implies for robotics. Examining robotics as an “exceptional” technology with the potential to qualitatively and quantitatively shift socio-technical contexts, the author argues that cyberlaw (developed in response to the similarly “exceptional” technology of the internet) provides essential insights for responding to the challenges that robots introduce.
  • Cohen, J. E. (2006). Pervasively distributed copyright enforcement. Georgetown Law Journal, 95(1), 1–48. https://scholarship.law.georgetown.edu/facpub/808
    • This article discusses the impact of strategies of “pervasively distributed copyright enforcement,” whereby intellectual property rights holders seek to embed intellectual property enforcement functions within foundational communications networks, protocols, and devices. The author characterizes these attempts as a “hybrid regime” that neither aligns with centralized authority nor with distributed internalized norms. The author explores the observed and potential impacts of this “hybrid regime” on networked society.
  • Coglianese, C., & Lehr, D. (2016). Regulating by robot: Administrative decision making in the machine-learning era. Georgetown Law Journal, 105(5), 1147–1224. https://scholarship.law.upenn.edu/faculty_scholarship/1734
    • This paper engages in critical legal and ethical analysis of the present and future role of machine learning algorithms in decision-making by administrative bodies. The authors examine constitutional and administrative law challenges to the role of autonomous agents in this context, concluding that the use of such agents is likely to be legal but will be ethical only if certain important principles are adhered to.
  • Elish, M. C. (2019). Moral crumple zones: Cautionary tales in human-robot interaction. Engaging Science, Technology, and Society, 5, 40–60. https://doi.org/10.17351/ests2019.260
    • This paper explores the balance of ethical weight within sociotechnical systems through the concept of a “moral crumple zone.” This refers to human actors with ostensible authority (but little meaningful power) over a complex human-machine system who are set up to take disproportionate individual responsibility for failings in systemic structure and design. The author develops this concept by analyzing several high-profile accidents, their antecedent systemic structures, and the subsequent media portrayals of the actors involved. 
  • Flanagan, M., & Nissenbaum, H. (2014).* Values at play in digital games. MIT Press.
    • This book develops a theoretical and practical framework for critically identifying the moral and political values embedded within games. In framing a value-sensitive conception of digital games, the authors discuss how particular values can be incorporated within digital game design.
  • Friedman, B. (1996).* Value-sensitive design. Interactions, 3(6), 16–23. https://doi.org/10.1145/242485.242493
    • This article engages with the argument that values are always both embedded within and emergent from the ways in which tools are built and used. The author subsequently advocates for principles of “value-sensitive design,” wherein designers are explicitly called upon to engage actively and thoughtfully with these values and their implications. The topics of user autonomy and system bias serve as the primary case studies for exploring the concept.
  • Friedman, B., et al. (2017).* A survey of value sensitive design methods. Foundations and Trends in Human-Computer Interaction, 11(2), 63–125. https://doi.org/10.1561/1100000015
    • This article comprises a broad theoretical and methodological discussion of “value sensitive design” alongside a specific survey of 14 different methods for actualizing the concept.  The authors seek to evaluate each method for its role and usefulness in engaging with a particular aspect of “value sensitive design” in practice, as well as to offer general insights about the core characteristics of the concept of “value sensitive design” overall. 
  • Hasse, C. (2019). Posthuman learning: AI from novice to expert? AI & Society, 34, 355–364. https://doi.org/10.1007/s00146-018-0854-4 
    • This paper extends the claim that computers and robots will never be able to learn like humans (because human learning is uncertain, context-sensitive, and intuitive) by arguing that human learning builds upon prior learning within a sociocultural, materially grounded, and collective epistemology. The author states that humanlike AI learning is not possible unless, and until, machines ground their epistemologies in sociocultural materiality.
  • Lappin, S., & Shieber, S. M. (2007). Machine learning theory and practice as a source of insight into universal grammar. Journal of Linguistics, 43, 393–427.   https://doi.org/10.1017/S0022226707004628 
    • This paper examines whether and how machine learning approaches to natural language processing might provide specific insights into the nature of human language. The authors state that while it is uncontroversial that the learning of a natural language (or of anything else) requires assumptions concerning the structure of the phenomena being acquired, machine learning can have a role in demonstrating the viability of particular language models as learning mechanisms. To the extent that the bias of a successful model is defined by a comparatively weak set of language-specific conditions, the authors state, task-general machine learning methods might be drawn upon to explain the possibility of acquiring linguistic knowledge.
  • Hernández-Orallo, J., & Vold, K. (2019). AI extenders: The ethical and societal implications of humans cognitively extended by AI. In AIES ’19: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 507–513). Association for Computing Machinery. https://doi.org/10.1145/3306618.3314238 
    • Observing that there is a great deal of AI technology found in non-autonomous systems that are used as cognitive tools by humans, this paper considers that under the extended mind thesis, the functional contributions of these tools become essential to human cognition. This cognitive extension poses new philosophical, ethical, and technical challenges. To analyze these challenges, the authors define and place “AI extenders” on a continuum between fully externalized systems and fully internalized processes, where the extender becomes redundant within operations performed by the brain. Dissecting the cognitive capabilities that can foreseeably be extended by AI, and examining their potential ethical implications, the authors suggest that cognitive extenders using AI should be treated as distinct from other cognitive enhancers.
  • Huang, M.-H. & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155–172. https://doi.org/10.1177/1094670517752459  
    • Identifying categories of human-AI task handoff, this paper presents a theory of AI-human job replacement. The theory specifies four intelligences required for service tasks (mechanical, analytical, intuitive, and empathetic) and lays out ways that firms could decide how to assign specific tasks to humans and/or machines. The authors state that AI is developing in a predictable order, with mechanical task capacity mostly preceding analytical, analytical mostly preceding intuitive, and intuitive mostly preceding empathetic intelligence. AI first replaces some of a service job’s tasks, a transition stage seen in terms of augmentation, and in some cases then progresses to replace human labor entirely. The theory implies AI replacement of humans in certain tasks, with other tasks becoming sites of innovative human-machine integration.
  • Joh, E. E. (2016). Policing police robots. UCLA Law Review Discourse, 64, 516–543. https://www.uclalawreview.org/policing-police-robots/
    • This paper examines the potential impacts of artificially intelligent robots on policing through legal and ethical lenses. The author analyzes arguments in favor of and against the adoption of robots by police agencies, arguing that these case studies raise deeper questions about police decision-making that have not yet been systematically or effectively addressed.
  • Kroll, J. A. (2021). Outlining traceability: A principle for operationalizing accountability in computing systems. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 758-771). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445937
    • This article reframes existing discussions around traceability as a principle for operationalizing accountability in computing systems. Traceability requires establishing not only how a system works, but how and for what purpose it was created. Explaining why a system exhibits particular behaviors connects how a system was constructed to the broader goals of system governance in a way that highlights human understanding of a system’s mechanical operation and the decision processes underlying it. 
  • Lake, B. M., et al. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40, e253. https://doi.org/10.1017/S0140525X16001837
    • This paper suggests that in order to build machines that truly think and learn like people, developers must move beyond current engineering trends. Despite biological inspiration and performance achievements, the authors state, neural networks differ from human intelligence in crucial ways. The authors argue that learning systems should (1) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (2) ground learning in intuitive theories of physics and psychology to support and enrich the knowledge that is learned; and (3) harness compositionality and learning-to-learn elements to rapidly acquire and generalize knowledge to new tasks and situations.  
  • Latour, B. (1992).* Where are the missing masses? The sociology of a few mundane artifacts. In W. Bijker & J. Law (Eds.), Shaping technology/building society: Studies in sociotechnical change (pp. 225–258). MIT Press.
    • This chapter engages with the “technological determinism/social constructivism dichotomy” through the concept of the “actor network approach.” This approach seeks to emphasize the bidirectionality of the interactions between social actors and technological actors in sociotechnical systems, arguing that physical structure and design of the material world acts to shape and limit the boundaries of its social construction. With a focus upon “mundane artifacts,” the author explores the ways in which technologies act to influence the thoughts and decisions of human actors. 
  • Lessig, L. (2009).* Code: And other laws of cyberspace. Basic Books.
    • This book engages in a comprehensive discussion of the structure and regulation of the internet, with a focus upon the impact of the four forces of “Law, Norms, Market, and Architecture.” In particular, the author argues that the computer code which defines the structure and function of the internet acts to shape and regulate the conduct of its users in much the same way that traditional regulatory instruments such as legal codes do. 
  • Liu, J., et al. (2020). Time to transfer: Predicting and evaluating machine-human chatting handoff. arXiv preprint arXiv:2012.07610.
    • Addressing the question of how easily a trained chatbot might replace a human agent in human-algorithm task collaboration, this paper reports experimental results contrasting the efficacy of a proposed model for machine-human chatting handoff with a series of baseline models. The authors propose a Difficulty-Assisted Matching Inference network, using difficulty-assisted encoding to enhance the representations of utterances, and introduce a matching inference mechanism to capture contextual matching features. New datasets generated by this work point to future measurement of efficacy on the reverse task: handoff from the human agent to the machine.
  • Neff, G., & Nagy, P. (2016). Automation, algorithms, and politics | Talking to bots: Symbiotic agency and the case of Tay. International Journal of Communication, 10, 4915–4931. https://ijoc.org/index.php/ijoc/article/view/6277
    • This paper considers Tay, an experimental artificial intelligence chatbot that Microsoft launched in 2016. In Tay’s case, a group of organized users and a platform-specific culture turned code that functioned well in other contexts into an embarrassment for the designers who produced it; Tay learned from and echoed the obscene and inflammatory tweets that were fed into it. Using phenomenological research methods and pragmatic approaches to agency, the authors look at what users said about Tay to gauge how users imagine and interact with emerging technologies. This examination, the authors state, shows the limitations of current theories of agency for describing communication handoff in these settings. The authors argue that a perspective of “symbiotic agency,” informed by the imagined affordances of emerging technology, is required to understand human-algorithmic communication.
  • Radin, M. (2004).* Regulation by contract, regulation by machine. Journal of Institutional and Theoretical Economics, 160(1), 142–156. https://www.jstor.org/stable/40752447
    • The article concerns the impacts of mass standardized contracts and digital rights management systems on how property and contract law regulate intellectual property. The author examines the impacts of these technologies on the underlying knowledge-generation incentives of intellectual property, on the distinction between waivable rules and inalienable entitlements, and on the role of legislative approval of “regulation by machine.”
  • Radziwill, N., & Benton, M. (2017). Evaluating quality of chatbots and intelligent conversational agents. Software Quality Professional, 19(3), 25.
    • This paper provides an overview of academic literature since 1990 and industry articles since 2015 that gather and articulate quality attributes for chatbots and conversational agents, and it synthesizes quality assessment and assurance approaches. The authors propose and examine the Analytic Hierarchy Process (AHP) as a structured approach for navigating complex decision-making processes that involve both qualitative and quantitative considerations (a minimal sketch of the AHP weighting step follows this chapter’s list).
  • Schaub, G., Jr. (2019). Controlling the autonomous warrior: Institutional and agent-based approaches to future air power. Journal of International Humanitarian Legal Studies, 10(1), 184–202. https://doi.org/10.1163/18781527-01001007
    • Working through both institution-centric and agent-centric lenses, this article engages with the legal and ethical challenges posed by the handoff of lethal power to increasingly autonomous weapons systems. The author argues that artificial intelligence is not unprecedented in its ability to change the structure of warfare and contends that past work in understanding the ethical and legal relationships between principals and agents may be effectively adapted to characterizing and addressing these new challenges.
  • Shilton, K., et al. (2014).* How to see values in social computing: Methods for studying values dimensions. In CSCW ’14: Computer Supported Cooperative Work and Social Computing (pp. 426–435). https://terpconnect.umd.edu/~kshilton/pdf/ShiltonCSCW2014preprint.pdf
    • This article presents a framework for understanding the nature and role of values in sociotechnical systems. The authors advocate for the theoretical characterization of values upon a system of “source dimensions” (describing the origins of values) and “attribute dimensions” (describing the traits of values). In relation to this framework, the authors examine the effectiveness of different lenses, such as ethnographies and content analyses, by which to study values in social computing.
  • Surden, H. (2007).* Structural rights in privacy. SMU Law Review, 60(4), 1605-1632. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1004675
    • This paper asserts that privacy rights are not regulated explicitly by the law, but rather are implicitly and primarily regulated by the presence of latent structural constraints that impose transaction costs upon the violation of privacy. Substantial components of privacy become vulnerable, the author states, as technology acts to reduce the magnitude of these structural constraints; the author suggests a conceptual framework for identifying and responding to specific contexts of such vulnerability.
  • Susser, D., et al. (2019). Technology, autonomy, and manipulation. Internet Policy Review, 8(2). https://www.doi.org/10.14763/2019.2.1410
    • This article explores the “online manipulation” that is alleged to occur when powerful technology companies use algorithms to shape online experiences. The authors argue that such practices may be harmful both consequentially (in their impacts on the ethical and economic interests of users and society at large) and deontologically (indirectly threatening individual autonomy), as they aim to evoke specific behaviors in the user. The authors situate their discussion within an examination of the Cambridge Analytica and Facebook scandal and the broader issue of election manipulation.
  • Umbrello, S., & De Bellis, A. F. (2018). A value-sensitive design approach to intelligent agents. In R. Yampolskiy (Ed.), Artificial Intelligence Safety and Security (pp. 395–410). CRC Press.
    • This chapter discusses the methodology of “value-sensitive design” and its implications for the design and implementation of artificially intelligent systems. In seeking to identify opportunities and limits in adapting value-sensitive design to the specific challenge of working with AI, the authors argue that value sensitivity must be proactively embedded throughout the entire AI development process.
  • Wang, D., et al. (2021). How much automation does a data scientist want? arXiv preprint arXiv:2101.03970.
    • This paper documents an IBM research team’s findings. The team proposed a human-in-the-loop AutoML framework with four dimensions (roles, stages, levels of automation, and types of explanation) and used the framework to design a large-scale online survey gathering usage perspectives from data science and machine learning practitioners. The authors discovered a notable gap between the level of automation in people’s current work practice and the level they would prefer in the future. They note, however, that research and development efforts should be directed to the specific needs of various user personae, as the appropriate level of automation and type of explanation vary depending on, for example, the user, the lifecycle stage the user works in, and the task at hand. The authors therefore discourage a focus on fully automated data science and machine learning, preferring a human-in-the-loop explainable system.
  • Winner, L. (1980).* Do artifacts have politics? Daedalus, 109(1), 121–136. https://www.jstor.org/stable/20024652
    • This article argues that as power relations are embodied within technologies, artifacts themselves are imbued with politics. In support of this thesis, the author discusses instances in which a specific technical device becomes a way of settling an issue in a particular community, and thereby acts to shape the power relations within that community. Secondly, the author contends that some technologies are inherently political in that they either require or are strongly compatible with certain kinds of political relationships.
  • Zerilli, J., et al. (2019). Algorithmic decision-making and the control problem. Minds and Machines, 29(4), 555–578. https://doi.org/10.1007/s11023-019-09513-7
    • This paper discusses the “control problem,” wherein it is difficult for human actors to maintain meaningful oversight and control of largely automated systems. The authors build on a body of industrial-organizational psychology work and extend the topic to modern algorithmic actors, offering both a theoretical framework for understanding the problem and a series of design principles for overcoming it in human-machine systems.
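
To make the Radziwill and Benton entry above concrete, here is a minimal sketch of the AHP weighting step they propose for chatbot quality attributes: pairwise importance judgments on Saaty’s 1-9 scale go into a comparison matrix, and priority weights are approximated by the normalized geometric means of its rows. The attributes and judgments below are invented for illustration.

```python
# Analytic Hierarchy Process (AHP): geometric-mean approximation of the
# priority vector. comparisons[i][j] says how much more important attribute i
# is than attribute j on Saaty's 1-9 scale; the diagonal is 1 and the matrix
# is reciprocal (comparisons[j][i] == 1 / comparisons[i][j]).
import math

attributes = ["robustness", "empathy", "speed"]   # hypothetical quality attributes
comparisons = [
    [1.0,   3.0,   5.0],   # robustness vs. (robustness, empathy, speed)
    [1 / 3, 1.0,   2.0],   # empathy
    [1 / 5, 1 / 2, 1.0],   # speed
]

geo_means = [math.prod(row) ** (1 / len(row)) for row in comparisons]
weights = [g / sum(geo_means) for g in geo_means]

for name, w in zip(attributes, weights):
    print(f"{name}: {w:.3f}")   # e.g. robustness ~0.65, empathy ~0.23, speed ~0.12
```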

Chapter 13. Race and Gender (Timnit Gebru)⬆︎

  • Amrute, S. (2019). Of techno-ethics and techno-affects. Feminist Review, 123(1), 56–73. https://doi.org/10.1177/0141778919879744  
    • This article considers the current state of digital labor conditions and identity formation, including uneven geographies of race, gender, class, ability, and histories of colonialism and inequality. The author highlights specific cases in which digital labor frames embodied subjects and proposes new ways in which digital laborers might train themselves to be empowered to identify emergent ethical concerns, using the concept of attunement as a framework for care. Predictive policing, data mining, and algorithmic racism are discussed, as is the urgency to include digital laborers in the design and analysis of algorithmic technologies and platforms. 
  • Angwin, J., et al. (2016).* Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
    • This investigative report documents and analyzes racial bias against black defendants in algorithmic criminal risk score systems, such as COMPAS, used by courts and parole boards in the United States to forecast future criminal behavior. The authors describe how the algorithmic formula, and others like it, were written in ways that promote racial disparity, resulting in black defendants being inaccurately identified as future criminals more frequently than white defendants. The report strongly suggests that bias is inherent in all actuarial risk assessment instruments (ARAIs), and that widespread audits and reassessments are necessary.
  • Atanasoski, N., & Vora, K. (2019). ​Surrogate humanity: Race, robots, and the politics of technological futures​. Duke University Press. https://www.dukeupress.edu/Assets/PubMaterials/978-1-4780-0386-1_601.pdf
    • This book traces the ways in which robots, artificial intelligence, and other technologies serve as surrogates for human workers within a labor system defined by racial capitalism and patriarchy. The authors analyze technologies including sex robots, military drones, and sharing-economy platforms to illustrate how liberal structures of antiblackness, settler colonialism, and patriarchy are fundamental to human and machine interactions. Through a critical feminist STS analysis of contemporary digital labor platforms, the authors address the global racial and gendered erasures underlying techno-utopian fantasies of a post-labor society and consider the definitions of what it means to be a human.
  • Benjamin, R. (2019).* Race after technology: Abolitionist tools for the New Jim Code. John Wiley & Sons. https://www.ruhabenjamin.com/race-after-technology
    • Using critical race theory, this book analyzes how current technologies can and have reinforced White supremacy and increased social inequalities. The concept of the New Jim Code is introduced to describe how a wide range of discriminatory designs can (1) encode inequity by amplifying racial hierarchies, (2) ignore and replicate social divisions, and (3) inadvertently reinforce racial biases while intending to ‘fix’ them. The book concludes with an overview of conceptual strategies, including tech activism and abolitionist tools, that might be used to disrupt and rectify current and future technological design.
  • Bolukbasi, T., et al. (2016).* Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems (pp. 4349-4357). https://proceedings.neurips.cc/paper/2016/hash/a486cd07e4ac3d270571622f4f316ec5-Abstract.html
    • This article examines gender bias within word embeddings, a popular framework used in many machine learning and natural language processing tasks that represents words as vectors. The authors found that gender bias and stereotyping, in line with broader societal bias, is common in many word embedding models, even those trained on large datasets such as Google News articles. The article provides an algorithmic methodology, sketched below, for modifying embeddings to remove gender stereotypes while maintaining desired associations.
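    A minimal sketch of the hard-debiasing idea the paper describes, assuming a single definitional pair to estimate the gender direction (the paper aggregates several pairs via PCA and adds an equalization step); the toy vectors and function names below are illustrative only.

```python
import numpy as np

# Toy 4-dimensional embeddings; real models use 300+ dimensions.
emb = {
    "he":         np.array([ 0.8, 0.1,  0.3, 0.2]),
    "she":        np.array([-0.8, 0.1,  0.3, 0.2]),
    "programmer": np.array([ 0.4, 0.5, -0.2, 0.6]),
    "homemaker":  np.array([-0.5, 0.4, -0.1, 0.6]),
}

# Estimate a gender direction from one definitional pair.
# (Bolukbasi et al. aggregate several such pairs with PCA.)
g = emb["he"] - emb["she"]
g = g / np.linalg.norm(g)

def neutralize(v, direction):
    """Remove the component of v that lies along the bias direction."""
    return v - np.dot(v, direction) * direction

for word in ("programmer", "homemaker"):
    before = np.dot(emb[word], g)
    after = np.dot(neutralize(emb[word], g), g)
    print(f"{word}: projection on gender axis {before:+.3f} -> {after:+.3f}")
```

    After neutralization, gender-neutral words have zero projection on the estimated gender axis, while definitional words such as “he” and “she” are left intact.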
  • Broussard, M. (2018).* Artificial unintelligence: How computers misunderstand the world. MIT Press. https://doi.org/10.7551/mitpress/11022.001.0001
    • This book describes society’s relationship with technology in the contemporary moment, taking a critical stance on how much computers are relied upon for daily tasks. This reliance, the author states, has prompted an overproduction of poorly designed and harmful systems. Through a series of interactions with current technologies, such as driverless cars and machine learning models, the author defines limits for which technology should and should not be applied, arguing against the prevalent framework of technochauvism, which upholds that technology is the solution to any and all problems. 
  • Buolamwini, J., & Gebru, T. (2018).* Gender shades: Intersectional accuracy disparities in commercial gender classification. In First Conference on Fairness, Accountability and Transparency (pp. 77-91). http://proceedings.mlr.press/v81/buolamwini18a.html
    • This conference paper investigates race and gender discrimination in machine learning algorithms, presenting an approach to evaluating bias in automated facial analysis algorithms and datasets with respect to phenotypic subgroups (see the sketch below). The authors conclude that darker-skinned females were the most misclassified group within their datasets, indicating substantial disparities in classification accuracy across skin types. As the authors stress, such biases require immediate attention in order to ensure that fair, transparent, and accountable facial analysis algorithms are built into commercial technologies.
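    The audit’s core technique, disaggregated evaluation, can be sketched in a few lines: compute accuracy separately for each intersectional subgroup rather than in aggregate. The records below are fabricated for illustration and do not reflect the paper’s data.

```python
from collections import defaultdict

# Toy records: (skin_type, gender, true_label, predicted_label).
records = [
    ("darker",  "female", "F", "M"),
    ("darker",  "female", "F", "F"),
    ("darker",  "male",   "M", "M"),
    ("lighter", "female", "F", "F"),
    ("lighter", "male",   "M", "M"),
    ("lighter", "male",   "M", "M"),
]

totals, correct = defaultdict(int), defaultdict(int)
for skin, gender, truth, pred in records:
    group = (skin, gender)
    totals[group] += 1
    correct[group] += int(truth == pred)

# A single aggregate accuracy can mask large subgroup disparities.
overall = sum(correct.values()) / sum(totals.values())
print(f"overall accuracy: {overall:.2f}")
for group in sorted(totals):
    print(f"{group}: {correct[group] / totals[group]:.2f}")
```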
  • Chun, W. H. K. (2009). Introduction: Race and/as technology; Or, how to do things to race. Camera Obscura, 70(24). https://doi.org/10.1215/02705346-2008-013
    • This article discusses the interconnections between race and technology, examining the various ways in which race can be defined and operationalized through societal and cultural understandings. Framing her discussion in past and current critical theory, the author describes race as a technique carefully constructed through a historical understanding of tools, mediation, and framings that build identity and history. The author concludes that in order to disrupt the concept of race, the related binaries of nature/culture, privacy/publicity, self/collective, and media/society need to be reframed as well.
  • de la Peña, C. (2010). The history of technology, the resistance of archives, and the whiteness of race. Technology and Culture, 51(4), 919–937. https://muse.jhu.edu/article/403272/pdf
    • Using the technological development of the X-ray and artificial sweeteners as case studies, the author outlines the problem of the ‘whiteness’ of official archives, noting that significant contributions by members of marginalized races and genders have been left out of the record, and that documents and data supporting these stories remain elusive to those managing the archives. The article concludes by arguing for a shift in perception: rather than race being occluded from the archive, the archive itself has been constructed around the concept of whiteness.
  • D’Ignazio, C., & Klein, L. F. (2020). Data Feminism. MIT Press.
    • This book presents principles for a feminist approach to data science. First, the authors propose an intersectional lens focused on the matrix of domination in order to analyze data science as a form of power relations. The authors argue that multiple forms of knowledge are needed for the field of data science to engage critically with the gender binary and other forms of classification. Furthermore, the book highlights that data is not neutral or objective, and that a feminist interpretation of it therefore requires a plurality of worldviews and a contextual analysis. The book concludes by pointing to the often invisible labour needed to create, transform, and maintain data, and by calling for these workers to receive more dignified treatment.
  • Dubal, V. B. (2020). The Time Politics of Home-Based Digital Piecework. C4eJournal: Perspectives on Ethics, The Future of Work in the Age of Automation and AI Symposium. [2020 C4eJ 50] [20 eAIj 10]. 
    • In this paper, Dubal focuses on the outsourced labour of annotating data for machine learning algorithms, performed by many workers from their homes through online platforms. The author draws a parallel between this practice and earlier forms of piecework, in which factories would outsource some of their labour to women and children working from their homes. In her analysis, the author argues that, in this continuing form of piecework, time becomes an “invisible node of power” that shapes how algorithms are constructed and maintained.
  • Eubanks, V. (2018).* Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press. https://virginia-eubanks.com/books/
    • Considering the historic context of austerity, this book documents the use of digital technologies in distributional decision-making for social service delivery to poor and disadvantaged populations in the United States. Using ethnographic and interview methods, the author investigates the impact of automated systems used in programs such as Medicaid and Temporary Assistance for Needy Families, as well as electronic benefit transfer cards, finding that such systems, while expensive, are often less effective and regularly reproduce and aggravate bias, inequity, and state surveillance of the poor. The author speaks to legacy system prejudice and the ‘social specs’ that underlie our decision systems and data-sifting algorithms, and offers a number of participatory design solutions, including empathy through co-design, transparency, access, and control of information.
  • Gangadharan, S. P. (Ed.). (2014). Data and discrimination: Collected essays. Open Technology Institute, New America Foundation. https://www.newamerica.org/oti/data-and-discrimination/ 
    • This book brings together work from eighteen researchers from various backgrounds looking at discriminatory impacts of big data and algorithms. Three themes are discussed: 1. Discovering and responding to harms, 2. Participation, presence, and politics, and 3. Fairness, equity, and impact. Many of the authors in this collection remark that there is a gap in public awareness of the extent to which algorithms influence their daily lives. 
  • Gebru, T., et al. (2018). Datasheets for datasets. arXiv:1803.09010
    • This paper proposes datasheets to document the creation, use, and transformation of datasets for machine learning. One of the AI industry’s problems is that the origins of datasets often go undocumented, making it difficult to assess the ethics of how they were collected. The authors hope that by documenting datasets with datasheets (illustrated below), AI practitioners will bring more transparency to the process of algorithmic development.
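    The datasheet itself is a structured questionnaire covering a dataset’s motivation, composition, collection process, uses, and maintenance. A minimal sketch of how a team might record a few such answers in code follows; the field names paraphrase the paper’s question categories, and the schema itself is an assumption, since the paper poses prose questions rather than a fixed format.

```python
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    # Field names paraphrase question categories in Gebru et al. (2018);
    # the paper specifies prose questions, not this schema.
    motivation: str
    composition: str
    collection_process: str
    recommended_uses: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)

# Hypothetical example entries, for illustration only.
sheet = Datasheet(
    motivation="Benchmark face images gathered for a hypothetical demo.",
    composition="10,000 images; demographic coverage documented per group.",
    collection_process="Scraped from public pages under documented consent terms.",
    recommended_uses=["classifier evaluation"],
    known_limitations=["underrepresents some skin types"],
)
print(sheet.motivation)
```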
  • Hamidi, F., et al. (2018).* Gender recognition or gender reductionism?: The social implications of embedded gender recognition systems. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-13). https://doi.org/10.1145/3173574.3173582
    • This article investigates the social implications of automatic gender recognition (AGR) computational methods within the transgender community. The authors interview thirteen transgender individuals, including three technology designers, to document current perceptions of and attitudes towards AGR. The article concludes that transgender individuals hold strong negative attitudes towards AGR and question whether it can accurately identify their gender. Privacy and potential harms are discussed with respect to the impacts of being misidentified, and the authors include design recommendations to accommodate gender diversity.
  • Hamilton, A. M. (2020). A genealogy of critical race and digital studies: Past, present, and future. Sociology of Race and Ethnicity, 6(3), 292–301. https://doi.org/10.1177/2332649220922577
    • In this literature review, Hamilton retraces recent developments in critical race theory and digital studies. She argues that internet companies and their products have taken a colour-blind approach to racism, sexism, and other forms of discrimination. Furthermore, Hamilton argues that early publications in digital studies focused on the digital divide, framing inequality chiefly in terms of access to technology. The review focuses on how a critical race approach to digital studies allows for the analysis of existing inequalities in technology that previous colour-blind approaches have rendered invisible.
  • Hicks, M. (2017).* Programmed inequality: How Britain discarded women technologists and lost its edge in computing. MIT Press. http://programmedinequality.com/
    • This book describes the history of feminized and gendered labor practices within Britain’s computer industry. Drawing from government files, personal interviews, and the archives of central British computing companies, the author describes how the neglect of the female labor force contributed to the industry’s decline over its short run from 1944 to 1974. The book concludes by describing how gendered discrimination persists in the computing industry, leading many women to abandon the field, and compares the historic economic conditions in Britain to the current state of the industry in the United States.
  • Jasanoff, S. (Ed.). (2006). States of knowledge: The co-production of science and social order. Routledge. https://sheilajasanoff.org/research/co-production/
    • This collection of essays by leading scholars in the field of science and technology studies (STS) examines the relationships between political power and scientific knowledge. Central themes include ‘co-production,’ describing how scientific knowledge is linked to understandings of social identity, institutions, discourse, and representation; and critiques of the ‘view from nowhere’ largely associated with traditional ontology and philosophies of science.
  • Lewis, J. E., et al. (2018). Making kin with the machines. Journal of Design and Science. https://doi.org/10.21428/bfafd97b
    • This article considers artificial intelligence through diverse Indigenous epistemologies, reflecting on traditional ways of knowing and speaking that acknowledge kinship networks connecting humans and nonhuman entities. As the authors state, Indigenous communities have retained language and protocols that enable dialogue with non-human kin (such as AI), encouraging intelligible discourse across different materialities. Indigenous development environments (IDEs) are presented as a framework instituting Indigenous cultural values as fundamental aspects of all programming choices, in order to instill greater public accountability into the design of AI systems.
  • Noble, S. U. (2018).* Algorithms of oppression: How search engines reinforce racism. NYU Press. https://nyupress.org/9781479837243/algorithms-of-oppression/
    • This book discusses how search engines, such as Google, are embedded with racial and sexist bias, challenging the notion that they are neutral algorithms acting outside of influence from their human engineers, and emphasizing the greater social impacts created through their design. Through an analysis of text and media searches, and research on paid advertising, the author argues that the monopoly status of a small group of companies alongside vested private interests in promoting some sites over others, has led to biased search algorithms that privilege whiteness and exhibit bias against people of color, particularly women.
  • O’Neil, C. (2016).* Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group. https://doi.org/10.5860/crl.78.3.403
    • This book describes how algorithms, as mathematical models, are responsible for a large number of our daily decisions — from car loans to health insurance to students’ grades. However, these decision processes remain largely opaque and unregulated. In addition, the author argues, reigning societal faith in the fairness of mathematical systems makes resistance very challenging when errors and discriminatory decision-making occur. The author concludes with a call for greater responsibility with respect to regulation and algorithmic transparency.
  • Paullada, A., et al. (2020). Data and its (dis)contents: A survey of dataset development and use in machine learning research. NeurIPS 2020 Workshop: ML Retrospectives, Surveys & Meta-Analyses (ML-RSA), Virtual. https://ml-retrospectives.github.io/neurips2020/camera_ready/19.pdf
    • This workshop paper focuses on the origins of datasets for machine learning, from which algorithms learn but whose provenance is often unknown. The authors highlight four concerns: (1) that social minorities and peoples from developing countries are underrepresented in the data, (2) that ML models use “shortcuts” to solve problems without striving for “reasoning capabilities,” (3) that some problems are unnecessarily prioritized over others, and (4) that datasets are collected in unethical and dubious ways.
  • Raji, I. D., & Buolamwini, J. (2019). Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 429–435. https://doi.org/10.1145/3306618.3314244
    • This article studies the impact of the seminal Gender Shades work, an algorithmic audit of race and skin-type bias in commercial facial recognition applications. In this paper, Raji and Buolamwini evaluate the commercial applications of IBM, Microsoft, Megvii, Amazon, and Kairos. Overall, they found that these companies acted in response to the Gender Shades audit, releasing new APIs and improving their metrics to varying degrees. This evaluation suggests that critical studies of algorithms can eventually produce substantial changes in company policy.
  • Roberts, S. T. (2016). Commercial content moderation: Digital laborers’ dirty work. In S. U. Noble & B. M. Tynes (Eds.), The Intersectional Internet: Race, Sex, Class, and Culture Online. Peter Lang. https://doi.org/10.3726/978-1-4539-1717-6
    • This book chapter focuses on the racialized nature of commercial content moderation. The author argues that technology companies direct workers to be more lenient towards profitable content, that is, content that engages users more, even when that content may be experienced as racist by the workers. The latter part of the chapter centres on the ways content moderation workers sometimes circumvent these policies, demonstrating how their work is critical in ensuring that content online is safe for users.
  • Schiller, A., & McMahon, J. (2019). Alexa, alert me when the revolution comes: Gender, affect, and labor in the age of home-based artificial intelligence. New Political Science, 41(2), 173–191. https://doi.org/10.1080/07393148.2019.1595288
    • This article uses Marxist feminism and theories of labor to interrogate gender, race, and affect within domestic artificial intelligence systems, such as Amazon’s Alexa or Google Home Assistant. The authors describe how such devices make reproductive labor in households more visible, while simultaneously obscuring the gendered and racialized dimensions of their designs in order to streamline their effects for capital and heighten the affective dynamics they draw from.
  • Stitzlein, S. M. (2004).* Replacing the ‘view from nowhere’: A pragmatist-feminist science classroom. Electronic Journal of Science Education.
    • This article takes a critical stance on current pedagogical models of science that adhere to traditional, objective, and empirical ‘nature-based’ philosophical models. The author considers such frameworks problematically masculine, disembodied, and aperspectival. The author adopts a sociological methodology, analyzing teachers’ philosophies of science by studying classroom practices. An alternative pedagogical model based on pragmatist-feminism and the intersectionality of a ‘lived world’ is proposed in response to the outdated, traditional ‘view from nowhere.’
  • Van Doorn, N. (2017). Platform labor: On the gendered and racialized exploitation of low-income service work in the ‘on-demand’ economy. Information, Communication & Society, 20(6), 898–914. https://doi.org/10.1080/1369118X.2017.1294194
    • This paper is centred on the divisions based on race, gender, and class present in the digital on-demand economy. The author argues that platforms are a central player in the current economy because of their ability to profit from neoliberal ideologies, algorithmic control, and the vulnerability of populations who have been oppressed because of their gender and racial identities, and social class. The author ends this paper by highlighting the potential of the platform cooperativism movement to empower vulnerable workers in this economy. 
  • West, S. M., et al. (2019).* Discriminating systems: Gender, race and power in AI. AI Now Institute. https://ainowinstitute.org/discriminatingsystems.html  
    • The first report in the AI Now Institute’s multi-year project examining race, gender, and power in AI presents a review of existing literature and current research on gender, race, and class. The report focuses on the scale of AI’s current diversity crisis and on possible strategies to mitigate its effects. The diversity problem within the AI industry and issues of bias in AI systems tend to be treated as separate issues; however, as this report points out, discrimination in the workforce and discrimination in system building are intrinsically linked, and both will need to be addressed in order to design an effective solution.

Chapter 14. The Future of Work in the Age of AI: Displacement or Risk-Shifting? (Pegah Moradi and Karen Levy)⬆︎

  • Acemoglu, D., & Restrepo, P. (2018). The race between man and machine: Implications of technology for growth, factor shares, and employment. American Economic Review, 108(6), 1488–1542.
    • This paper examines concerns that new technologies, such as artificial intelligence (AI), will render labour redundant. The authors propose a framework where, when certain tasks become automated, new, more complicated tasks—in relation to which human labour has a comparative advantage—are introduced. The authors argue that if this comparative advantage is significant and the creation of new tasks continues, employment can remain stable even in the face of rapid automation.
  • Anteby, M., & Chan, C. K. (2018). A self-fulfilling cycle of coercive surveillance: Workers’ invisibility practices and managerial justification. Organization Science, 29(2), 247–263.
    • This paper outlines an endogenous explanation for the growth of surveillance in the workplace. The authors argue that increasing surveillance in the workplace leads to attempts by employees to go unseen and remain unseen. Management, in turn, interprets these attempts as justification for more surveillance, thus creating a self-fulfilling cycle.
  • Autor, D. H., et al. (2003).* The skill content of recent technological change: An empirical exploration. The Quarterly Journal of Economics, 118(4), 1279-1333. https://doi.org/10.1162/003355303322552801
    • This article argues that computers can substitute for workers in performing cognitive and manual tasks that can be accomplished by following explicit rules, and can complement workers in performing nonroutine problem-solving and complex communication tasks. It demonstrates that the falling price of computer capital in recent decades has been the causal force behind the rising demand for workers who can perform nonroutine tasks (i.e., the college-educated).
  • Ball, K. (2010). Workplace surveillance: An overview. Labor History, 51(1), 87-106. https://doi.org/10.1080/00236561003654776
    • This article reviews research findings about surveillance in the workplace and the issues surrounding it. It establishes that organizations and surveillance go hand in hand, and that workplace surveillance can take social and technological forms. Further, it identifies that workplace surveillance has consequences for employees, affecting well-being, work culture, productivity, creativity, and motivation. It also highlights, however, that employees are using information technologies to expose unsavory practices by employers and to organize collectively.
  • Braverman, H. (1998).* Labor and monopoly capital: The degradation of work in the twentieth century. NYU Press.
    • This book is an analysis of the science of managerial control, the relationship of technological innovation to social class, and the eradication of skill from work under capitalism. The book started what came to be known as “the labor process debate,” which focuses closely on the nature of “skill” and the decline in the use of skilled labor as a result of managers’ strategies for control.
  • Brynjolfsson, E., et al. (2018).* What can machines learn, and what does it mean for occupations and the economy? AEA Papers and Proceedings, 108, 43-47.
    • This paper aims to answer the question of which occupational tasks will be most affected by machine learning (ML). Using a rubric evaluating tasks’ suitability for ML and applying it to over 18,000 tasks, the paper finds that ML affects different occupations than previous waves of automation did, that most occupations have at least some tasks suitable for ML, that few occupations are fully automatable using ML, and that realizing the potential of ML usually requires redesigning job task content.
  • Chui, M., et al. (2015, November). Four Fundamentals of Workplace Automation. McKinsey Digital. https://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/four-fundamentals-of-workplace-automation
    • This report argues that automation will lead to the redefinition of jobs rather than their replacement, and that this redefinition has occurred repeatedly during previous periods of rapid technological change. Complicating the conventional paradigm that low-skill, low-wage activities are most susceptible to automation, this report suggests that a significant percentage of the activities performed even by those in the highest-paid occupations (for example, financial planners, physicians, and senior executives) can be automated with current technology.
  • Autor, D. H. (2015).* Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3-30.
    • This article argues that while automation can substitute human labor, it also complements it, increasing productivity and labor demand overall. Changes in technology may alter which jobs are available, and what those jobs pay. The author concludes that automation should be thought of as replacing workers in performing routine, codifiable tasks while amplifying the advantage of workers in supplying problem-solving skills, adaptability, and creativity.
  • Dickens, W. T., et al. (1989). Employee crime and the monitoring puzzle. Journal of Labor Economics, 7(3), 331-347. https://doi.org/10.1086/298211
    • This paper investigates why firms spend considerable resources monitoring for employee malfeasance, despite most economic theories of crime predicting that profit-maximizing firms should follow strategies of minimal monitoring paired with large penalties for employee crime. It finds that the most plausible explanations for firms’ spending on and focus on monitoring employees are legal restrictions on penalties in contracts, and the adverse impact of harsh punishment schemes on worker morale.
  • Doleac, J. L., & Hansen, B. (2016). Does “ban the box” help or hurt low-skilled workers? Statistical discrimination and employment outcomes when criminal histories are hidden (No. w22469). National Bureau of Economic Research.
    • New ‘ban the box’ (BTB) policies prevent employers from conducting criminal background checks until late in the job application process, with the aim of improving employment outcomes for those with criminal records and reducing racial disparities in employment. This paper tests BTB’s effects and finds that BTB policies actually decrease the probability of employment by 5.1% for young, low-skilled black men and by 2.9% for young, low-skilled Hispanic men. The paper argues that when an applicant’s criminal history is unavailable, employers still discriminate against demographic groups that they believe are likely to have a criminal record.
  • Frank, M. R., et al. (2019). Toward understanding the impact of artificial intelligence on labor. Proceedings of the National Academy of Sciences, 116(14), 6531–6539.
    • This paper highlights the existence of barriers which currently inhibit scientists from measuring the effect of artificial intelligence (AI) and automation on the future of work. These barriers include a lack of access to high-quality data and empirically informed models about the nature of work, and an insufficient understanding of how cognitive technologies interact with broader economic dynamics and institutional mechanisms. The paper concludes by arguing for the development of a decision framework for the future of work that is focused on resilience to unexpected scenarios.
  • Fantini, P., et al. (2020). Placing the operator at the centre of Industry 4.0 design: Modelling and assessing human activities within cyber-physical systems. Computers & Industrial Engineering, 139, 105058. https://doi.org/10.1016/j.cie.2018.01.025
    • This paper argues that a challenge of the so-called “Industry 4.0” will be guiding work towards increased responsibility and decision-making for employees as opposed to increased technological control. The authors then propose a methodology to address this challenge by considering both the uniqueness of human labour and the characteristics of “cyber-physical production.” 
  • Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254-280. https://doi.org/10.1016/j.techfore.2016.08.019
    • In this paper, the authors calculate probabilities of computerisation for 702 occupations, using Department of Labor data about the task content of those jobs and having artificial intelligence experts code tasks for automation potential (a toy version of this pipeline is sketched below). The study estimates that 47% of US jobs are at high risk of automation within approximately twenty years. The article shows that wages and educational attainment exhibit a strong negative relationship with an occupation’s automation potential.
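    A toy rendition of the pipeline just described, under loose assumptions: a probabilistic classifier is fit to a few expert-labeled occupations and then applied to an unlabeled one. Frey and Osborne used O*NET task-content features and a Gaussian process classifier over a hand-labeled subset of occupations; the features, labels, and logistic-regression stand-in below are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fabricated task-content features: [routineness, social intelligence].
features = np.array([
    [0.9, 0.1],   # telemarketer-like: routine, little social intelligence
    [0.8, 0.2],
    [0.2, 0.9],   # therapist-like: nonroutine, high social intelligence
    [0.1, 0.8],
])
expert_labels = np.array([1, 1, 0, 0])  # 1 = automatable, per expert coding

# Frey & Osborne used a Gaussian process classifier; logistic
# regression is substituted here only to keep the sketch small.
model = LogisticRegression().fit(features, expert_labels)

# Probability of computerisation for an unlabeled occupation.
new_occupation = np.array([[0.6, 0.4]])
print(model.predict_proba(new_occupation)[0, 1])
```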
  • Granulo, A., et al. (2019). Psychological reactions to human versus robotic job replacement. Nature Human Behaviour, 3(10), 1062–1069.
    • This paper explores people’s psychological reactions to the technological replacement of human labour. The authors find that while people prefer that human workers be replaced by other human workers, this preference reverses when people consider the prospect of their own job loss. In light of these findings, the authors posit that policy measures should take into account the unique psychological consequences of the technological replacement of human labour.
  • Gray, M. L., & Suri, S. (2019).* Ghost work: How to stop Silicon Valley from building a new global underclass. Eamon Dolan Books.
    • This book discusses the concept of “ghost work”: work done behind the scenes by an invisible human labor force that gives the internet and big tech companies’ services the appearance of smooth and “intelligent” functioning, through tasks such as flagging inappropriate content, proofreading, and more. The book explores problematic aspects of this growing sector, including the absence of labor law protections, precarity, lack of benefits, and illegally low earnings.
  • Helm, S., et al. (2018). Navigating the ‘retail apocalypse’: A framework of consumer evaluations of the new retail landscape. Journal of Retailing and Consumer Services. https://doi.org/10.1016/j.jretconser.2018.09.015
    • This paper explores U.S. consumers’ evaluations of ongoing changes to the retail environment through content analysis of reader comments in response to articles on large-scale store closures, and online consumer interviews. The paper finds many consumers lamenting the disappearance of physical retailers, expecting negative consequences for themselves and society. However, other consumers are also accepting of a future with very few physical stores. 
  • Huang, M. H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155–172. https://doi.org/10.1177/1094670517752459
    • This paper develops a theory of job replacement by artificial intelligence (AI) that specifies four intelligences: mechanical, analytical, intuitive, and empathic. The authors contend that AI is developing in a predictable order: mechanical preceding analytical, analytical preceding intuitive, and intuitive preceding empathic. Based on this ordering, the authors argue that “softer” (i.e., more intuitive and empathic) skills will become more important as AI continues to take over more analytic tasks.
  • Kellogg, K. C., et al. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366–410.
    • This paper explores how the widespread implementation of algorithmic technologies is reshaping organizational control. The authors argue that algorithmic control in the workplace operates through six main mechanisms, grouped under direction (restricting and recommending), evaluation (recording and rating), and discipline (replacing and rewarding). Finally, the paper comments on a set of emerging resistance tactics the authors call “algoactivism,” by which workers push back against algorithmic control.
  • Kelley, M. R. (1990). New process technology, job design, and work organization: A contingency model. American Sociological Review, 55(2), 191-208. https://doi.org/10.2307/2095626
    • This paper aims to identify the conditions under which occupational skill upgrading occurs with technological change to answer the question of how workplaces that permit blue-collar occupations to take on higher skill responsibilities differ from those that do not. Data analyzed from a national survey of production managers in 21 industries reveals that the least complex organizations (small plant, small firm) tend to offer the greatest opportunities for skill upgrading, independent of techno-economic conditions. 
  • Levy, F. (2018). Computers and populism: Artificial intelligence, jobs, and politics in the near term. Oxford Review of Economic Policy, 34(3), 393-417. https://doi.org/10.1093/oxrep/gry004
    • This paper examines the near-term future of work to ask whether job losses induced by artificial intelligence will increase the appeal of populist politics. The paper explains that computers and machine learning often automate the workplace tasks of blue-collar workers. Using the example of automation-related job losses in three industries (trucking, customer service, and manufacturing), the paper examines how candidates may pit ‘the people’ (truck drivers, call center operators, factory operatives) against ‘the elite’ (software developers, etc.), replicating the populist politics of the 2016 US presidential election.
  • Levy, K., & Barocas, S. (2018).* Refractive surveillance: Monitoring customers to manage workers. International Journal of Communication, 12, 1166-1188.
    • This article discusses ‘refractive surveillance,’ in which information collected about one group can facilitate control over an entirely different group. The authors explore this dynamic in the context of retail stores, where collecting data about customers allows for new forms of managerial control over workers. The mechanisms enabling this are dynamic labor scheduling, new forms of evaluation, externalization of worker knowledge, and replacement through customer self-service.
  • Moniz, A. B., & Krings, B. J. (2016). Robots working with humans or humans working with robots? Searching for social dimensions in new human-robot interaction in industry. Societies, 6(3), 23. https://doi.org/10.3390/soc6030023
    • This article considers the social dimension of human-machine interaction (HMI), specifically in the manufacturing industry’s robotic systems. In particular, the article asserts that “intuitive” HMI should be considered a significant object of technical progress. The authors argue for increased attention towards the social—in addition to the technical—considerations of HMI, including examining the degree of trust that humans have in robots, and whether robots improve working conditions while increasing productivity.
  • Moradi, P. (2019). Race, Ethnicity, and the Future of Work [Doctoral dissertation, Cornell University]. https://files.osf.io/v1/resources/e37cu/providers/osfstorage/5ca258dcecd788001998c0ac?action=download&version=2&direct&format=pdf
    • This study analyzes how occupational automation corresponds with racial and ethnic demographics. The paper finds that throughout American industrialization, non-White and immigrant workers shifted to low-wage, unskilled work because of the political and social limitations imposed upon these groups. While White workers are more heavily affected by automatability than other racial groups, the proportion of White workers in an occupation is negatively correlated with an occupation’s automatability. The paper offers a susceptibility-based approach to predicting employment outcomes from AI-driven automation.
  • Polanyi, M. (2009).* The tacit dimension. University of Chicago Press.
    • This book argues that tacit knowledge—tradition, inherited practices, implied values, and prejudgments—is a crucial part of scientific knowledge. This book challenges the assumption that skepticism, rather than established belief, lies at the core of scientific discovery. It concludes that all knowledge is personal, with the indispensable participation of the thinking being, and that even the so-called explicit knowing (or formal, or specifiable knowledge) is always based on personal mechanisms of tacit knowing.
  • Rogers, B. (2020).* The law & political economy of workplace technological change. Harvard Civil Rights-Civil Liberties Law Review, 55. http://dx.doi.org/10.2139/ssrn.3327608
    • This paper makes the case that automation is not a major threat to most jobs today, nor will it be in the near future. However, it points out that existing labour laws allow companies to leverage new technology to control workers, such as through enhanced monitoring. It argues that policymakers must expand the scope and stringency of companies’ duties toward their workers, or rewrite policies in ways that enable workers to push back against the introduction of new workplace technologies.
  • Rosenblat, A., et al. (2017). Discriminating tastes: Uber’s customer ratings as vehicles for workplace discrimination. Policy & Internet9(3), 256-279. https://doi.org/10.1002/poi3.153
    • This paper analyzes the Uber platform as a case study to explore how bias may creep into evaluations of drivers through consumer‐sourced rating systems, and draws on social science research to demonstrate how such bias emerges in other types of rating and evaluation systems. The paper argues that while companies are legally prohibited from making employment decisions based on certain characteristics of workers (e.g. race), their reliance on potentially biased consumer ratings to make material determinations may nonetheless lead to a disparate impact in employment outcomes. 
  • Schneider, D., & Harknett, K. (2016). Schedule instability and unpredictability and worker and family health and wellbeing. Washington Center for Equitable Growth Working Paper Series. http://cdn.equitablegrowth.org/wp-content/uploads/2016/09/12135618/091216-WP-Schedule-instability-and-unpredictability.pdf
    • This paper describes an innovative approach to survey data collection from service sector workers that allows for the collection of previously unavailable data on scheduling practices, health, and wellbeing. The authors then use this data to show that exposure to unstable and unpredictable scheduling practices is negatively associated with household financial security, worker health, and parenting practices.
  • Thomas, R. J. (1994). What machines can’t do: Politics and technology in the industrial enterprise. University of California Press.
    • This book explores the social and political dynamics that are an integral part of production technology, drawing on over 300 interviews inside four successful manufacturing enterprises, from top corporate executives to engineers, workers, and union representatives. The author urges managers not to place blind hope in smarter machines but to find smarter ways to organize people, arguing against the popular idea that smart machines alone will lead to advancement.
  • Tippett, E., et al. (2017). When timekeeping software undermines compliance. Yale Journal of Law and Technology, 19(1), 1-76.
    • This article examines 13 commonly used electronic timekeeping programs to expose the ways in which such software can erode wage law compliance. Drawing on insights from the field of behavioral compliance, the authors explain how the software presents subtle cues that can encourage and legitimize wage theft by employers. The article examines gaps in legislation that have created a regulatory vacuum in which timekeeping software has developed, and proposes reforms to encourage wage law compliance across workplaces.

Chapter 15. AI as a Moral Right-Holder (John Basl and Joseph Bowen)⬆︎

  • Andreotta, A. J. (2021). The hard problem of AI rights. AI & Society, 36(1), 19–32. https://doi.org/10.1007/s00146-020-00997-x
    • This paper takes up the “hard problem” (or, alternatively, the hard question) of consciousness: why do certain brain states give rise to experience? Against the background of this question, the author considers three ways (superintelligence, empathy, and a capacity for consciousness) in which claims in favor of AI rights can be grounded. Arguing for consciousness as a central focus, the author draws a distinction between consciousness in the context of animal rights cases and AI rights cases, stating that one cannot be conclusively categorized in terms of the other. The author suggests that if humans do not come to understand how consciousness arises, they may inadvertently create creatures that are conscious and cause them to suffer without realizing it.
  • Baertschi, B. (2012). The moral status of artificial life. Environmental Values, 21(1), 5–18. http://www.jstor.org/stable/23240349
    • This paper asserts that an entity’s status as “natural” or “artificial” in the genetic sense does not have an impact on its moral status. The author states that if two living beings with moral status are similar, but have been produced differently, their moral status is identical, except if the way they have been produced changes their intrinsic properties. This paper discusses reasons for the confusion of category that interprets the distinction between “natural” and “artificial” as an ontological distinction (with moral consequences), even as it would be more appropriate, the author states, to understand it as a moral distinction (with no ontological consequences).
  • Basl, J. (2013). The ethics of creating artificial consciousness. APA Newsletter on Philosophy and Computers, 13(1), 23–29. https://philarchive.org/archive/BASTEO-11
    • This essay notes that research aiming to create artificial entities with conscious states might be unethical because it wrongs, or will likely wrong, its subjects. If the subjects of artificial consciousness research end up possessing conscious states, then they are research subjects in the way that sentient non-human animals and human beings are research subjects. As a result, such artificially conscious research subjects should be afforded certain protections.
  • Basl, J. (2014). Machines as moral patients we shouldn’t care about (yet): The interests and welfare of current machines. Philosophy & Technology, 27(1), 79–96. https://doi.org/10.1007/s13347-013-0122-y
    • Situating a discussion of moral status within Interest Theory, this paper considers the potential future moral patiency status of artificial consciousnesses. Distinguishing systems exhibiting teleological interests and goal pursuits (such as biological and environmental systems) from those exhibiting the psychological interests associated with moral patiency, this paper asserts that machines are not yet moral patients. Offering a brief survey of both epistemic and moral questions that researchers currently encounter, the author asserts that if artificial consciousnesses come to exist that have the capacity for attitudes commensurate with psychological interests, these artificial consciousnesses could have psychological interests that ground their status as moral patients.
  • Basl, J. (2014). What to do about artificial consciousness. In R. L. Sandler (Ed.), Ethics and emerging technologies (pp. 380–392). Palgrave Macmillan.
    • This chapter defends an account of moral status according to which the moral status of an entity is determined by its capacities. For example, if an intelligent machine possesses cognitive and psychological capacities akin to those of humans, such entities should be accorded comparable moral status. Nevertheless, the author argues that it is unlikely that machines will possess cognitive and psychological capacities akin to those of humans. Even if they do, the author asserts that it will be difficult for humans to discern whether such capacities and interests are present in a non-human entity.
  • Basl, J. (2019).* The death of the ethic of life. Oxford University Press.
    • The ethic of life states that all living things deserve some degree of moral concern. However, if the well-being of non-sentient beings is morally significant insofar (and perhaps only insofar) as it matters to sentient beings, then a sentience criterion for moral status fails to capture how the moral significance of artifacts differs from that of organisms.
  • Basl, J., & Sandler, R. (2013). The good of non-sentient entities: Organisms, artifacts, and synthetic biology. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 44(4), 697-705. https://doi.org/10.1016/j.shpsc.2013.05.017
    • This paper examines whether or not synthetic organisms have a good of their own and, consequently, are themselves deserving of moral consideration. Appealing to an account of teleology that explains the good of non-sentient organisms, the authors argue that synthetic organisms also have a good of their own that is grounded in their teleological organization. Such a rationale, however, introduces the consequence of traditional artifacts arguably also having a good of their own.
  • Coeckelbergh, M. (2010). Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology, 12(3), 209–221. https://doi.org/10.1007/s10676-010-9235-5
    • Asserting the need for an alternative approach to moral consideration that can shape relations between humans and intelligent robots, this paper surveys a number of conceptual avenues that could recognize and even respect a setting of systemhood as well as subjecthood. This paper’s social-relational approach rejects the idea of fixed criteria for moral status; this paper also rejects the idea that a robot or other artificial entity must carry a permanent sort of “moral backpack” to be deserving of recognition. Rather, the entity’s dynamic and evolving relations might instead be assessed in a temporal and situational context. Further, a combined approach that draws from settings of both systemhood and subjecthood could be engaged.
  • Coman, A., & Aha, D.W. (2018). AI Rebel Agents. AI Magazine, 39(3), 16–26. https://doi.org/10.1609/aimag.v39i3.2762
    • Asserting that the capacity to say “no” to a request is an essential part of being sociocognitively human, the authors argue that it is beneficial for certain AI agents to rebel for positive, defensible, and allegedly “moral” reasons. Suggesting that AI may never become socially intelligent absent such contextual noncompliance, the authors present a phased framework that situates the “rebel agent” terminologically, narratively and systematically, enabling an examination of positive and negative roles that the noncompliant agent could assume.
  • Cruft, R. (2013).* XI—Why is it disrespectful to violate rights? Proceedings of the Aristotelian Society, 113(2), 201–224. https://doi.org/10.1111/j.1467-9264.2013.00352.x
    • Directed duties are duties that are owed to a particular person or group. This paper considers the manner in which directed duties are related to respect. It also works to make sense of the fact that directed duties are often justified independently of whether or not they do anything for those to whom the duties are owed. 
  • Danaher, J. (2020). Welcoming robots into the moral circle: A defence of ethical behaviourism. Science and Engineering Ethics, 26(4), 2023–2049.  https://doi.org/10.1007/s11948-019-00119-x
    • This paper proposes a theory of ethical behaviorism, according to which robots can possess significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. The author argues that this performative threshold may not exceed the reach of robots and, if robots have not done so already, they may cross the threshold in the future. The paper proposes a principle of procreative beneficence that governs the decision to create robots that possess moral status.
  • Gilbert, M. & Martin, D. (2021). In search of the moral status of AI: Why sentience is a strong argument. AI & Society. Advance online publication. https://doi.org/10.1007/s00146-021-01179-z
    • This paper considers different arguments for granting moral status to an artificial intelligence (AI) system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. Leaving the idea of indirect duties aside, since such duties do not imply considering an AI system for its own sake, the authors reject both the relational argument and the argument from intelligence. Acknowledging that the argument from life may work in a weak sense, the authors point to sentience as a stronger argument for grounding the moral status of an AI system. This determination draws upon the Aristotelian principle of equality, which states that what is identical should be treated identically. However, this claim of sameness relies upon technological development that has not yet been realized.
  • Goodwin, G. P. (2015). Experimental approaches to moral standing. Philosophy Compass, 10, 914–926. https://doi.org/10.1111/phc3.12266
    • This paper argues that understanding the factors which underlie moral status attribution is important, as they indicate how broadly (or narrowly) individuals conceptualize the moral world and how various entities, both human and non-human, should be treated. This paper examines a series of studies conducted by both psychologists and philosophers that have revealed three main drivers of moral standing: the capacity to suffer (psychological patiency), intelligence or autonomy (agency), and the nature of an entity’s disposition (whether it is harmful). These studies have also revealed causal links between moral standing and other variables of interest, namely mental state attributions and moral behavior.
  • Gordon, J.-S. (2020). Artificial moral and legal personhood. AI & Society. Advance online publication. https://doi.org/10.1007/s00146-020-01063-2
    • This paper responds to the European Parliament’s resolution on Civil Law Rules on Robotics (2017) and its recommendation that robots be granted legal status and electronic personhood. The author argues that moral and legal personhood should not be granted to currently existing robots, given their technological limitations and their failure to meet the morally relevant criteria (rationality, autonomy, understanding, and having social relations) necessary to have moral rights bestowed upon them. The paper examines two analogies that have been proposed: the first between robots and corporations (which are treated as legal persons), and the second between robots and animals. The paper states that one should consider attributing moral personhood to robots only once robots have achieved capacities comparable to those of humans.
  • Gordon, J.-S. (2020). What do we owe to intelligent robots? AI & Society, 35, 209–223. https://doi.org/10.1007/s00146-018-0844-6
    • This paper focuses upon whether highly advanced artificially intelligent entities will deserve moral rights once they become capable of moral reasoning and decision-making. The author argues that humans are obligated to grant moral rights to such entities once they have become full ethical agents, i.e., subjects of morality. The author presents four related arguments in support of this claim, and thereafter examines four main objections to this claim. The author further states that given their ever-increasing involvement in many sensitive fields and their increasing social interaction with humans, it is important that “intelligent robots” learn how to make moral decisions and act according to these decisions.
  • Griffin, J. (1986).* Well-being: Its meaning, measurement and moral importance. Clarendon Press.
    • Enumerating an overlapping set of prudential values that combine to produce a sort of well-being that constitutes human flourishing, this book approaches well-being in terms of action towards fulfillment of informed desires. Loosening the delineations easily afforded by dichotomies of “objective” and “subjective,” the book takes a pluralistic view of notions of utility so as to resist merely psychological renderings of the basis of a metric by which to measure well-being.
  • Gunkel, D. J. (2014). A vindication of the rights of machines. Philosophy & Technology, 27(1), 113–132. https://doi.org/10.1007/s13347-013-0121-z
    • This paper asserts that questions concerning the “rights of machines” make a general and fundamental claim on ethics, requiring ethics practitioners to rethink the system of moral considerability all the way down. Addressing the insufficiency of exact and exclusive lists of minimal conditions necessary for the status of moral agency or moral patiency, this paper contrasts such lists with Floridi’s information ethics and Levinas’ ethical encounter with the face of the Other. These two alternative lenses do not themselves resolve the issue, but rather emphasize even further how the question of moral standing must be thoroughly reevaluated in the face of the intelligent machine.
  • Gunkel, D. J. (2018). Robot rights. MIT Press.
    • Engaging the still unresolved proposition of whether robots should have rights, the author draws from the philosophy of Levinas to situate human-robot moral encounters in terms of the command of the face of the Other, as that which supervenes upon human selfhood and instantiates unavoidable responsibility. Relationally presented, such an ethical system is contrasted with, e.g., deontological, prior and rule-based assumptions of an ethics of AI; it is important to acknowledge, however, that all approaches mentioned still involve anthropocentric assumptions.
  • Gunkel, D. J. (2018). The other question: Can and should robots have rights? Ethics and Information Technology, 20(2), 87–99. https://doi.org/10.1007/s10676-017-9442-4
    • This paper engages with the question of whether robots should have rights. In doing so, it examines how the terms “can” and “should” figure in discussions surrounding the is-ought problem. The paper turns its attention to the work of Emmanuel Levinas to reformulate the manner in which one asks about moral patiency in the first place. It discusses the view that moral consideration is conferred in the face of actual social relationships and interactions, rather than pre-determined ontological criteria or capability.
  • Johnson, D. G., & Verdicchio, M. (2018). Why robots should not be treated like animals. Ethics and Information Technology, 20(4), 291–301. https://doi.org/10.1007/s10676-018-9481-5
    • This paper contends that analogies between humanoid robots and animals do not provide a useful method of understanding the nature of robots; responsible discourse concerning the nature of robots should therefore be cautious in its appeal to analogies with animals. The paper discusses how such analogical framing can mislead efforts to understand the moral status of humanoid robots and notions of potential legal liability associated with them.
  • Kramer, M. H. (2001).* Getting rights right. In M. H. Kramer (Ed.), Rights, wrongs and responsibilities (pp. 28–95). Palgrave Macmillan.
    • This essay aims to clarify and develop the basic claims of the Interest Theory and the Will Theory, favoring the former. The Interest Theory holds that the essence of a right consists in the normative protection of some aspect(s) of the right-holder’s well-being. In contrast, the Will Theory claims that the essence of a right consists in the right-holder’s opportunities to make normatively significant choices relating to the behavior of others.
  • McGinn, C. (1999).* The mysterious flame. Basic Books.
    • Confronting the limits of both materialist and dualist approaches to the mind-brain problem, this book argues that a radically different approach would be required to understand the nature of and rationale for consciousness, and, more intriguingly, for self-consciousness. Asserting that the mind-brain question cannot be answered by humans because of the way human minds are constructed, the author considers how one might nevertheless gesture toward an answer, for example by reconceiving the human understanding of space (perhaps, theoretically, through genetic engineering).
  • Miller, L. F. (2015). Granting automata human rights: Challenge to a basis of full-rights privilege. Human Rights Review, 16(4), 369–391. https://doi.org/10.1007/s12142-015-0387-x
    • This paper examines whether or not human beings are morally required to extend full human rights to humanlike automata. In examining this issue, the paper reflects on the ontological difference between human beings and automata, namely, that automata have a constructor and a given purpose. The author argues that human beings need not be under any moral obligation to confer full human rights to automata.
  • Mosakas, K. (2020). On the moral status of social robots: Considering the consciousness criterion. AI & Society. Advance online publication. https://doi.org/10.1007/s00146-020-01002-1
    • This paper outlines the consciousness criterion for moral status. It considers three prominent approaches to moral consideration that have been used to justify the claim that direct moral duties are owed to social robots. The author concludes that none of these approaches surpass a standard properties-based view that presupposes the consciousness criterion. The author argues that social robots should not be regarded as proper objects of moral concern unless, and until, they become capable of having conscious experience. While this does not entail that they should be excluded from human moral reasoning and decision-making altogether, it does suggest the implausibility of the assumption that humans owe direct moral duties to entities such as social robots.
  • Neely, E. L. (2014). Machines and the moral community. Philosophy & Technology, 27(1), 97–111. https://doi.org/10.1007/s13347-013-0114-y
    • This paper argues that the sentience criterion for moral standing fails to cover all humans, and that it is likewise insufficient as a rationale for denying moral standing to non-human entities. Noting that there are several ways an entity may have interests, the paper presents an interest-based account for determining an entity’s moral status: if an entity has interests, it may thereby be harmed or benefited. The author urges moral generosity when considering the moral claims of machines, and the recognition of the moral claims of those who (or those that) are physically unlike humans.
  • Nyholm, S. (2020). Humans and robots: Ethics, agency, and anthropomorphism. Rowman & Littlefield Publishers.
    • This book examines emerging ethical issues concerning human beings, robots, and agency. In its discussion of robot rights, it is argued that it can sometimes make sense to treat robots with some degree of moral consideration; for instance, in cases where robots look and act like human or non-human animals. Nevertheless, robots are not themselves deserving of direct duties until they develop a human- or animal-like inner life.
  • Raz, J. (1986).* The morality of freedom. Clarendon Press.
    • Discussing the nature of freedom and authority, this book argues that a concern with autonomy underlies the value of freedom, and the rights and choices that freedom allows to be realized actively. Autonomy becomes actively realized only if the subject is situated so as to have an array of valid and available options from which to choose. Thus, against conventionally liberal positions, the book argues that political and societal morality is neither rights‐based, nor equality‐based, but is instead driven by the interaction between structures and social forms of authority, and the requirements of individual autonomy.
  • Scheessele, M. (2018). A framework for grounding the moral status of intelligent machines. In J. Furman, G. Marchant, H. Price, & F. Rossi (Eds.), AIES ’18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 251-256). Association for Computing Machinery. https://doi.org/10.1145/3278721.3278743
    • This paper proposes that the moral status of current and foreseeable intelligent machines might draw from the status accorded to environmental entities (such as plants and trees) that are likewise teleologically-directed. This paper’s analysis grounds its propositions upon a network or system’s possession of a functional (as opposed to actual) morality or moral agency. The author asserts a hierarchy in which the limits of obligations to intelligent machines, thus categorized, would fall short of human obligations to entities that are recognized as sentient.
  • Schwitzgebel, E., & Garza, M. (2015). A defense of the rights of artificial intelligences. Midwest Studies in Philosophy, 39(1), 98–119. https://doi.org/10.1111/misp.12032
    • This paper provides a positive argument for the rights of artificially intelligent entities. Two principles of ethical AI design are offered; namely, (1) design AIs that tend to provoke reactions from users that accurately reflect the AIs’ real moral status, and (2) avoid designing AIs whose moral status is unclear. The paper also argues that human beings would probably owe more moral consideration to human-grade artificial intelligences than is owed to human strangers.
  • Sebo, J. (2017). Agency and moral status. Journal of Moral Philosophy, 14(1), 1–22. https://doi.org/10.1163/17455243-46810046
    • Stating that recent developments in philosophy and psychology have clarified the need for more than one conception of agency, this paper presents a distinction between perceptual agency and propositional agency. The author argues that many nonhuman animals are perceptual agents and that many humans are agents of both kinds. The author goes on to assert that insofar as human and nonhuman animals exercise the same kind of agency, they have the same kind of moral status, and explores some of the moral implications of this idea. For example, what legal or political rights might humans or nonhumans have or lack, insofar as each acts perceptually?
  • Shepherd, J. (2021, forthcoming). The moral status of conscious subjects. In S. Clarke, H. Zohny, & J. Savulescu (Eds.), Rethinking moral status. Oxford University Press.
    • Offering an account of phenomenal value that focuses upon the structure of phenomenally conscious states at specific times and over time, this paper discusses the need for a theory of the grounds of moral status that could guide practical considerations regarding how to treat a wide range of potentially conscious entities, e.g., injured humans, cerebral organoids, chimeras, artificially intelligent machines, and non-human animals. The author states that this theory of moral status needs to be mapped onto practical considerations to clarify how both phenomenal value and moral status may vary across different entity types.
  • Smith, B. C. (2019). The promise of artificial intelligence: Reckoning and judgment. The MIT Press.
    • Defining a distinction between “reckoning” and “judgment,” Smith presents a fundamental difference, not of degree but of kind, between human and machine intelligences. Unpacking the notion of intelligence itself, Smith examines the history of AI from its first-wave origins to recent advances in machine learning. Warning that superlative machine achievements in calculative reckoning do not translate to ethical and responsible judgment, Smith challenges the capability of machines to be moral rights holders. Delineating human and machine roles, Smith suggests that the development of superior machine reckoning has powerful implications, ones that bear less on the machine’s moral status than on near-future human decision making.
  • Sullins, J. (2006). When is a robot a moral agent? International Review of Information Ethics, 6(12), 23–30. https://informationethics.ca/index.php/irie/article/view/136
    • This paper argues that robots can be seen as moral agents in certain circumstances. Drawing a distinction between the categories of “person” and “moral agent,” the author asserts that robots are moral agents when and if there is a reasonable level of abstraction under which the machine has autonomous intentions and responsibilities. If the robot can be seen as autonomous from many points of view, then, the author states, the machine is to be viewed as a robust moral agent. This implies that highly complex interactive robots of the future will be moral agents with corresponding rights and responsibilities. However, even the modest robots of today can be seen to be moral agents of a sort, under certain, but not all, levels of abstraction.
  • Sumner, L. W. (1996).* Welfare, happiness, and ethics. Clarendon Press.
    • This book presents an original theory of welfare which closely connects welfare with happiness or life satisfaction. It provides a defence of welfarism, which argues that welfare is the only basic ethical value. That is, welfare is the only thing for which one has a moral reason to promote for its own sake.
  • Tannenbaum, J., & Jaworska, A. (2021). The grounds of moral status. In E. N. Zalta (Ed.), Stanford encyclopedia of philosophy. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/spr2021/entries/grounds-moral-status
    • This entry in the Stanford Encyclopedia of Philosophy offers an overview and bibliography on the titular topic. This entry was substantially updated in March 2021 to reflect current scholarship.
  • Tavani, H. T. (2018). Can social robots qualify for moral consideration? Reframing the question about robot rights. Information, 9(4), 1–16. https://doi.org/10.3390/info9040073
    • This paper contends that the question of whether or not robots deserve rights needs to be reframed and refined, asking instead whether or not social robots qualify for moral consideration as moral patients. Social robots are understood as physically embodied robots that are socially intelligent and interact with humans in a similar manner to the way humans interact with one another. The paper appeals to the work of Hans Jonas in arguing for the conclusion that social robots are moral patients and, consequently, deserve moral consideration.
  • Thomson, J. J. (1990).* The realm of rights. Harvard University Press.
    • Distinguishing the idea of an individual possessing a right, duty, or claim from the idea of what ought to be done in the world, this book asserts that rights hold an independent status in the moral realm. According significant attention to the moral status of claims within this discussion, this book addresses, among other angles, the ability to forfeit claims, and when it is or is not permissible to prevent infringement upon a right or claim of another.
  • Wetlesen, J. (1999). The moral status of beings who are not persons: A casuistic argument. Environmental Values, 8(3), 287–323. https://doi.org/10.3197/096327199129341842
    • Asking who or what can have a moral status, in the sense of humans having direct moral duties to them, this paper argues for a biocentric position that ascribes inherent moral status value to all individual living organisms. This position, the author states, must be defended against an anthropocentric position. The author presents an argument for equal moral status value for moral persons and agents, and gradual moral status value for nonpersons, according to their degree of similarity with moral persons. The argument is constructed as a casuistic argument, proceeding by analogical extension from persons to nonpersons. 

Chapter 16. Could You Merge with AI? Reflections on the Singularity and Radical Brain Enhancement (Cody Turner and Susan Schneider)⬆︎

  • Biocca, F. (1996). Intelligence augmentation: The vision inside virtual reality. In B. Gorayska & J. L. Mey (Eds.), Cognitive technology: In search of a humane interface (pp. 59–75). Elsevier Science. https://doi.org/10.1016/S0166-4115(96)80023-9
    • This chapter considers the nature of reality itself as virtual reality simulations become increasingly realistic and immersive. The author goes beyond the obvious sensory augmentation that comes with virtual reality and explores how virtual environments can augment cognition by facilitating the projection of complex ideas onto a visible medium. The outsourcing and expansion of one’s imagination is presented as an amplification of one’s cognitive abilities.
  • Bostrom, N. & Roache, R. (2007).* Ethical issues in human enhancement. In T. S. Petersen, J. Ryberg & C. Wolf (Eds.), New waves in applied ethics (pp. 120-152). Palgrave Macmillan.
    • A survey of issues in human enhancement ethics. Schneider and Turner highlight the authors’ coverage of the therapy/enhancement distinction. As the authors point out, this distinction is often ambiguous, and some thinkers reject it altogether.
  • Bostrom, N. & Roache, R. (2011).* Smart policy: Cognitive enhancement and the public interest. In J. Savulescu, R. T. Meulen & G. Kahane (Eds.), Enhancing human capacities (pp. 138-152). Wiley-Blackwell.
    • This paper discusses the nature and ethics of cognitive enhancement. The authors address several related policy issues, including drug approval criteria, research funding, and regulation of access.
  • Bostrom, N. (2014).* Superintelligence: Paths, dangers, strategies. Oxford University Press.
    • This book covers the history of artificial intelligence, paths to superintelligence, and forms the latter may take, including brain-computer interfaces. Bostrom then considers the prospect of an intelligence explosion, and several challenges posed by the control problem.
  • Buchanan, A. (2011). Beyond humanity? The ethics of biomedical enhancement. Oxford University Press.
    • This book addresses a number of issues in the context of human enhancement, including the therapy/enhancement distinction, human development, character concerns, human nature, conservatism, unintended bad consequences, moral status, and distributive justice. The author offers a general outlook that is, if not pro-enhancement, then anti-anti-enhancement.
  • Chalmers, D. J. (2016).* The singularity: A philosophical analysis. In S. Schneider (Ed.), Science fiction and philosophy: From time travel to superintelligence (2nd ed., pp. 171-224). Wiley-Blackwell.
    • This paper offers a comprehensive study of the singularity. The author explains the logic behind the singularity, as well as how it may be promoted – or not. He then discusses mind-uploading and personal identity, in the context of surviving in a post-singularity world.
  • Clark, A., & Chalmers, D. J. (1998).* The extended mind. Analysis, 58(1), 7-19. http://dx.doi.org/10.1093/analys/58.1.7
    • The authors’ extended mind hypothesis suggests that external artifacts, from notebooks to computing devices, can play an active role in our mental processes, which has implications for how such devices are conceptualized as wrapped up in our very identities.
  • Fukuyama, F. (2002). Our posthuman future: Consequences of the biotechnology revolution. Picador.
    • This book contributes to the discussion of human enhancement ethics. The author argues that transhumanism is the world’s most dangerous idea, because tampering with human nature threatens to undermine the basis for human dignity and rights. This book reflects on the future of biotechnology, and how it might be regulated.
  • Gleiser, M. (2015). Welcome to your transhuman self. In J. Brockman (Ed.), What to think about machines that think: Today’s leading thinkers on the age of machine intelligence (pp. 54-55). Harper Perennial.
    • This paper reflects on the human-machine integration scenario. The author points out that this process of cyborgization is already underway, with cell phones and social media existing along the same spectrum as mechanical limbs and brain implants.
  • Hume, D. (1985). A treatise of human nature (E. C. Mossner, Ed.). Penguin Classics.
    • This book is notable for its chapter on personal identity. Hume expresses a skeptical view of personal identity, or the self, now known as bundle theory. Essentially, humans are collections of impressions, constantly in flux. There is no ‘I’ over and above these impressions which can be said to possess them. 
  • Kagan, S. (2012). Death. Yale University Press.
    • This book is a survey of philosophical issues related to death, including, for our purposes, personal identity and its different criteria, such as the soul, body, and mind. Kagan himself endorses the body criterion, but believes persistence of personality is what matters in survival. This distinction, due to Parfit, has interesting implications for some of the scenarios explored by Schneider and Turner. With mind-uploading, for example, it may be the case that one dies, but that this does not matter.
  • Karaman, F. (2021). Ethical issues in transhumanism. In Research Anthology on Emerging Technologies and Ethical Implications in Human Enhancement (pp. 122-139). IGI Global.
    • This chapter argues that transhumanism is unavoidable because technology has greater control over society than society has over technology. It advocates allocating academic resources to the preemptive discussion of the issues that society will face once transhumanist technologies arrive.
  • Kurzweil, R. (2005).* The singularity is near: When humans transcend biology. Viking.
    • Elaborates on exponential growth in science and technology, with a focus on the intersection of genetics, robotics, and nanotechnology. Kurzweil then anticipates how these developments will transform the human body, brain, and, more generally, our very way of life, up to the mind-uploading scenario.
  • Locke, J. (1997).* An essay concerning human understanding (R. Woolhouse, Ed.). Penguin Classics.
    • Notable here for its chapter on personal identity. Locke presents a number of original thought experiments designed to test our intuitions about what we really are. He ultimately defends a psychological criterion of personal identity; in particular, psychological connectedness, with an emphasis on memory.
  • More, M., & Vita-More, N. (Eds.). (2013). The transhumanist reader: Classical and contemporary essays on the science, technology, and philosophy of the human future. Wiley-Blackwell.
    • This book covers a broad set of topics pertaining to transhumanism including the intelligent filtering of information, enhanced reality, and mind uploading. Additionally, it examines the technologies required to achieve these goals. The book ends with a discussion about whether transhuman enhancement should be a right, and the dangers it brings to the human species. 
  • Musk, E. (2019). An integrated brain-machine interface platform with thousands of channels. Journal of Medical Internet Research, 21(10), e16194. https://doi.org/10.2196/16194
    • This paper accompanies the creation of high-bandwidth brain-machine interfaces (BMIs) by the company Neuralink. These devices serve as research platforms in rodents, with the ultimate goal of being fully implantable in humans. The highest-priority applications are restoring motor function to those with spinal cord injuries and other immediate therapeutic uses, but augmenting human mental abilities with machine intelligence is also a possible avenue.
  • Nagel, T. (1979). Mortal questions. Cambridge University Press.
    • This book contains the classic essay “What is it like to be a bat?”, in which Nagel characterizes consciousness in terms of the subjective “what it is likeness” of experience, anticipating what is now called the hard problem of consciousness.
  • Nietzsche, F. (2013). On the genealogy of morals: A polemic (M. A. Scarpitti, Trans.). Penguin Classics. 
    • Nietzsche provides another take on the view that the self is an illusion, or grammatical fiction. There are actions, but no agents.
  • Raisamo, R., et al. (2019). Human augmentation: Past, present and future. International Journal of Human-Computer Studies, 131, 131-143.
    • This paper reflects on how humans have augmented their abilities historically, from basic technologies such as eyeglasses to substances such as caffeine. It considers how the definition of human is slowly changing over time with respect to the first populations of the species. The authors specify three augmentable aspects of human experience: senses, action, and cognition. The paper discusses concerns about augmentation such as privacy, safety, and accessibility, as those without access to advanced augmentation will be at a significant disadvantage.
  • Schneider, S. (2008). Future minds: Transhumanism, cognitive enhancement and the nature of persons. Neuroethics Publications. https://repository.upenn.edu/neuroethics_pubs/37/
    • This paper examines the philosophical implications of transhuman enhancements. Namely, if the person/entity at the end of the process is significantly different from the original person, do they qualify for the same rights and treatment? The discussion regarding how to treat intelligent agents in general is of great ethical concern as humanity gets closer to creating strong artificial intelligence capable of feeling emotions, and the same holds true for future cyborgs.
  • Sorgner, S. L. (2009). Nietzsche, the overhuman, and transhumanism. Journal of Evolution and Technology, 20(1), 29-42.
    • This paper claims that there are more similarities than initially recognized between the posthuman created by transhumanism and the overhuman concept introduced by Nietzsche. The author discusses how the overhuman is the result of the human aspiration for self-improvement, and the desire to overcome one’s limitations. In many ways, the posthuman that results from transhumanism satisfies these roles.
  • Shi, Z., et al. (2016). Brain-machine collaboration for cyborg intelligence. In Z. Shi, S. Vadera, & G. Li (Eds.), International Conference on Intelligent Information Processing (pp. 256-266). Springer.
    • This paper considers collaboration between human and machine intelligence in a cyborg system based on two paradigms: environment awareness and motivation. It focuses on the latter, claiming that motivation is the cause of action and is important for collaboration. The authors offer an algorithm for structuring human and machine interactions based on recent state-of-the-art machine learning methods.
  • Wu, Z., et al. (2016). Cyborg intelligence: Recent progress and future directions. IEEE Intelligent Systems, 31(6), 44-50.
    • This paper gives practical advice regarding how to integrate human and machine intelligence by proposing frameworks to decode neural signals. It covers multimodal sensory integration, the cognitive cooperation and awareness of goals between the human mind and machine elements required for effective collaboration, and recent applications of brain signal decoding such as hand gesture recognition. 

Chapter 17. Are Sentient AIs Persons? (Mark Kingwell)⬆︎

  • Anderson, S. L. (2016). Asimov’s “Three Laws of Robotics” and machine metaethics. In S. Schneider (Ed.), Science fiction and philosophy: From time travel to superintelligence (2nd ed., pp. 290-307). Wiley-Blackwell.
    • This chapter argues that treating intelligent robots like slaves would be misguided. Such entities could, in principle, follow and advise on ethical principles better than most humans, and even warrant consideration for moral standing, or rights.
  • Ashrafian, H. (2017). Can artificial intelligences suffer from mental illness? A philosophical matter to consider. Science and Engineering Ethics, 23(2), 403-412. https://doi.org/10.1007/s11948-016-9783-0
    • The author of this paper suggests that AI has the capacity to achieve consciousness, sentience and rationality. Given this potential, the author argues that it is important to also consider what ‘mental disorders’ machines might suffer. 
  • Basl, J. & Sandler, R. (2013). The good of non-sentient entities: Organisms, artifacts, and synthetic biology. Studies in History and Philosophy of Biological and Biomedical Sciences, 44(4), 697-705. http://dx.doi.org/10.1016/j.shpsc.2013.05.017
    • Basl and Sandler employ an etiological account of teleology to demonstrate that certain non-sentient entities can have a good. The authors do not mean for this notion of ‘good’ to be understood in a morally loaded sense, although it may contribute to the project of machine ethics with a teleological basis, and lay the groundwork for a broader conception of moral standing, or rights.
  • Bentham, J. (2018). An introduction to the principles of morals and legislation. Forgotten Books. 
    • This is the first major work on classical utilitarianism, which provides one lens through which the ethical status of machines may be viewed. Bentham lays out the principle of utility; elsewhere, he famously dismissed the idea of natural rights as “nonsense upon stilts.” For him, moral standing is conferred not by the capacity to think or speak, but by the capacity to suffer.
  • Bergenfalk, J. (2019). AI and human rights: An explorative analysis of upcoming challenges. Human Rights Studies. http://lup.lub.lu.se/student-papers/record/8966323
    • This paper explores the challenges that AI systems present for current definitions of human rights, focusing on four topics: consciousness, rights and agency, bias and discrimination, and socio-economic rights. The author argues that current guidelines are inadequate to accommodate the changes brought by AI, particularly those related to efficiency and human imperfection.
  • Birhane, A., & van Dijk, J. (2020). Robot rights? Let’s talk about human welfare instead. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 207-213). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375855
    • The authors of this paper argue that robots are artifacts that function as mediators of human beings, and therefore should not be granted rights. Instead, the authors believe the current debate on ‘robot rights’ should focus on how less privileged communities can be exploited by machines, and on the effect of this phenomenon on overall human welfare.
  • Chalmers, D. J. (1996). The conscious mind: In search of a fundamental theory. Oxford University Press.
    • Chalmers defends the view that consciousness is irreducibly subjective. Of particular interest here, he supports the possibility of artificial general intelligence, taking on Searle’s Chinese room argument, among other objections.
  • Deutsch, D. (2019). Beyond reward and punishment. In J. Brockman (Ed.), Possible minds: 25 ways of looking at AI (pp. 113-124). Penguin Press.
    • Deutsch argues that certain misconceptions about human thinking have led to misconceptions about machine thinking. He demonstrates the inadequacy of Bayesian updating approaches to artificial general intelligence, and the need to better understand creativity. Artificial general intelligence – machines with no specifiable functionality – is achievable, however, and such entities would be persons.
  • Dick, P. K. (1968).* Do androids dream of electric sheep? Doubleday.
    • A science fiction classic, following one bounty hunter’s pursuit of runaway androids. The novel raises philosophical issues, such as the possibility of empathic machines. 
  • Dragan, A. (2019). Putting the human into the AI equation. In J. Brockman (Ed.), Possible minds: 25 ways of looking at AI (pp. 134-142). Penguin Press.
    • Highlights the importance of defining human-compatible AI in the context of the coordination problem and the value-alignment problem. Our relationship with intelligent machines should go both ways; that is, robots must model people, and people must model robots – properly.
  • Freud, S. (2003).* The uncanny (D. McLintock, Trans.). Penguin Classics.
    • Contains an essay by Freud of the same title, wherein he analyzes the concept of uncanniness. Freud discusses a number of uncanny motifs, such as the automaton.
  • Gleiser, M. (2015). Welcome to your transhuman self. In J. Brockman (Ed.), What to think about machines that think: Today’s leading thinkers on the age of machine intelligence (pp. 54-55). Harper Perennial.
    • This chapter reflects on the human-machine integration scenario. Gleiser points out that this process of cyborgization is already underway, with cell phones and social media existing along the same spectrum as mechanical limbs and brain implants.
  • Gunkel, D. J. (2019). No brainer: Why consciousness is neither a necessary nor sufficient condition for AI ethics. CEUR Workshop Proceedings, 2287, 9. http://ceur-ws.org/Vol-2287/
    • This paper argues that questions of the moral and legal status of AI should focus on extrinsic social relationships (a ‘relational turn’) rather than on intrinsic, ontological properties such as sentience and consciousness.
  • Harris, J., & Anthis, J. R. (2021). The moral consideration of artificial entities: A literature review. arXiv preprint arXiv:2102.04215
    • This paper presents a literature review of 294 relevant papers on the topic of whether robots deserve rights or any other form of moral consideration. The authors find that the number of publications on this topic is growing exponentially, and that most scholars view artificial entities as potentially warranting moral consideration.
  • Hayward, T. (2005).* Constitutional environmental rights. Oxford University Press.
    • This book makes the case for the human right to an adequate environment. This would be a right to nature, rather than a right of nature. One might consider a similar arrangement for some robots, or artificial intelligence systems, where rights concerning them are conceived as an extension of human rights. We already observe discussion about rights to technology, as with calls for a right to the Internet.
  • Johnson, D. G. & Verdicchio, M. (2018). Why robots should not be treated like animals. Ethics and Information Technology, 20, 291-301. https://doi.org/10.1007/s10676-018-9481-5
    • The authors suggest that the analogies sometimes drawn between animals and robots, in relation to how humans might think about interacting with the latter, are misleading. For example, the authors do not believe robots can suffer, which has implications for moral status and rights.
  • Kant, I. (1993).* Grounding for the metaphysics of morals (3rd ed., J. W. Ellington, Trans.). Hackett Publishing Company. (Original work published 1785).
    • A central work in deontological ethics, as well as moral philosophy and rights theory more generally. This work contains arguments for the dignity and sovereignty of all moral agents.
  • Kymlicka, W. (1995).* Multicultural citizenship: A liberal theory of minority rights. Oxford University Press.
    • Liberal theory commonly construes rights as individualistic. Kymlicka argues that this tradition is compatible with a more collective understanding of them. These might concern language rights, group representation, or religious education – not at the level of particular people, but entire identities. Notice that, as with rights attributed to animals, or the environment, this is a case where the bearer of rights is unable to explicitly claim them, which may also apply to some artefacts, and robots.
  • Korsgaard, C. M. (2018). Fellow creatures: Our obligations to the other animals. Oxford University Press.
    • Korsgaard challenges Kant’s view that our obligations to non-human animals are indirect – say, to cultivate certain morally appropriate sensibilities. All sentient creatures have a good and, in a sense, warrant treatment as ends-in-themselves. Korsgaard’s account suggests how strands of Aristotelian and Kantian thought might imply regard for conscious machines.
  • Lavelle, S. (2020). The machine with a human face: From artificial intelligence to artificial sentience. In S. Dupuy-Chessa & H. Proper (Eds.), International Conference on Advanced Information Systems Engineering (pp. 63-75). Springer. https://doi.org/10.1007/978-3-030-49165-9_6
    • This book chapter argues that given the evolution of AI, the definition of ‘artificial intelligence’ is transforming to resemble ‘artificial sentience.’ However, the author argues that the traditional ‘Turing Test’ is an insufficient method of measuring this progress, and new tests need to be developed with conditions that can satisfy the concept of a ‘humaniter.’  
  • Lima, G., et al. (2020). Collecting the public perception of AI and robot rights. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2), 135. https://doi.org/10.1145/3415206
    • The authors explore public perception of granting rights to robots, using the findings from a large online experiment with 1270 participants. The authors find that while users are against robot rights, they are supportive of preventing ‘electronic cruelty.’ In addition, the authors find that how AI is presented to users influences how positively they perceive their relationship with AI systems.
  • Locke, J. (1980).* Second treatise of government (C. B. Macpherson, Ed.). Hackett Publishing Company.
    • A canonical source on the social contract and natural rights, which may influence how we think about their application to artificial intelligence. Pivotal in the development of liberal norms, the text defends a basis for personal freedom and private property, as well as ownership of one’s body and labour.
  • Merleau-Ponty, M. (2012). Phenomenology of perception (D. A. Landes, Trans.). Routledge. 
    • A text in the tradition of French existentialism, Merleau-Ponty elaborates on the primacy of perception. His discussion includes the topic of embodied phenomenology, which has influenced subsequent thinking about embodied cognition, and its relevance to artificial intelligence.
  • Mosakas, K. (2020). On the moral status of social robots: Considering the consciousness criterion. AI & Society, 35(4), 1-15. https://doi.org/10.1007/s00146-020-01002-1
    • Mosakas argues that AI systems do not deserve moral consideration because they fail to meet the consciousness criterion. The author defends this argument through a set of definitions of ‘consciousness’ that, the author believes, AI systems will not satisfy.
  • Nyholm, S., & Smids, J. (2020). Can a robot be a good colleague? Science and Engineering Ethics, 26(4), 2169–2188. https://doi.org/10.1007/s11948-019-00172-6
    • In this paper, the authors explore the unique ethical implications of the concept of robots working as colleagues, and how this relationship compares to friendships and romantic partnerships with humans. 
  • Pinker, S. (2015). Thinking does not imply subjugating. In J. Brockman (Ed.), What to think about machines that think: Today’s leading thinkers on the age of machine intelligence (pp. 5-8). Harper Perennial.
    • Pinker explains how a naturalistic, computational theory of reason opens the door to thinking machines. However, our fear of this prospect is unfounded, insofar as it stems from the projection of a malevolent, domineering psychology onto the very concept of intelligence.
  • Saavedra-Rivano, N. (2020). Mankind at a crossroads: The future of our relation with AI entities. International Journal of Software Science and Computational Intelligence, 12(3), 28–37. https://doi.org/10.4018/IJSSCI.2020070103
    • The author examines the impact of artificial sentient systems on mankind and argues that while the short-term prospect may be positive, in the long-term this technology will only benefit the ‘privileged minority’ in becoming ‘superhumans.’ The paper also explores policy measures that can be taken to prevent this from happening. 
  • Robertson, G. (2013).* Crimes against humanity: The struggle for global justice (4th ed.). New Press.
    • Includes numerous examples of contemporary crimes against humanity. Relevant here for the distinction between these and war crimes. 
  • Scanlon, T. M. (1998). What we owe to each other. Belknap Press.
    • Presents a modern form of contractualism. In the first part of the book, Scanlon argues for reasons fundamentalism, as well as against consequentialism and hedonism. In the second part of the book, he provides an account of wrongness as that which one could reasonably reject. Scanlon suggests entities that cannot speak for themselves may nevertheless be accommodated by his system through advocates. Humans could, perhaps, assume the role of trustee to represent the interests of machines.
  • Searle, J. R. (1980).* Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417-457. http://dx.doi.org/10.1017/S0140525X00005756
    • Includes Searle’s Chinese room argument, the upshot of which is that programs run by digital computers cannot be shown to possess understanding, or consciousness. The argument opposes functionalism and computationalism in philosophy of mind, as well as the possibility of artificial general intelligence.
  • Shelley, M. (2013).* Frankenstein; or, The modern prometheus (M. Hindle, Ed.). Penguin Classics. (Original work published 1818).
    • A gothic horror and science fiction classic, Frankenstein depicts a scientist by that same name, who succeeds in creating intelligent life.
  • Singer, P. (2009).* Animal liberation (Updated edition). HarperCollins Publishers.
    • A major contribution to the animal liberation movement. Singer’s argument for the equality of animals rests not on some conception of rights, but a preference utilitarian perspective. Exemplifies the theme of our expanding moral circle, and how it may grow to include conscious machines.
  • Stamos, D. N. (2016). The myth of universal human rights: Its origin, history, and explanation, along with a more humane way. Routledge.
    • Engages in an evolutionary debunking of universal human rights. Stamos develops on the idea that natural selection reveals the category of ‘human’ to be an unstable one. 
  • Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Alfred A. Knopf.
    • Covers the topics of intelligence, goal-directedness, and the future of artificial intelligence. Tegmark proposes a theory of consciousness according to which subjective experience is a matter of information being processed in a particular kind of way. He places this in the context of a broadly utilitarian ethic, which ascribes moral standing to conscious machines.
  • United Nations. (1948, December 10).* Universal declaration of human rights. https://www.un.org/en/universal-declaration-human-rights/
    • A significant 20th century document on the establishment of universal human rights. Its 30 articles were adopted under United Nations Resolution 217 in Paris, on December 10th, 1948. 

Chapter 18. Autonomy (Michael Wheeler)⬆︎

  • Allen, C., et al. (2000).* Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12(3), 251–261.
    • This paper surveys disputes in machine ethics, considers the possibility of a ‘moral Turing Test,’ and assesses the computational difficulties accompanying the different types of approach. Human-like performance, which is prone to include immoral actions, may not be acceptable in machines, but moral perfection may be computationally unattainable. The risks posed by autonomous machines ignorantly or deliberately harming people and other sentient beings are great. The development of machines with enough intelligence to assess the effects of their actions on sentient beings, and to act accordingly, may ultimately be the most important task faced by the designers of artificially intelligent automata.
  • Arkin, R. C. (2010).* The case for ethical autonomy in unmanned systems. Journal of Military Ethics, 9(4), 332–341.
    • The underlying thesis of this research in ethical autonomy for lethal autonomous unmanned systems is that such systems could potentially perform more ethically on the battlefield than human soldiers do. In this article, the hypothesis is supported by ongoing and foreseen technological advances and, perhaps equally important, by an assessment of the fundamental ability of human war fighters in today’s battlespace. The author argues that if this goal of better-than-human performance is achieved, even imperfectly, it can result in a reduction in non-combatant casualties and property damage consistent with adherence to the Laws of War as prescribed in international treaties and conventions, and is thus worth pursuing vigorously.
  • Asaro, P. (2008).* How just could a robot war be? In P. Brey, A. Briggle & K. Waelbers (Eds.), Current issues in computing and philosophy (pp. 50-64). Ios Press. 
    • This paper considers the fundamental issues of justice involved in the application of autonomous and semi-autonomous robots in warfare, beginning with an analysis of how robots may fit into the framework of just war theory. It considers how robots, “smart” bombs, and other autonomous technologies might challenge the principles of just war theory, and how international law might be designed to regulate them. It concludes that deep contradictions arise between the principles intended to govern warfare and our intuitions regarding the application of autonomous technologies to war fighting.
  • Awad, E., et al. (2018).* The moral machine experiment. Nature, 563(7729), 59–64.
    • To address the challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour, the authors deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. Here, the authors describe the results of this experiment. The paper summarizes global moral preferences; documents individual variations in preferences based on respondents’ demographics; reports cross-cultural ethical variation; and uncovers three major clusters of countries. Finally, the authors argue that these differences correlate with modern institutions and deep cultural traits.
  • Boden, M. A. (1996).* Autonomy and artificiality. In M. A. Boden (Ed.) The philosophy of artificial life (pp. 95-107). Oxford University Press.
    • This chapter appears in a volume in the Oxford Readings in Philosophy series collecting philosophical work in the fast-growing interdisciplinary area of artificial life. Artificial life research seeks to synthesize the characteristics of life by artificial means, particularly employing computer technology. The volume’s essays explore such themes as the nature of life, the relation between life and mind, and the limits of technology; Boden’s chapter addresses what autonomy amounts to in artificial systems.
  • Boden, M. A. (2016). AI: Its nature and future. Oxford University Press.
    • This book describes how research in artificial intelligence has provided fruitful results in robotics and theoretical biology and covers the history of the increasingly specialized field of AI, highlighting its successes and looking towards its future. Finally, it argues that AI has been valuable in helping to understand the mental processes of memory, learning and language for living creatures. 
  • Bostrom, N. (2014).* Superintelligence: Paths, dangers, strategies. Oxford University Press.  
    • This book argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists, and the outcome could be an existential catastrophe for humans.
  • Coeckelbergh, M. (2013). Drones, information technology, and distance: Mapping the moral epistemology of remote fighting. Ethics and Information Technology, 15(2), 87–98.
    • This paper argues that drone fighting, like other long-range fighting, creates epistemic and moral distance in so far as ‘screen fighting’ implies the disappearance of the vulnerable face and body of the opponent and thus removes moral-psychological barriers to killing. However, the paper also argues that this influence is at least weakened by current surveillance technologies, which make possible a kind of ‘empathic bridging’ by which the fighter’s opponent on the ground is re-humanized, re-faced, and re-embodied. The paper asserts that ‘mutation’ or unintended ‘hacking’ of the practice is a problem for drone pilots and for those who order them to kill but revealing its moral-epistemic possibilities opens up new avenues for imagining morally better ways of technology-mediated fighting.
  • Dennett, D. C. (1984).* Elbow room: The varieties of free will worth wanting. MIT Press.
    • This book argues that classical formulations of the free will problem in philosophy depend on misuses of imagination, and the author disentangles the philosophical problems of real interest from the “family of anxieties” they get enmeshed in – imaginary agents, bogeymen, and dire prospects that seem to threaten our freedom. The author examines the problem of how anyone can ever be guilty, and what the rationale is for holding people responsible and even, on occasion, punishing them.
  • Gill, M. L., & Lennox, J. G. (2017). Self-motion: From Aristotle to Newton. Princeton University Press.
    • This book contains a collection of essays on the historical development of the concept of self-motion. The authors’ discussion of the existence of self-movers and the qualities of self-motion includes perspectives from classical, Hellenistic, medieval, and early modern scholars in philosophy and science. The implications of arguments surrounding self-motion are fundamental to many theories on agency and autonomy across relevant disciplines in philosophy, science, technology, law, and society.
  • Gunkel, D. J. (2017). Mind the gap: Responsible robotics and the problem of responsibility. Ethics and Information Technology. https://doi.org/10.1007/s10676-017-9428-2
    • This essay responds to the question concerning robots and responsibility, by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. The essay considers three instances where recent innovations in robotics challenge this standard operating procedure by opening gaps in the usual way of assigning responsibility. Finally, the essay concludes by evaluating the three different responses—instrumentalism 2.0, machine ethics, and hybrid responsibility—that have been made in face of these difficulties in an effort to map out the opportunities and challenges of and for responsible robotics.
  • Guo, X., et al. (2014). Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. Advances in Neural Information Processing Systems, 4, 3338–3346.
    • This paper introduces an algorithm for playing Atari games. The authors’ approach involves an agent estimating the value of a possible action by running several simulations, and describes how the agent can efficiently use the results of those simulations to adjust its policy for choosing actions. The Monte-Carlo tree search planner yields an efficient training signal by exploring and accumulating simulated action sequences. Because this tree search is computationally expensive, it is used only “offline,” to train the models; once trained, the programs play in real time using only their learned parameters. A minimal Monte-Carlo rollout sketch appears after this list.
  • Heyns, C. (2017).* Autonomous weapons in armed conflict and the right to a dignified life: An African perspective. South African Journal on Human Rights, 33(1), 46–71.
    • This article argues that the question that will haunt the future debate over autonomous weapons is: What if technology develops to the point where it is clear that fully autonomous weapons surpass human targeting, and can potentially save many lives? Would human rights considerations in such a case not militate for the use of autonomous weapons, instead of against it? This article argues that the rights to life and dignity demand that even under such circumstances, full autonomy in force delivery should not be allowed. The article emphasises the importance placed on the concept of a ‘dignified life’ in the African human rights system.
  • Kang, M. (2011). Sublime dreams of living machines: The automaton in the European imagination. Harvard University Press.
    • This book gives a detailed history of Western thought on automation by examining developments in intellectual, cultural, and artistic expressions of automata. The author argues for a distinction from ancient conceptions of animated objects and outlines the development of mechanistic philosophy. The book describes the influence of automata across disciplines through its appearance in works such as Descartes’ model of biological mechanism and Hobbes’s Leviathan to more modern developments and influences. 
  • Mindell, D. A. (2015).* Our robots, ourselves: Robotics and the myths of autonomy. Penguin.
    • This book argues that the stark lines we’ve drawn between human and not human, manual and automated, are not helpful for understanding our relationship with robotics. The book clarifies misconceptions about the autonomous robot, offering instead a hopeful message about what the author calls “rich human presence” at the center of the technological landscape we are now creating.  
  • Mnih, V., et al. (2015).* Human-level control through deep reinforcement learning. Nature, 518(7540), 529–533.
    • In this paper, the authors use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. The research demonstrates that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture, and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent capable of learning to excel at a diverse array of challenging tasks. A tabular sketch of the underlying Q-learning update appears after this list.
  • Niker, F., et al. (2018).* Updating ourselves: Synthesizing philosophical and neurobiological perspectives on incorporating new information into our worldview. Neuroethics, 11(3), 273–282.
    • This paper argues for the importance, to theories of autonomous agency, of the capacity to appropriately adapt our values and beliefs to changing circumstances in light of relevant experiences and evidence. It presents a plausible philosophical account of this process, one generally applicable to theories about the nature of autonomy, internalist and externalist alike. The paper then evaluates this account by providing a model for how the incorporation of values might occur in the brain, inspired by recent theoretical and empirical advances in our understanding of the neural processes by which our beliefs are updated by new information.
  • Lin, P. (2016).* Why ethics matters for autonomous cars. In M. Maurer, C. Gerdes, B. Lenz, & H. Winner (Eds.), Autonomous Driving: Technical, Legal and Social Aspects (pp. 69-85). Springer.
    • This chapter explains why ethics matters for autonomous road vehicles, looking at the most urgent area of their programming. The chapter acknowledges that as nearly all of this work is still in front of the industry, the questions raised do not have any definitive answers at such an early stage of the technology.
  • Reynolds, C. (1987). Flocks, herds, and schools: A distributed behavioral model. ACM SIGGRAPH Computer Graphics, 21, 25–34. https://doi.org/10.1145/280811.281008
    • This paper introduces an artificial life program that simulates flocking behaviour in nature. The simulation describes the motion of a collection of “bird-oid objects” called boids. The flocking is an emergent behaviour arising from the interaction of each boid individually following simple rules. Individual boid rules, such as avoiding or moving toward nearby boids, produce complex behaviour in aggregate. The paper describes more complex individual boid rules that allow flock obstacle avoidance and goal seeking. The findings of this paper can be applied to computer graphics and the creation of realistic flock animations for video games and movies. A minimal boids sketch appears after this list.
  • Rusu, A. A., et al. (2016).* Progressive neural networks. arXiv preprint arXiv:1606.04671
    • Learning to solve complex sequences of tasks, while both leveraging transfer and avoiding catastrophic forgetting, remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: such networks are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. The paper evaluates this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games) and shows that it outperforms common baselines based on pretraining and fine-tuning. Using a novel sensitivity measure, the paper asserts that transfer occurs at both the low-level sensory and high-level control layers of the learned policy. A numpy sketch of the lateral-connection idea appears after this list.
  • Santoni de Sio, F., & Van den Hoven, J. (2018).* Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI, 5(15). https://doi.org/10.3389/frobt.2018.00015
    • This paper lays the foundation of a philosophical account of meaningful human control, based on the concept of “guidance control” as elaborated in the philosophical debate on free will and moral responsibility. Following the ideals of “Responsible Innovation” and “Value-sensitive Design,” the paper’s account of meaningful human control is cast in the form of design requirements. It identifies two general necessary conditions to be satisfied for an autonomous system to remain under meaningful human control: first, a “tracking” condition, according to which the system should be able to respond to both the relevant moral reasons of the humans designing and deploying the system and the relevant facts in the environment in which the system operates; second, a “tracing” condition, according to which the system should be designed in such a way as to grant the possibility to always trace back the outcome of its operations to at least one human along the chain of design and operation.
  • Sharkey, A. (2019).* Autonomous weapons systems, killer robots and human dignity. Ethics and Information Technology, 21(2), 75–87.
    • This paper critically examines the relationship between human dignity and autonomous weapon systems (AWS). Three main types of objection to AWS are identified: (i) arguments based on technology and the ability of AWS to conform to international humanitarian law; (ii) deontological arguments based on the need for human judgement and meaningful human control, including arguments based on human dignity; and (iii) consequentialist reasons concerning their effects on global stability and the likelihood of going to war. An account is provided of the claims made about human dignity and AWS, of the criticisms of these claims, and of the several meanings of ‘dignity’. It is concluded that although there are several ways in which AWS can be said to be against human dignity, they are not unique in this respect.
  • Sharkey, N. (2012).* Killing made easy: From joysticks to politics. In P. Lin, G. Bekey, and K. Abney (Eds.), Robot Ethics: The Ethical and Social Implications of Robotics (pp. 111-128). MIT Press. 
    • This chapter provides an overview of novel war technologies, which make killing at a distance easier than ever before. The author argues that the current ethical guidelines the United States government has adopted do not sufficiently address the ethical concerns raised by such technologies. Furthermore, the chapter argues that international ethical guidelines for fully autonomous killer robots are urgently needed.
  • Sharkey, N. (2009).* Death strikes from the sky: The calculus of proportionality. IEEE Technology and Society Magazine28(1), 16-19.
    • The use of unmanned aerial vehicles (UAVs) in the conflict zones of Iraq and Afghanistan for both intelligence gathering and “decapitation” attacks has been heralded as an unprecedented success by U.S. military forces. This article argues that there is a danger of over-trusting and overreaching the technology, particularly with respect to protecting innocents in war zones, and that this raises serious ethical issues and pitfalls. The article argues that it is time to reassess the meanings of discrimination and proportionality in the deployment of UAVs in 21st-century warfare.
  • Silver, D., et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529, 484–489. https://doi.org/10.1038/nature16961
    • This paper introduces the AlphaGo algorithm for learning to play the board game Go. The authors’ approach involves learning deep neural networks for the value function used to evaluate board positions and the policy function used to select moves. The program published in this paper was the first to defeat a professional human player in full-sized Go. After additional training and adjustment, this program later won 4-1 against world champion Lee Sedol.
  • Silver, D., et al. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354–359. https://doi.org/10.1038/nature24270
    • This paper introduces an algorithm for learning to play Go called AlphaGo Zero. The method builds on the authors’ previously published AlphaGo program, which was trained by a combination of self-play and supervised learning on human expert gameplay data. The primary distinction is that AlphaGo Zero requires only self-play to learn the value and policy model parameters; this reinforcement learning algorithm requires no human gameplay data or strategy to assist model training. Given only the rules of the game, AlphaGo Zero won 100-0 against the previously published AlphaGo program.
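Both AlphaGo papers guide their tree search with a variant of the PUCT selection rule, which trades a move's mean simulation value against a policy-network prior that is discounted by visit count. A minimal sketch of that selection step follows; the numbers and the `c_puct` constant are illustrative.

```python
# PUCT-style move selection as used (in variant forms) by AlphaGo and
# AlphaGo Zero during Monte Carlo tree search.
import numpy as np

def select_action(Q, N, P, c_puct=1.0):
    """Pick the move maximizing Q + U, where U favours moves the policy
    network rates highly (P) but that have few visits so far (N)."""
    U = c_puct * P * np.sqrt(N.sum()) / (1 + N)
    return int(np.argmax(Q + U))

# Toy example with three legal moves.
Q = np.array([0.5, 0.2, 0.1])   # mean value of simulations through each move
N = np.array([10, 2, 0])        # visit counts
P = np.array([0.2, 0.3, 0.5])   # prior probabilities from the policy network
print(select_action(Q, N, P))   # the unvisited, high-prior move gets explored
```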
  • de Solla Price, D. J. (1964). Automata and the origins of mechanism and mechanistic philosophy. Technology and Culture, 5(1), 9–23. https://doi.org/10.2307/3101119
    • This essay describes the development of mechanistic philosophy and its relationship with automata. The essay discusses whether the contemporary development of artificial automata motivated the growth of mechanistic philosophy. The author argues that the technological developments in mechanical devices, scientific theory, and mechanistic philosophy are part of a proposed intellectual tradition concerning automata. The essay describes historical ideas about automata and connects them to the origins of mechanistic philosophy. Modern readers should note that the arguments in this essay rely on an outdated teleological view of the significance of premodern automata for the development of mechanistic philosophy.
  • Sparrow, R. (2007).* Killer robots. Journal of Applied Philosophy, 24(1), 62-77.
    • This paper considers the ethics of the decision to send artificially intelligent robots into war by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally be described as a war crime, and argues that no current answer to this question is ultimately satisfactory. The paper argues that it is a necessary condition for fighting a just war, under the principle of jus in bello, that someone can be justly held responsible for deaths that occur in the course of the war; as this condition cannot be met in relation to deaths caused by an autonomous weapon system, it would be unethical to deploy such systems in warfare.
  • Szegedy, C., et al. (2013).* Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199
    • This paper reports two counterintuitive properties of deep neural networks. First, the authors find that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis. This suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, the authors find that deep neural networks learn input-output mappings that are discontinuous to a significant extent: small, imperceptible perturbations of an input can change the network’s prediction. Such perturbed inputs are now known as adversarial examples.
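The second property is easy to demonstrate in miniature. Szegedy et al. find their perturbations with box-constrained L-BFGS; the sketch below instead uses plain gradient ascent on the loss of a toy logistic model, which conveys the same idea. The model weights, step size, and iteration count are all illustrative assumptions.

```python
# Find a small input perturbation that flips a toy logistic model's
# prediction by ascending the loss with respect to the *input*.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=20), 0.0           # fixed "trained" weights (illustrative)
x = rng.normal(size=20)                   # a clean input
label = 1.0 if w @ x + b > 0 else 0.0     # the model's own prediction on x

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x_adv = x.copy()
for _ in range(2000):
    p = sigmoid(w @ x_adv + b)
    grad = (p - label) * w                # d(cross-entropy)/d(input)
    x_adv += 0.05 * grad                  # ascend the loss
    if (w @ x_adv + b > 0) != (label == 1.0):
        break                             # prediction has flipped

print("perturbation norm:", np.linalg.norm(x_adv - x))
```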
  • Tesauro, G. (1995). Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3), 58–68.
    • This paper describes TD-Gammon, a computer program that learns to play backgammon. The primary contribution of this work is to demonstrate how an agent can learn a value function to evaluate its position in the game. The temporal difference approach refers to optimizing the learned parameters to reduce the difference between the evaluation of the previous position and the evaluation of the current position. This early work on learning a value function from past positions was foundational to modern methods that learn by simulating future positions.
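The temporal difference update itself is compact. The toy tabular version below applies it to a small random-walk prediction task; TD-Gammon used TD(lambda) with a neural network evaluator, so this is an illustration of the principle rather than a reconstruction of the program.

```python
# TD(0) on a 5-state random walk: nudge the value of the previous state
# toward the value of the current state (and toward the final outcome).
import numpy as np

rng = np.random.default_rng(0)
N_STATES, ALPHA = 5, 0.1
V = np.zeros(N_STATES + 2)     # states 0 and 6 are terminal
V[-1] = 1.0                    # reaching the right end counts as a win

for episode in range(1000):
    s = 3                      # start in the middle
    while 0 < s < N_STATES + 1:
        s_next = s + rng.choice([-1, 1])
        # TD update: move V[s] toward the evaluation of the next position.
        V[s] += ALPHA * (V[s_next] - V[s])
        s = s_next

print(V[1:-1])   # approaches the true win probabilities [1/6, 2/6, ..., 5/6]
```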
  • Vamplew, P., et al. (2018). Human-aligned artificial intelligence is a multiobjective problem. Ethics and Information Technology, 20(1), 27-40.
    • This article argues that ethical frameworks for AI which consider multiple potentially conflicting factors can be represented as utility functions, but that the widely used Maximum Expected Utility (MEU) paradigm provides insufficient support for such multiobjective decision-making. The article argues that a multiobjective MEU paradigm based on the combination of vector utilities and non-linear action selection can overcome many of the issues which limit MEU’s effectiveness in implementing aligned AI. The article examines existing approaches to multiobjective AI and identifies how these can contribute to the development of human-aligned intelligent agents.
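The contrast the authors draw can be illustrated with vector-valued utilities. In the toy sketch below, the numbers and the safety-threshold rule are illustrative assumptions rather than the article's formulation: linear scalarization (standard MEU) picks a high-reward but unsafe action, while a simple non-linear selection rule that filters on safety first does not.

```python
# Vector utilities: columns are (task reward, safety margin) for 3 actions.
import numpy as np

U = np.array([[12.0, -5.0],    # high reward, unsafe
              [ 6.0,  0.5],
              [ 4.0,  1.0]])

# Linear scalarization (standard MEU): the unsafe action wins.
weights = np.array([1.0, 1.0])
meu_choice = int(np.argmax(U @ weights))                 # -> action 0

# Non-linear selection: require non-negative safety, then maximize reward.
acceptable = U[:, 1] >= 0.0
mo_choice = int(np.argmax(np.where(acceptable, U[:, 0], -np.inf)))  # -> action 1

print(meu_choice, mo_choice)
```

No fixed weight vector reproduces the thresholded choice across all utility values, which is one way to see why the authors argue for non-linear action selection over pure scalarization.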
  • Watkins, C. J., & Dayan, P. (1992). Q-learning. Machine Learning, 8(3–4), 279–292.
    • This paper introduces a foundational algorithm and concepts in reinforcement learning. The framework considers agents that occupy states and perform actions to transition to other states. The Q-learning algorithm describes how agents assign numerical value estimates (Q-values) to potential actions and learn to take actions that maximize expected cumulative reward. Q-learning is called a model-free algorithm because the agent does not require a model of the environment’s dynamics; it only needs to observe states, actions, and rewards.
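The core of the paper is a one-line update rule. The sketch below runs tabular Q-learning with epsilon-greedy exploration on a toy chain environment; the environment and the parameter values are illustrative assumptions, but the update is the one the paper introduces.

```python
# Tabular Q-learning on a 5-state chain: actions are left/right, and
# reward 1 is given only on reaching the rightmost state.
import numpy as np

N_S, N_A = 5, 2                           # states, actions (0=left, 1=right)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3
Q = np.zeros((N_S, N_A))
rng = np.random.default_rng(0)

def step(s, a):
    s2 = max(s - 1, 0) if a == 0 else min(s + 1, N_S - 1)
    return s2, float(s2 == N_S - 1)       # reward only at the goal state

for episode in range(200):
    s = 0
    while s != N_S - 1:
        a = rng.integers(N_A) if rng.random() < EPS else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # Q-learning update: bootstrap from the best action in the next state.
        Q[s, a] += ALPHA * (r + GAMMA * Q[s2].max() - Q[s, a])
        s = s2

print(np.argmax(Q, axis=1))   # greedy policy for states 0-3 is "right" (1)
```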
  • Yudkowsky, E. (2006).* Artificial intelligence as a positive and negative factor in global risk. In N. Bostrom & M. M. Cirkovic (Eds.), Global Catastrophic Risks (pp. 308–345). Oxford University Press.
    • This paper argues that the greatest danger of artificial intelligence is that people have a false understanding of it. Specifically, the paper argues that our tendency to anthropomorphize AI prevents us from truly understanding it.

Chapter 19. Troubleshooting AI and Consent (Meg Leta Jones and Elizabeth Edenberg)⬆︎

  • Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. John Wiley & Sons.
    • This book argues that automation, far from being a sinister story of racist programmers scheming on the dark web, has the potential to hide, speed up, and deepen discrimination while appearing neutral and even benevolent when compared to the racism of a previous era. Presenting the concept of the “New Jim Code,” the author examines how a range of discriminatory designs encode inequity by explicitly amplifying racial hierarchies; by ignoring but thereby replicating social divisions; or by aiming to fix racial bias but ultimately doing quite the opposite. The book makes the case for race itself as a kind of technology, designed to stratify and sanctify social injustice in the architecture of everyday life.
  • Bostrom, N. (2014).* Superintelligence: Paths, dangers, strategies. Oxford University Press.  
    • This book argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists, and the outcome could be an existential catastrophe for humans.
  • Brayne, S. (2017). Big data surveillance: The case of policing. American Sociological Review, 82(5), 977-1008.
    • This article examines the intersection of two structural developments: the growth of surveillance and the rise of “big data.” Drawing on observations and interviews conducted within the Los Angeles Police Department, the paper offers an empirical account of how the adoption of big data analytics does—and does not—transform police surveillance practices. It argues that the adoption of big data analytics facilitates amplifications of prior surveillance practices and fundamental transformations in surveillance activities.
  • Breen, S., et al. (2020). GDPR: Is your consent valid? Business Information Review, 37(1), 19-24.
    • This article explores the philosophical background of consent, examines the circumstances that set off the debate on consent, and attempts to develop an understanding of the concept in the context of the growing influence of information systems and the data-driven economy. The article argues that the General Data Protection Regulation (GDPR) has gone further than any other regulation or law to date in developing an understanding of consent to address personal data and privacy concerns.
  • Bridges, K. M. (2017).* The poverty of privacy rights. Stanford University Press.
    • This book argues that poor mothers in America have been deprived of the right to privacy. Presenting a holistic view of how the state intervenes in all facets of poor mothers’ privacy, the author argues that the Constitution has not been interpreted to bestow these women with family, informational, and reproductive privacy rights. The book further argues that until cultural narratives that equate poverty with immorality are disrupted, poor mothers will continue to be denied this right.
  • Broussard, M. (2018).* Artificial unintelligence: How computers misunderstand the world. MIT Press.
    • Making a case against technochauvinism, the belief that technology is always the solution, this book argues that social problems will not inevitably retreat before a digitally enabled utopia. The book argues that understanding the fundamental limits of technological capabilities will help the public make better ethical choices concerning their implementation.
  • Browne, S. (2015).* Dark matters: On the surveillance of blackness. Duke University Press.
    • This book argues that contemporary surveillance technologies and practices are informed by the long history of racial formation and by the methods of policing black life under slavery, such as branding, runaway slave notices, and lantern laws. Placing surveillance studies into conversation with the archive of transatlantic slavery and its afterlife, the book draws from black feminist theory, sociology, and cultural studies. The book asserts that surveillance is both a discursive and material practice that reifies boundaries, borders, and bodies around racial lines, so much so that the surveillance of blackness has long been, and continues to be, a social and political norm.
  • Casonato, C. (2021). AI and constitutionalism: The challenges ahead. In B. Braunschweig & M. Ghallab (Eds.), Reflections on Artificial Intelligence for Humanity (pp. 127-149). Springer. https://doi.org/10.1007/978-3-030-69128-8_9
    • This article promotes a human-centered approach to AI through a lens of constitutionalism. The author considers AI decision-making within contexts such as democracy, human rights, big data, and privacy. A set of new human rights is proposed as a constitutionally based human-centered framework for AI.
  • Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press.
    • This book discusses contemporary capitalism and its basis in data colonialism, drawing links between the colonial treatment of land and natural resources, and the current treatment of personal data by corporations. The authors theorize this complex form of data colonialism and turn the conversation to the future, discussing options for resistance.
  • Ferguson, A. G. (2017).* The rise of big data policing: Surveillance, race, and the future of law enforcement. NYU Press.
    • This book provides an overview of new technologies used in policing and argues for increased public awareness of the consequences of big data surveillance as a law enforcement tool. The book argues that technologies may distort constitutional protections but may also improve police accountability and remedy underlying socio-economic risk factors that encourage crime. 
  • Fotopoulou, A. (2020). Conceptualising critical data literacies for civil society organisations: agency, care, and social responsibility. Information, Communication & Society. https://doi.org/10.1080/1369118X.2020.1716041
    • This article explores data literacy and the debate surrounding its conceptualization, advancing that debate by questioning the usefulness of the concept. The author highlights the need for models and frameworks that promote data literacy among the public and within civil society organizations.
  • Grigorovich, A., & Kontos, P. (2020). Towards responsible implementation of monitoring technologies in institutional care. The Gerontologist, 60(7), 1194-1201. https://doi.org/10.1093/geront/gnz190
    • This paper discusses the implications of the influx of monitoring technologies in institutional care settings. The positive assumptions about, and sudden push for, the integration of these technologies result in gaps in current knowledge and literature, such as blurred understandings of consent. This review of current scholarship on monitoring technologies notes weak evidence of actual improvements and indications of unforeseen risks. The authors call for a more rigorous understanding of this technology, and for evidence of its risks and benefits in the medical setting.
  • Giannopoulou, A. (2020). Algorithmic systems: The consent is in the detail? Internet Policy Review, 9(1).
    • This article examines the transformation of consent in order to assess how the concept itself, as well as the applied models of consent, can be reconciled with both current data protection frameworks and algorithmic processing technologies. Safeguarding this fundamental aspect of individual control over personal data in the algorithmic era is particularly pressing: it is interlinked with how consent is practically implemented in the technology used, with the adopted interpretations of the concept of consent, with the scope of application of personal data, and with the obligations these frameworks enshrine.
  • Hibbin, R. A., et al. (2018). From “a fair game” to “a form of covert research”: Research ethics committee members’ differing notions of consent and potential risk to participants within social media research. Journal of Empirical Research on Human Research Ethics, 13(2), 149-159. https://doi.org/10.1177/1556264617751510
    • This document looks at research ethics committees’ (RECs) approaches to social media in terms of balancing ethical principles and public availability. Focusing on REC members from the United Kingdom, the authors investigate the challenges surrounding risk and consent that social media poses. The paper concludes that these challenges are actively considered by REC members and that their approaches to social media vary based on level of experience.
  • Jesus, V. (2020). Towards an accountable web of personal information: The web-of-receipts. Institute of Electrical and Electronics Engineers Access, 8, 25383-25394.
    • This paper reviews the current state of consent and ties it to a problem of accountability. The paper argues for a different approach to how the Web of Personal Information operates: an accountable Web built on Personal Data Receipts, which are able to protect both individuals and organisations.
  • Kaissis, G. A., et al. (2020). Secure, privacy-preserving and federated machine learning in medical imaging. Nature Machine Intelligence, 2(6), 305–311. https://doi.org/10.1038/s42256-020-0186-1
    • This article explores the relationship between AI and medical imaging, the potential for algorithm training in this field, and the obstacles of accessibility and patient privacy. It advocates for secure and privacy-preserving AI as a way to balance the protection of patient privacy with the revolutionizing possibilities of AI in medical imaging and clinical routine. 
  • Kim, N. S. (2019).* Consentability: Consent and its limits. Cambridge University Press.
    • This book analyzes the meaning of consent, introduces a consentability framework, and suggests ways to improve the conditions of consent and reduce opportunism. The book considers activities in three categories: first, self-directed activities; second, activities that have to do with a person’s bodily integrity; and third, novel procedures or cutting-edge experiments, asking whether people should be allowed to consent to something that has never been done before and about which there is little information on potential consequences.
  • Miller, F. G., & Wertheimer, A. (2010).* The ethics of consent: Theory and practice. Oxford University Press.
    • This book assembles the contributions of a distinguished group of scholars concerning the ethics of consent in theory and practice. Part One addresses theoretical perspectives on the nature and moral force of consent, and its relationship to key ethical concepts such as autonomy and paternalism. Part Two examines consent in a broad range of contexts, including sexual relations, contracts, selling organs, political legitimacy, medicine, and research.
  • Müller, A., & Schaber, P. (2018).* The Routledge Handbook of the Ethics of Consent. Routledge.
    • This handbook is divided into five main parts: general questions, normative ethics, legal theory, medical ethics, and political philosophy. This book examines debates and problems in these fields including: the nature and normative importance of consent, paternalism, exploitation and coercion, privacy, sexual consent, consent and criminal law, informed consent, organ donation, clinical research, and consent theory of political obligation and authority.
  • Norval, C., & Henderson, T. (2019). Automating dynamic consent decisions for the processing of social media data in health research. Journal of Empirical Research on Human Research Ethics. https://doi.org/10.1177/1556264619883715
    • This article presents an exploratory user study (n = 67) in which the authors find that they can predict the appropriate flow of health-related social media data with reasonable accuracy, while minimizing undesired data leaks. The article then deconstructs the findings of this study, identifying and discussing a number of real-world implications if such a technique were put into practice.
  • O’Connor, Y., et al. (2021). Implementing electronic consent aimed at people living with dementia and their caregivers: Did we forget those who forget? Proceedings of the 54th Hawaii International Conference on System Sciences, 3893-3902. http://hdl.handle.net/10125/71088
    • This paper questions the universal applicability of informed electronic consent (eConsent) by investigating the use of eConsent in the context of people living with dementia and their caregivers. Combining both political and technological perspectives, this study conducts a market review of mobile health applications. The authors note that the requirements for eConsent do not properly determine the capacity of the individual to understand the information presented to them and give informed consent, and argue that these issues are exacerbated for people with dementia. Overall, the critiques of eConsent in the context of people living with dementia can be applied to eConsent as a whole, and serve as a starting point for its future improvement. 
  • Pagallo, U. (2020). On the principle of privacy by design and its limits: Technology, ethics, and the rule of law. In S. Chiodo & V. Schiaffonati (Eds.), Italian Philosophy of Technology: Socio-Cultural, Legal, Scientific and Aesthetic Perspectives on Technology (pp. 111-127). Springer. https://doi.org/10.1007/978-3-030-54522-2_8
    • This chapter critically examines the principle of privacy by design. It looks at technological limits as well as ethical and legal considerations of the current debate surrounding privacy by design. In locating three distinct limits, the author proposes a more ethically sound version of privacy by design.
  • Papadimitriou, S., et al. (2019). Smart educational games and consent under the scope of General Data Protection Regulation. In 2019 10th International Conference on Information, Intelligence, Systems and Applications (IISA) (pp. 1-8). Institute of Electrical and Electronics Engineers.
    • This article focuses on General Data Protection Regulation’s principle of personal data processing consent and seeks balance between gaming amusement, educational benefits and regulatory compliance. The article combines legal theory and computer science in order to propose applicable solutions with the form of guidelines towards gaming stakeholders in general as well as educational gaming stakeholders in specific.
  • Pasquale, F. (2018).* The black box society. Harvard University Press.
    • This book exposes how powerful interests abuse secrecy for profit and explains ways to rein them in. It argues that demanding transparency is only the first step toward individuals having control over how big data affects their lives, and that an intelligible society would assure that the key decisions of its most important firms are fair, non-discriminatory, and open to criticism.
  • Price, W. N., & Cohen, I. G. (2019). Privacy in the age of medical big data. Nature Medicine, 25(1), 37–43. https://doi.org/10.1038/s41591-018-0272-7
    • This paper discusses the legal and ethical issues of big data in the medical field, specifically in terms of patient privacy. It outlines the limits of current policy, such as the US federal Health Insurance Portability and Accountability Act (HIPAA) and its Privacy Rule. The paper argues that going forward, a balance must be struck to avoid excessive under- or over-protection of privacy.
  • Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20(1), 5-14. https://doi.org/10.1007/s10676-017-9430-8
    • This paper proposes a framework, or an algorithmic social contract, with which AI can be regulated. Deemed “society-in-the-loop,” the author combines the concepts of human-in-the-loop and social contract theory to envision a governing paradigm between humans and algorithmic systems. 
  • Rule, J. B. (2007).* Privacy in peril: How we are sacrificing a fundamental right in exchange for security and convenience. Oxford University Press.
    • This book examines how personal data made available to virtually any organization for virtually any purpose is apt to surface elsewhere, applied to utterly different purposes. The book argues that as long as individuals willingly accept the pursuit of profit or cutting government costs as sufficient reason for intensified scrutiny over their lives, then privacy will remain endangered.
  • Sawchuk, K. (2019). Private parts: Aging, AI, and the ethics of consent in subscription-based economies. Innovation in Aging, 3(1). https://doi.org/10.1093/geroni/igz038.082
    • This paper explores artificial intelligence (AI) as a technological design offered to assist elder care, based on tracking individual behavior amassed in databases that are given predictive value through algorithm-identified normative patterns. Drawing examples from ethnographic research conducted at the 2019 Consumer Electronics Show, the paper focuses on the ethical dilemmas of privacy, security, consent, and identity in home surveillance systems and the financialization of personal data in AI subscription-based services. The paper argues that the subscription-based economy exploits older individuals by sharing their lifestyle profiles, health information, economic status, and consumer preferences within powerful corporate networks such as Google and Amazon.
  • Thorstensen, E. (2018, July). Privacy and future consent in smart homes as assisted living technologies. In International Conference on Human Aspects of IT for the Aged Population (pp. 415-433). Springer.
    • With the advent of the General Data Protection Regulation (GDPR), there are clear regulations demanding consent to automated decision-making regarding health. Through a future case of a smart home with health detection systems, this article opens up some of the possible dilemmas at the intersection of smart home ambitions and the GDPR, with specific attention to the possible trade-offs between privacy and well-being, and presents different approaches to advancing consent.
  • Ytre-Arne, B., & Das, R. (2019). An agenda in the interest of audiences: Facing the challenges of intrusive media technologies. Television & New Media20(2), 184-198.
    • This article formulates a five-point agenda for audience research, drawing on implications arising out of a systematic foresight analysis exercise on the field of audience research, conducted between 2014 and 2017 by the research network Consortium on Emerging Directions in Audience Research (CEDAR). The agenda includes substantive and intellectual priorities concerning intrusive technologies, critical data literacies, labour, co-option, and resistance, and argues for the need for research on these matters in the interest of audiences.

Chapter 20. Is Human Judgment Necessary? Artificial Intelligence, Algorithmic Governance, and the Law (Norman W. Spaulding)⬆︎

  • Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1-13. https://doi.org/10.1080/1369118X.2016.1216147
    • This article aims to discuss algorithms from a social science perspective. First, Beer analyzes the issue of social power as it relates to algorithms. Second, Beer focuses on how the notion of an algorithm is conceived in order to enable researchers to better understand how algorithms play a role in social ordering processes. 
  • Binns, R. (2020). Human judgment in algorithmic loops: Individual justice and automated decision‐making. Regulation & Governance. https://doi.org/10.1111/rego.12358
    • This article argues that individual justice can only be meaningfully served through human judgment rather than artificial intelligence. Binns contends that individual justice should be distinguished from other forms of justice. Additionally, the author points to two main challenges that result from algorithmic judgements: first, that individual justice will often conflict with algorithm‐driven consistency and fairness, and second, that algorithmic systems are incapable of respecting individual justice. 
  • Danaher, J. (2019). The rise of the robots and the crisis of moral patiency. AI & Society, 34(1), 129–136. https://doi.org/10.1007/s00146-017-0773-9    
    • This paper asserts that the rise of robots and artificial intelligence is likely to create a crisis of moral patiency, making humans less willing and able to act in the world as moral agents. The consequences of this have dangerous implications for politics and the social world.  
  • Diakopoulos, N. (2015). Algorithmic accountability. Digital Journalism, 3(3), 398-415. https://doi.org/10.1080/21670811.2014.976411      
    • This article examines algorithmic accountability reporting as a mechanism for elucidating the power structures and biases that computational artifacts exercise in society. It uses five cases of algorithmic accountability reporting that employ journalistic reverse engineering strategies to provide insight into method and application in the field of journalism. It also assesses transparency models on a broader scale.
  • Epstein, R., et al. (Eds.). (2008).* Parsing the Turing test: Philosophical and methodological issues in the quest for the thinking computer. Springer.
    • This edited volume features psychologists, computer scientists, philosophers, and programmers who examine the philosophical and methodological issues surrounding the search for true artificial intelligence. Questions authors explore include “Will computers and robots ever think and communicate the way humans do?” and “When a computer crosses the threshold into self-consciousness, will it immediately jump into the Internet and create a World Mind?”
  • Finn, E. (2017).* What algorithms want: Imagination in the age of computing. The MIT Press.
    • This book explores how the algorithm has roots in mathematical logic, cybernetics, philosophy, and magical thinking. Finn argues that algorithms use concepts sourced from idealized computation and applies it to a non-ideal reality, yielding unpredictable responses. To address the gap between abstraction and reality, Finn advocates for the creation of a model of “algorithmic reading” and scholarship which considers process.
  • Grgic-Hlaca, N., et al. (2018). Human Perceptions of Fairness in Algorithmic Decision Making. In P.-A. Champion, F. Gandon, & L. Medini (Eds.), Proceedings of the 2018 World Wide Web Conference (pp. 903-912). International World Wide Web Conferences Steering Committee. https://doi.org/10.1145/3178876.3186138
    • This research article examines AI and the concept of distributive fairness (the fairness of decision outcomes). The authors propose methods for procedural fairness that consider the input features used in the decision process and evaluate the moral judgments of humans regarding the use of these features. The authors draw on two real-world datasets and human surveys conducted on the Amazon Mechanical Turk (AMT) platform, and use submodular mechanisms to optimize the tradeoff between procedural fairness and prediction accuracy. The authors find that procedural fairness may be achieved with little cost to outcome fairness through the use of these technologies.
  • Gunkel, D. (2012).* The machine question: Critical perspectives on AI, robots, and ethics. The MIT Press.
    • Gunkel examines the “machine question” in moral philosophy, which aims to determine whether, and to what degree, human-made intelligent and autonomous machines can have moral responsibilities and moral consideration. Traditional philosophical notions are challenged by the machine question, as they posit technology as a tool for human uses rather than moral agents.
  • Gunkel, D. (2014). A vindication of the rights of machines. Philosophy & Technology, 27(1), 113–132. https://doi.org/10.1007/s13347-013-0121-z      
    • This article argues that artificial intelligences cannot be excluded from moral consideration, which calls not only for an extension of rights to machines, but an examination into the configuration of moral standing.
  • Haraway, D. J. (1991).* A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In D. J. Haraway (Ed.), Simians, Cyborgs and Women: The Reinvention of Nature. Routledge.      
    • Haraway’s essay gives a post-structuralist account of the term “cyborg” as a concept that resists strict categorization, not simply a distinction of “human” from “machine” or “human” from “animal,” but a combination of these concepts.
  • Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577–586. https://doi.org/10.1016/j.bushor.2018.03.007    
    • This article argues that rather than having the goal of replacing humans with AI, developers of the technology should work toward complementing the independent strengths of both humans and robots. The author holds that the holistic and intuitive nature of humans in organizational decision-making should be maintained, while computational processing capacities are expanded with the use of AI. 
  • Kitchin, R. (2017).* Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14-29. https://doi.org/10.1080/1369118X.2016.1154087
    • This paper synthesizes current literature on algorithms and develops new arguments about their study. This includes the need to focus critical attention on algorithms in light of their increased role in society, how to best understand algorithms conceptually, challenges for researching algorithms, and the differing ways algorithms can be empirically studied.
  • Kraemer, F., et al. (2010).* Is there an ethics of algorithms? Ethics and Information Technology, 13(3), 251-260. https://doi.org/10.1007/s10676-010-9233-7
    • In this article, the authors argue that algorithms can be value-laden, meaning that designers may have justified reasons for creating differential algorithms. To illustrate this claim, the authors use the example of algorithms used in medical analysis, which can be designed differently depending on the priorities of the software designers, such as avoiding false negatives. They go on to contribute guidelines for ethical issues in algorithm design.
  • Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1). https://doi.org/10.1177/2053951718756684      
    • This article outlines an online experiment exploring perceptions of algorithmic management using managerial decisions, which required mechanical or human skills to measure perceived fairness, trust, and emotional response. Lee finds that with mechanical tasks, algorithmic and human-made decisions were perceived as equally fair and trustworthy; however, human managers’ fairness and trustworthiness were attributed to the manager’s authority, whereas algorithms’ fairness and trustworthiness were attributed to their perceived efficiency and objectivity. With human tasks, algorithmic decisions were perceived as less fair and trustworthy and evoked more negative emotional responses. Lee’s findings suggest that task characteristics matter in people’s experiences with these technologies.
  • Lepri, B., et al. (2017). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31(4), 611–627. https://doi.org/10.1007/s13347-017-0279-x  
    • This article argues that while there are some potential benefits to algorithmic decision-making, the potential of increased discrimination and opacity raise concerns, especially when addressing complex social problems. The authors propose various technical solutions designed to improve fairness and transparency in algorithmic decision-making, highlighting the Open Algorithms (OPAL) project as an example of advanced AI supporting the advancement of democracy and development.
  • Lumbreras, S. (2017). The limits of machine ethics. Religions, 8(5). https://doi.org/10.3390/rel8050100        
    • Lumbreras provides a framework to classify the methodology employed in the field of machine ethics. The limits of machine ethics are discussed in light of design techniques that only express values imported by the programmer.
  • Lustig, C., & Nardi, B. (2015). Algorithmic authority: The case of Bitcoin. In 2015 48th Hawaii International Conference on System Sciences (pp. 743-752). Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/hicss.2015.95
    • In this paper, the authors propose a new concept for understanding the role of algorithms in daily life: algorithmic authority. Algorithmic authority is the power of algorithms to direct human action and to impact which information is considered true. Lustig and Nardi apply their theory to the culture of Bitcoin users, assessing their trust in the algorithm. They find that Bitcoin users prefer algorithmic authority to conventional institutions, which they see as untrustworthy, while acknowledging the need for algorithmic authority to be mediated by human judgment.
  • Malle, B. F. (2016). Integrating robot ethics and machine morality: The study and design of moral competence in robots. Ethics and Information Technology, 18(4), 243-256. https://doi.org/10.1007/s10676-015-9367-8    
    • Malle discusses the overlap between robot ethics (how humans should design and treat robots) and machine morality (how robots can have morality), arguing that robots can be designed with human moral characteristics. Malle suggests that morally competent robots can effectively contribute to society in the same way humans can.
  • Mittelstadt, B. D., et al. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679
    • Gaps between design and actual functioning of algorithms can have serious consequences for individuals and societies. This article provides an outline on the debate on the ethics of algorithms and evaluates the current literature to identify topics that need further consideration.
  • Moor, J. H. (Ed.). (2003).* The Turing test: The elusive standard of artificial intelligence. Springer.    
    • This book discusses the influence of Alan Turing, including “Computing Machinery and Intelligence,” his pre-eminent article on the philosophy of artificial intelligence, which included a presentation of his famous imitation game. Turing predicted that by the year 2000, the average interrogator would not have a greater than 70% chance of making the correct identification in the imitation game. Using the results of the Loebner 2000 contest, as well as breakthroughs in the field of AI, Moor argues that although there has been much progress, Turing’s prediction has not been borne out.
  • Newell, S., & Marabelli, M. (2015). Strategic opportunities (and challenges) of algorithmic decision-making: A call for action on the long-term societal effects of ‘datification.’ The Journal of Strategic Information Systems, 24(1), 3–14. https://doi.org/10.1016/j.jsis.2015.02.001  
    • This article draws attention to the tension between businesses—which increasingly profile customers and personalize products and services—and individuals, who are often unaware of how the data they produce are being used, by whom they are being used, and with what consequences. The authors highlight how issues associated with privacy, control, and dependence arise and suggest that the social and ethical concerns related to the strategic exploitation of digitized technologies by businesses should be more thoughtfully discussed. 
  • Raghu, M., et al. (2019). The algorithmic automation problem: Prediction, triage, and human effort. arXiv preprint arXiv:1903.12220
    • This article argues that automation goes beyond comparison of human and algorithmic performance of tasks; it also involves the decision of which instances of tasks should be assigned to an algorithm in the first place. The authors develop a general framework as an optimization problem to show how basic heuristics can lead to performance gains while also showing how effective automation depends on estimating both algorithmic and human error on a case-by-case basis. 
  • Selbst, A. D., & Barocas, S. (2018). The intuitive appeal of explainable machines. Fordham Law Review, 87(3), 1085-1139. https://ir.lawnet.fordham.edu/flr/vol87/iss3/11/
    • This article seeks to distinguish machine learning from other forms of decision-making. The authors argue that machine learning models can be both inscrutable and non-intuitive, and that these are related but distinct properties. Addressing non-intuitiveness requires providing a satisfying explanation for why the rules are what they are, so the article argues for other mechanisms for the normative evaluation of machine learning. The authors find that to understand why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.
  • Trausan-Matu, S. (2017). Is it possible to grow an I–Thou relation with an artificial agent? A dialogistic perspective. AI & Society, 34(1), 9-17. https://doi.org/10.1007/s00146-017-0696-5      
    • This paper aims to analyze the question of whether it is possible to develop an I-Thou relationship with an artificial conversational agent, discussing possibilities and limitations. Novel perspectives from various disciplines are discussed.
  • Van de Voort, M., et al. (2015).* Refining the ethics of computer-made decisions: A classification of moral mediation by ubiquitous machines. Ethics Information Technology, 17(1), 41–56. https://doi.org/10.1007/s10676-015-9360-2                              
    • This article investigates computer-made ethical decisions and argues that machines have morality not only when they mediate the actions of humans, but also work to mediate morality itself via decisions within their relationships to human actors. The authors accordingly define four types of moral relations.
  • van Wynsberghe, A., & Robbins, S. (2019). Critiquing the reasons for making artificial moral agents. Science and Engineering Ethics, 25(3), 719–735. https://doi.org/10.1007/s11948-018-0030-8      
    • This article offers a deeper look into the reasons given for developing artificial moral agents (AMAs), arguing the machine ethicists must provide good reasons to build such entities. Until such work is complete, development of AMAs should not continue.
  • Wallach, W., & Allen, C. (2009).* Moral machines: Teaching robots right from wrong. Oxford University Press.            
    • Wallach and Allen argue that machines do not use explicit moral reasoning in their decision-making, and thus there is a need to create embedded morality as these machines continue to make important decisions. This new field of machine morality or machine ethics will be crucial for designers.
  • Winograd, T. (1990).* Thinking machines: Can there be? Are we? In D. Partridge & Y. Wilks (Eds.), The foundations of artificial intelligence: A sourcebook (pp. 167-189). Cambridge University Press.          
    • Winograd explores a view attributed to futurologists, who believe that a new species of thinking machines, machina sapiens, will emerge and become dominant by applying their extreme intelligence to human problems. A critique of this view is that computers cannot possibly accurately replicate human intelligence, because their cold logical programming deprives them of vital features such as creativity, judgement, and genuine intentionality. Winograd argues that although it is true that artificial intelligence has yet to achieve things such as creativity and judgement, it has far more basic shortcomings in this vein, as current machines are unable to display common sense, or basic conversational language skills.   
  • Zarsky, T. (2015). The trouble with algorithmic decisions. Science, Technology, & Human Values, 41(1), 118–132. https://doi.org/10.1177/0162243915605575      
    • This article seeks to outline policy-making concerns that have arisen due to the rise in the use of algorithmic decision-making tools. Zarsky provides policy makers and scholars with a comprehensive framework for approaching these issues, calling for an analytical framework that reduces the discussion to two dimensions: (1) the specific and novel problems the process assumedly generates and (2) the specific attributes which exacerbate them. The problems can be reduced to two broad categories: efficiency and fairness-based concerns. Zarsky contends that such problems are usually linked to two salient attributes of algorithmic processes: their opaque and automated nature.
  • Ziewitz, M. (2016). Governing algorithms: Myth, mess, and methods. Science, Technology, & Human Values, 41(1), 3–16. https://doi.org/10.1177%2F0162243915608948                  
    • This article aims to provide critical background into the issue of algorithms being viewed as both extremely powerful and difficult to understand. It considers algorithms not only as computational, but also sensitive, and challenges assumptions about agency, transparency, and normativity.

Chapter 21. Sexuality (John Danaher)⬆︎

  • Bloom, P. (2020). Identity, institutions and governance in an AI world: Transhuman relations. Springer.
    • This book explores transhuman relations, and the potential for radical change to identity, institutions, and governance created by interactions with AI. This book proposes that the future of transhuman relations will emphasize infusing AI programming with values of social justice. The book theorizes that transhuman relations will be marked with a concern for protecting the rights and views of all forms of “consciousness” and creating the structures and practices necessary for encouraging a culture of “mutual intelligent design.”
  • Carvalho Nascimento, E., et al. (2018). The “use” of sex robots: A bioethical Issue. Asian Bioethics Review, 10(3), 231–240. https://doi.org/10.1007/s41649-018-0061-0
    • This article presents the current state of the use of female sex robots, reviewing the emerging themes in bioethics discourse on the topic, including sexuality and its deviations, the objectification of women, the relational problems of contemporary life, loneliness, and the reproductive future of the human species. The article also presents problems that arise from the use of sex robots and how bioethics could serve as a medium for thinking about and resolving these challenges.
  • Danaher, J., & McArthur, N. (Eds.). (2017).* Robot sex: Social and ethical implications. MIT Press.
    • This edited volume gathers perspectives from ethics and sociology on the emerging issue of sex with robots. Contributions to the volume define what robot sex is, explore ways in which it can be defended or challenged on ethical grounds, take the perspective of the robot in considering the matter, and reflect on the possibility of robot love. Finally, some contributors articulate visions for the future of robot sex, underlining the importance of evaluating love and intimacy in robot encounters (as opposed to just sex) and emphasizing the impact robot sex will have on society. 
  • Danaher, J., Nyholm, S., & Earp, B. (2018).* The Quantified Relationship. The American Journal of Bioethics, 18(2), 3–19.
    • This article provides a detailed ethical analysis of the Quantified Relationship (QR). The Quantified Self movement pursues self-improvement through the tracking and gamification of personal data; the QR applies this approach to interpersonal, romantic relationships. This article identifies eight core objections to the QR and counters them by arguing that there are ways in which tracking technologies can be used to support and facilitate good relationships.
  • de Fren, A. (2009). Technofetishism and the uncanny desires of A.S.F.R. (Alt Sex Fetish Robots). Science Fiction Studies, 36(3), 404–440.
    • This article presents a feminist, art-historical analysis of virtual communities that fetishize artificial women. Central to this fetish is the pleasure of ‘hacking’ the system or denaturalizing common understandings of subjecthood and desire. By drawing analogies between the uncanny artificial bodies at the heart of “alt sex fetish robots,” fantasies, and various historical and artistic antecedents, this essay contributes to the critical understanding of mechanical bodies as objects of desire.
  • Devlin, K. (2018).* Turned on: Science, sex and robots. Bloomsbury Publishing.
    • This popular non-fiction book traces the emerging technology of sex robots from robots in Greek myth and the fantastical automata of the Middle Ages through to the sentient machines of the future that inhabit the prominent AI debate. Devlin compares the ‘modern’ robot to the robot servants in twentieth-century science fiction and offers a historical perspective on the psychological effects of the technology as well as the issues it raises around gender politics, diversity, surveillance and violence.
  • Draude, C. (2011). Intermediaries: Reflections on virtual humans, gender, and the uncanny valley. AI & Society, 26, 319–327.
    • This article provides an analysis of the uncanny valley effect from a cultural and gender studies perspective. The uncanny valley effect describes the eeriness and lack of believability of anthropomorphic artefacts that resemble the ‘real’ thing too strongly. This article offers a gender-critical reading of computer theory by analyzing a classic story of user and artifact (E.T.A. Hoffmann’s narration of Olimpia), ultimately arguing for more diverse artefact production.
  • Evans, D. (2010). Wanting the impossible: The dilemma at the heart of intimate human-robot relationships. In Y. Wilks (Ed.), Close engagements with artificial companions: Key social, psychological, ethical and design issues (pp. 75–88). John Benjamins Publishing.
    • This chapter makes a philosophical case against the claim that romantic relationships with robots will be more satisfying because robots can be made to conform to the human’s wishes. Evans’ dismissal of this thesis does not rest on any technical limitation in robot building but is instead rooted in a thought experiment comparing two different kinds of partner robots: one capable of rejecting its owner and one which is not.
  • Franceschi, V. (2012). “Are you alive?” Issues in self-awareness and personhood of organic artificial intelligence. Polemos (Roma), 6(2), 225–247.
    • This journal article examines the social and legal position of some uses of artificial intelligence (AI), such as cyborgs, robots, and androids. It argues that AI technologies might advance to the point of overcoming their programming and developing their self-awareness and personalities. The article points to the social and legal inequalities that could occur if these systems significantly shape human experience and choices.  
  • Frank, L., & Nyholm, S. (2017).* Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable? Artificial Intelligence and Law, 25, 305–323.
    • This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. Frank and Nyholm present and analyze reasons to answer “yes” or “no” to these questions by examining the concept of consent in general, as well as critiques of its adequacy in the domain of sexual ethics, the relationship between consent and free will, and the relationship between consent and consciousness.
  • Gersen, J. S. (2019). Sex machina: Intimacy and artificial intelligence. Columbia Law Review, 119(7), 1793–1810.
    • This paper emphasizes the legal implications flowing from the existence of sex robots; it argues that lawmakers will have to acknowledge the rising importance of digisexuality, i.e., the robot-to-human relationship. The paper explores the positive and negative societal consequences of sex robots and their machine learning systems, especially intimate human-to-human and robot-to-human relationships. In sum, this article considers the legal and ethical questions arising from the proliferation of sex robots.
  • Gutiu, S. (2016). The robotization of consent. In R. Calo, M. Froomkin, & I. Kerr (Eds.), Robot law (pp. 186–212). Edward Elgar Publishing.
    • This chapter explains how sex robots can impact existing gender inequalities and the understanding of consent in sexual interactions between humans. Sex robots are defined by the irrelevancy of consent, replicating existing gender imbalances by emulating and eroticizing female sexual slavery. The chapter discusses the documented harms of extreme pornography and the expected harms of sexbots, connecting these to legal concepts of harm under the Canadian and U.S. legal systems.
  • Halberstam, J. (2008). Animating revolt/revolting animation: Penguin love, doll sex and the spectacle of the queer nonhuman. In M. Hird & N. Giffney (Eds.), Queering the non/human. Taylor & Francis.
    • This chapter applies a queer theory approach to sex robots, suggesting that new forms of animation – from transgenic mice to female cyborgs and Tamagotchi toys – productively shift the terms and the meaning of the artificial boundaries between humans, animals, machines, states of life and death, animation and reanimation, living, evolving, becoming and transforming. Halberstam brings to the surface the interdependence of reproductive and non-reproductive communities. 
  • Hauskeller, M. (2014). Sex and the posthuman condition. Palgrave McMillan. 
    • This book looks at how sexuality is framed in enhancement scenarios and how descriptions of the resulting posthuman future are informed by mythological, historical and literary paradigms. It examines the glorious sex life humans will allegedly enjoy due to greater control of our emotions, improved capacity for pleasure, and availability of sex robots.
  • Kaufman, E. (2020). Reprogramming consent: Implications of sexual relationships with artificially intelligent partners. Psychology and Sexuality, 11(4), 372–383.
    • This journal article focuses on discussions around sexual consent, and the potential implications of sexual norms and standards for artificial intelligence (AI) technologies. The author bases their argument on data from “Club RealDoll,” and explores how AI systems have identified normative values in different users’ attitudes towards sexual consent. 
  • Keyes, O., et al. (2021). Truth from the machine: Artificial intelligence and the materialization of identity. Interdisciplinary Science Reviews, 46(1-2), 158-175.
    • This article examines the intersection of two criticisms of artificial intelligence (AI): first, that it will lead to identity-based discrimination, and second, that it will disrupt the growth of scientific research. This paper uses case studies to demonstrate that when AI is deployed in scientific research about identity and personality, it can naturalise and reinforce biases. The authors argue that the concerns about scientific knowledge and identity are related, as positioning AI as a source of truth and scientific knowledge can have the effect of lending public legitimacy to harmful ideas about identity.
  • Kubes, T. (2019). New materialist perspectives on sex robots: A feminist dystopia/utopia? Social Sciences, 8(8), 224.
    • This article re-evaluates feminist critiques of sex robots from a new materialist perspective, suggesting that sex robots may not be an exponentiation of hegemonic masculinity to the extent that the technology can be queered. When the beaten tracks of pornographic mimicry are left behind, sex robots may in fact enable new liberated forms of sexual pleasure beyond fixed normalizations, thus contributing to a sex-positive utopian future.
  • Lee, J. (2017). Sex robots: The future of desire. Palgrave Macmillan.
    • This book thinks through the sex robot beyond the human/non-human binary, arguing that non-human sexuality has been at the heart of culture throughout history. Taking a philosophical approach to what the sex robot represents and signifies, this book discusses the roots, possibilities, and implications of the not-so-new desire for sex robots.
  • Levy, D. (2009).* Love and sex with robots: The evolution of human-robot relationships. Gerald Duckworth & Company.
    • This popular non-fiction book consists of two parts, one concerning love with robots and the other concerning sex with robots. Using a range of examples, Levy argues that the human tendency to feel affection for artificial creations has long been developing, making physical intimacy a logical next step. Moving from love to sex rather than the other way around, this book makes the case that even entities that were once deemed cold and mechanical can soon become the objects of real, human desire.
  • Levy, K. (2014).* Intimate surveillance. Idaho Law Review, 51(3), 679–693.
    • This article considers how new technical capabilities, social norms, and cultural frameworks are beginning to change the nature of intimate monitoring practices. Focused on practices occurring on an interpersonal level, i.e., within an intimate relationship between two partners, the article examines the relations between data collection, values, and privacy from dating and sex to fertility, fidelity, and finally, abuse. Levy closes with reflections on the role of law and policy in the emerging domain of intimate (self)surveillance.
  • Lieberman, H. (2017).* Buzz: The stimulating history of the sex toy. Pegasus Books.
    • This popular non-fiction book focuses on the history of sex toys from the 1950s to the present, tracing how once taboo devices reached the cultural mainstream. This historical account moves from sex toys as symbols of female emancipation and tools in the fight against HIV/AIDS to consumerist marital aids and, finally, to mainstays in popular culture.
  • Lupton, D. (2014).* Quantified sex: A critical analysis of sexual and reproductive self-tracking using apps. Culture, Health & Sexuality, 17(4), 440–453.
    • This article presents a critical analysis of computer apps used to self-track features of users’ sexual and reproductive activities and functions. The analysis reveals that such apps represent sexuality and reproduction in certain defined and limited ways that work to perpetuate normative stereotypes and assumptions about women and men as sexual and reproductive subjects, and exposes issues concerning privacy, data security and the use of the data collected by these apps. Lupton suggests ways to ‘queer’ self-tracking technologies in response to these issues.
  • McArthur, N., & Twist, M. (2017).* The rise of digisexuality: Therapeutic challenges and possibilities. Sexual and Relationship Therapy, 32(3–4), 334–344.
    • This article argues that clinicians in the psychological setting should be prepared to work with ‘digisexuals’: people whose primary sexual identity comes through the use of radical new sexual technologies. Guidelines for helping individuals and relational systems make informed choices regarding participation in technology-based activities of any kind, let alone ones of a sexual nature, are few and far between. This article articulates a framework for understanding the nature of digisexuality and how to approach it.
  • Mindell, D. (2015). Our robots, ourselves: Robotics and the myths of autonomy. Viking.
    • Departing from the future tense that is common in conversations about robots, this book investigates the most advanced robotics that currently exist. Deployed in high atmosphere, deep ocean, and outer space, these robotic applications show that the stark lines between human and not human, or manual and automated, are not helpful. This book clarifies misconceptions about the autonomous robot to talk about the human presence at the center of the technological landscape.
  • Nørskov, M. (2016). Social robots: Boundaries, potential, challenges. Routledge.
    • This book introduces cutting-edge research on social robotics, referring to robots used for entertainment, partnership, caregiving, etc. The author critiques the development of these artificial intelligence (AI) technologies based on the challenges they pose to society. They argue that social robots will eventually become their own category of people, developing minds equal to those of human beings in terms of cognitive behaviour and interaction.
  • Richardson, K. (2020). Sex robots: The end of love. Polity Press.
    • This book is an anthropological critique of sex robots, here taken up as points of insight into how women and girls are imagined and how porn, prostitution, and the sexual exploitation of children drive the desire for them. Richardson argues that sex robots are produced within a framework of ‘property relations,’ in which egocentric Man (and his disconnection from Woman) shapes the building of robots and AI. This makes sex robots a major threat to the possibility of love and connection.
  • Robinson, S. (2020). Trust, transparency, and openness: How inclusion of cultural values shapes Nordic national public policy strategies for artificial intelligence (AI). Technology in Society, 63. https://doi.org/10.1016/j.techsoc.2020.101421
    • This paper focuses on how cultural values shape Nordic national public policy strategies for artificial intelligence (AI). The paper argues that institutionalizing cultural values within AI policies promotes greater trust in AI technologies and machines. The analysis considers three values present in Nordic culture: ethics, privacy, and autonomy.
  • van Oost, E. (2003). Materialized gender: How shavers configure the users’ femininity and masculinity. In N. Oudshoorn & T. Pinch (Eds.), How Users Matter: The Co-Construction of Users and Technologies. MIT Press.
    • This chapter is part of an edited volume that examines how users shape technology from design to implementation. Van Oost uses the case study of shaving devices marketed to men or women to show that design trajectories use “gender scripts”: particular representations of the male and female consumer that become inscribed in the design of the artefacts. Her analysis suggests that technical competence is inscribed in artefacts marketed to men, while products targeting women inscribe a disinterest in technology onto their users.
  • Verbeek, P.-P. (2005). Artifacts and attachment: A post-script philosophy of mediation. In H. Harbers (Ed.), Inside the politics of technology: Agency and normativity in the co-production of technology and society (pp. 125–146). Amsterdam University Press.
    • This chapter uses Bruno Latour’s theory of technological mediation to explain how technologies foster attachment on the part of their users. For attachment to occur, artefacts should be present in an engaging way, stimulating users to participate in their functioning. Attachment always involves the materiality of the artefact more than its functioning, meaning that users also develop a bond with the machinery and material operation of artefacts.
  • Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
    • This book defines ‘surveillance capitalism’ as a novel market form and a specific logic of capitalist accumulation. If industrial capitalism exploits nature, surveillance capitalism exploits human nature through the installation of a global architecture of computer mediation that Zuboff calls “Big Other.” Through these architectures’ hidden mechanisms of extraction, commodification, and control, surveillance capitalism erodes the human potential for self-determination, threatening core values such as freedom, democracy, and privacy.

IV. Perspectives & Approaches

Chapter 22. Perspectives on Ethics of AI: Computer Science (Benjamin Kuipers)⬆︎

  • Abebe, R., et al. (2020). Roles for computing in social change. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 252-260). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372871
    • This publication presents four roles that computing research can play in addressing social problems. These four roles are: (1) serving as a diagnostic, (2) helping formalize how social problems are defined, (3) understanding what is possible via technical tools, and (4) helping to illuminate long-standing social problems to the public. The framework describes the potential of computational research to effect positive social change and the limits on computational research’s ability to solve societal problems on its own.
  • Ali, M., et al. (2021). Ad delivery algorithms: The hidden arbiters of political messaging. In L. Lewin-Eytan, D. Carmel, & E. Yom-Tov (Eds.), 14th ACM International Conference on Web Search and Data Mining (WSDM) (pp. 13-21). Association for Computing Machinery. https://doi.org/10.1145/3437963.3441801
    • This study investigates the impact of Facebook’s ad delivery algorithms for political ads, specifically analyzing political polarization as one of their effects. Ali and colleagues demonstrate that the ad delivery algorithms inhibit campaigns from reaching diverse groups of voters. Finally, the investigation demonstrates that current reform efforts aimed at improving the reach of campaigns to diverse groups and reducing polarization are not sufficient. Thus, the authors suggest requiring more public transparency for algorithms used to deliver political campaign ads.
  • Awad, E., et al. (2018). The moral machine experiment. Nature, 563(7729), 59-64.
    • This article aims to address concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide these machines. The authors use the Moral Machine, an online experimental platform, to gather data on moral preferences worldwide, which they analyze to recommend how machine decision making should be determined.
  • Bonnefon, J. F., et al. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573-1576.
    • Utilitarian objectives for autonomous vehicles (where the vehicle sacrifices its passengers for the greater good) can inform choices for helping autonomous vehicles make ethical decisions. Drawing from six Amazon Mechanical Turk studies, this paper found that participants approved of utilitarian objectives but would prefer to ride in and purchase vehicles that prioritize passenger safety above the safety of others.
  • Cowgill, B., et al. (2020). Biased programmers? Or biased data? A field experiment in operationalizing AI ethics. In P. Biro & J. Hartline (Eds.), Proceedings of the 21st ACM Conference on Economics and Computation (pp. 679-681). Association for Computing Machinery. https://doi.org/10.1145/3391403.3399545
    • This paper covers the findings from a large-scale experiment with 400 machine learning engineers to understand the actions throughout the machine learning development pipeline that lead to biased predictors. The study found that the majority of biased predictions are functions of biased training data. The study further found that reminding engineers about the possibility of bias can be almost as effective as de-biasing algorithms.
  • Flanagan, O. (2016). The geography of morals: Varieties of moral possibility. Oxford University Press.
    • This book puts cultural and psychological anthropology, empirical moral psychology, and behavioral economics into comprehensive dialogue with the aim of presenting and exploring cross-cultural variation in morality and world philosophy.
  • Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349-379.
    • This paper first presents a concept of agency for artificial agents, and then explores the subsequent concerns raised surrounding the morality and responsibility of said agents. The authors argue that there is substantial and important scope for the concept of an artificial moral agent that does not necessarily exhibit free will, mental states, or responsibility.
  • Gibbs, J. C. (2019). Moral development and reality: Beyond the theories of Kohlberg, Hoffman, and Haidt. Oxford University Press.
    • In this text, Gibbs presents and argues for a new view of lifespan socio-moral development based on his exploration of moral identity and other variables that account for prosocial behavior. 
  • Gulati, S., et al. (2019). Design, development and evaluation of a human-computer trust scale. Behaviour & Information Technology, 38(10), 1004-1015.
    • This paper argues that as more tasks are delegated to intelligent systems and user interactions with these systems become increasingly complex, there must be a metric by which to quantify the amount of trust that a user is willing to place in such systems. The authors then present their own multi-dimensional scale to assess user trust in human-computer interaction (HCI).
  • Green, B., & Hu, L. (2018). The myth in the methodology: Towards a recontextualization of fairness in machine learning. 35th International Conference on Machine Learning, Stockholm, Sweden. https://www.benzevgreen.com/wp-content/uploads/2019/02/18-icmldebates.pdf
    • Many definitions of fairness in machine learning technologies are statistical and do not incorporate critical social and normative analyses. This work provides arguments for why these definitions fail to capture important fairness concerns that are situated in social, political, and moral debates. Finally, the paper argues that without change in how machine learning researchers work on fairness, there will be little impact on eventual justice.
  • Greene, J. D. (2013).* Moral tribes: Emotion, reason, and the gap between us and them. Penguin.
    • This book explores how our evolutionary nature, which favors a select group of others (Us) and seeks to fight off everyone else (Them), can coexist with modern conditions of shared space, in which the moral lines that divide us become more salient and more puzzling.
  • Haidt, J. (2012).* The righteous mind: Why good people are divided by politics and religion. Vintage.
    • In this text, the author draws on research on moral psychology to argue that moral judgments arise not from reason but from gut feelings. Thus, given that different groups have different intuitions about right and wrong, this creates polarization within a population.
  • Kleinberg, J., & Raghavan, M. (2021). Algorithmic monoculture and social welfare. arXiv preprint arXiv:2101.05853
    • As algorithms are deployed more broadly, there are concerns about decisions becoming homogeneous as multiple entities use the same algorithms. In this study, a theoretical analysis is provided to demonstrate how multiple entities relying on one shared algorithm, even one that is more accurate than their own independent judgments, can lead to worse outcomes for society overall. The authors characterize this as an issue of algorithmic monoculture, similar to issues of monoculture seen in agriculture. A toy simulation of this comparison appears below.
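The flavor of the monoculture argument can be conveyed with a small simulation. The sketch below is a toy Monte Carlo model, not Kleinberg and Raghavan’s formal setup: two firms each hire one candidate, either from a single shared noisy algorithmic ranking or from independent, somewhat noisier evaluations, and average hired quality is compared. All parameters (pool size, noise levels) are illustrative assumptions, and which regime wins depends on them; the point is only to make the comparison concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_firm_welfare(shared: bool, n_candidates=50, sigma_algo=0.95,
                     sigma_indep=1.0, n_trials=20000):
    """Average total true quality hired by two firms choosing in sequence."""
    total = 0.0
    for _ in range(n_trials):
        q = rng.normal(size=n_candidates)              # true candidate quality
        if shared:
            # Monoculture: both firms rank by the same algorithmic score.
            s = q + rng.normal(scale=sigma_algo, size=n_candidates)
            order = np.argsort(-s)
            first, second = order[0], order[1]
        else:
            # Diversity: each firm forms its own independent noisy estimate.
            s1 = q + rng.normal(scale=sigma_indep, size=n_candidates)
            s2 = q + rng.normal(scale=sigma_indep, size=n_candidates)
            first = int(np.argmax(s1))
            s2[first] = -np.inf                        # firm 2 picks among the rest
            second = int(np.argmax(s2))
        total += q[first] + q[second]
    return total / n_trials

print("shared algorithm    :", round(two_firm_welfare(shared=True), 3))
print("independent judgment:", round(two_firm_welfare(shared=False), 3))
```

Because the independent evaluations are decorrelated, the second firm is not stuck with the shared ranking’s second choice; that diversification is how total welfare can fall under monoculture even when the shared algorithm is individually more accurate.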
  • Liu, L. T., et al. (2018). Delayed impact of fair machine learning. Proceedings of Machine Learning Research, 80, 3150-3158. http://proceedings.mlr.press/v80/liu18c.html
    • This study empirically and theoretically explores the impact on long-term fairness of optimizing machine learning models for static fairness measures. These analyses demonstrate that optimizing static fairness measures does not guarantee fairness over time; in fact, it can harm the long-term fairness of a system in settings where optimizing without these constraints would not. A worked toy example of the mechanism follows.
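A back-of-the-envelope version of the mechanism, with purely hypothetical numbers rather than the paper’s FICO-based calibration: suppose an approved loan raises a borrower’s credit score on repayment and lowers it by more on default. Then approval helps a borrower in expectation only above a break-even repayment probability, and a static criterion that forces a group’s selection rate upward can push approvals below that point.

```python
# Hypothetical score updates: +75 on repayment, -150 on default.
GAIN, LOSS = 75.0, -150.0

def expected_score_change(p: float) -> float:
    """Expected credit-score change for an approved applicant who
    repays with probability p."""
    return p * GAIN + (1 - p) * LOSS

# Break-even: p * 75 = (1 - p) * 150  =>  p = 150 / 225 = 2/3.
for p in (0.9, 0.7, 2 / 3, 0.5):
    print(f"repayment prob {p:.3f}: expected change {expected_score_change(p):+.1f}")
```

If matching the advantaged group’s approval rate (demographic parity) requires approving applicants whose repayment probability sits below the 2/3 break-even point, the constraint actively lowers the disadvantaged group’s average score over time, which is the delayed harm the study formalizes.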
  • Martin, D., Jr., et al. (2020). Extending the machine learning abstraction boundary: A complex systems approach to incorporate societal context. arXiv preprint arXiv:2006.09663
    • This study examines three new tools for providing an in-depth understanding of the underlying societal context of developing and deploying machine learning algorithms. These three tools include: (1) a complex adaptive systems model that will aid both researchers and engineers with incorporating societal context in understanding machine learning fairness, (2) collaborative causal theory formation (CCTF) for developing a sociotechnical framework to combine different mental models and causal models for the problem at hand, and (3) community-based system dynamics to practice CCTF throughout the machine learning pipeline.
  • Lin, P., Abney, K., & Bekey, G. (Eds.). (2012).* Robot ethics: The ethical and social implications of robotics. MIT Press.
    • Starting with an overview of the issues and relevant ethical theories, the topics flow naturally from the possibility of programming robot ethics to the ethical use of military robots in war to legal and policy questions, including liability and privacy concerns. The book ends by examining the question of whether or not robots should be given moral consideration. 
  • Pinker, S. (2018).* Enlightenment now: The case for reason, science, humanism, and progress. Penguin.
    • Citing data that tracks social progress, Pinker argues that reason and science can enhance human flourishing and reliance on these logical and scientific principles is required in order to continue the trajectory of increasing health, prosperity, safety, peace, knowledge, and happiness. 
  • Singer, P. (2011).* The expanding circle: Ethics, evolution, and moral progress. Princeton University Press.
    • Drawing from the fields of philosophy and evolutionary psychology, Singer argues in this book that although altruism began as a genetically based drive to protect one’s kin and community members, it is not solely dictated by biology. Rather, altruism, and by extension human ethics, has developed as a result of our capacity for reasoning, which leads to conscious ethical choices and an expanding circle of moral concern.
  • Tomasello, M. (2016).* A natural history of human morality. Harvard University Press.
    • This text presents an account of the evolution of human moral psychology based on analysis of experimental data comparing great apes and human children. Tomasello presents an argument for our moral development based on two key evolutionary steps: the move towards collaboration, and the emergence of distinct cultural groups.
  • Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press.
    • This book aims to apply the classical philosophical tradition of virtue ethics to the challenges of a global technological society. The author argues that a moral framework based in virtue ethics offers the ideal guiding principles for contemporary society.
  • van der Woerdt, S., & Haselager, P. (2016). Lack of effort or lack of ability? Robot failures and human perception of agency and responsibility. In Benelux Conference on Artificial Intelligence (pp. 155-168).
    • This study explores how considering an agent’s actions as related to either effort or ability can have important consequences for attributions of responsibility. The study concludes that a robot displaying lack of effort significantly increases human attributions of agency and, to some extent, moral responsibility to the robot.
  • Vanderelst, D., & Winfield, A. (2018). The dark side of ethical robots. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 317-322).
    • This paper argues that the recent focus on building ethical robots also inevitably enables the construction of unethical robots, as the cognitive machinery used to make a robot ethical can easily be corrupted. In the face of these risks, the authors advocate caution in embedding ethical decision making in real-world safety-critical robots.
  • Wallach, W., & Allen, C. (2008).* Moral machines: Teaching robots right from wrong. Oxford University Press.
    • This book explores the problem of software governing autonomous systems being “ethically blind” in the sense that the decision-making capabilities of such systems do not involve any explicit moral reasoning. The authors explore the necessity for robots to become capable of factoring ethical and moral considerations into their decision making, as well as potential routes to achieve this.
  • Wright, R. (2000).* Nonzero: The logic of human destiny. Pantheon.
    • In this book, Wright employs game theory and the logic of “zero-sum” and “non-zero-sum” games to argue against the conventional understanding that evolution and human history were aimless, presenting his view that evolution pushed humanity towards social and cultural complexity. 
  • Zhuang, S., & Hadfield-Menell, D. (2020). Consequences of misaligned AI. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, & H. Lin (Eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020. NeurIPS. https://proceedings.neurips.cc/paper/2020/hash/b607ba543ad05417b8507ee86c54fcb7-Abstract.html
    • AI systems often operate on a partial understanding of the end user’s objectives, which is used to formalize a proxy utility function and an optimization algorithm to learn the best behavior for achieving those objectives. This study analyzes the effect of such incomplete information about the end user’s objectives on the overall utility attained (a toy version is sketched below). Finally, the authors theoretically demonstrate that allowing for interactivity between the user and the agent yields greater benefits than designing the reward function once up front.
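A minimal sketch of that effect, under assumed functional forms rather than the paper’s general model: a fixed budget is allocated across three attributes the user values, but the proxy objective only measures two of them. Optimizing the proxy starves the unmeasured attribute and ends up with lower true utility than a balanced allocation.

```python
import numpy as np
from scipy.optimize import minimize

BUDGET = 9.0  # shared resource split across three attributes

def true_utility(x):
    return np.sum(np.log1p(x))      # the user values all three attributes

def proxy_utility(x):
    return np.sum(np.log1p(x[:2]))  # the designer only measured the first two

def best_allocation(objective):
    """Maximize the objective over nonnegative allocations summing to BUDGET."""
    result = minimize(lambda x: -objective(x), x0=np.full(3, BUDGET / 3),
                      bounds=[(0.0, None)] * 3,
                      constraints={"type": "eq",
                                   "fun": lambda x: np.sum(x) - BUDGET})
    return result.x

for name, objective in (("true objective ", true_utility),
                        ("proxy objective", proxy_utility)):
    x = best_allocation(objective)
    print(f"{name}: allocation {np.round(x, 2)}, true utility {true_utility(x):.3f}")
```

The proxy optimizer drives the third attribute to zero and attains lower true utility than the balanced allocation, illustrating why the paper finds value in interaction that lets the user update the proxy as optimization proceeds.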
  • Zuboff, S. (2019).* The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
    • This book defines surveillance capitalism as the quest by powerful corporations to predict and control our behavior. It then argues that the total certainty for maximum profit promised by surveillance capitalism comes at the expense of democracy, freedom, and our human future.

Chapter 23. Social Failure Modes in Technology and the Ethics of AI: An Engineering Perspective (Jason Millar)⬆︎

  • Akrich, M. (1992). The De-Scription of Technical Objects. In W. E. Bijker and J. Law (Eds.), Shaping Technology/Building Society (pp. 205-224). MIT Press.
    • Akrich outlines how technical objects simultaneously embody and measure a set of relations between humans and non-humans and how they may generate both forms of knowledge and moral judgments.  Akrich argues that technical objects have the ability to script or prescribe behavior.  
  • Bicchieri, C. (2006). The Grammar of Society: The nature and dynamics of social norms.  Cambridge University Press.
    • The Grammar of Society examines social norms, such as fairness, cooperation, and reciprocity, in an effort to understand their nature and dynamics, the expectations that they generate, and how they evolve and change.  This book provides a definition of social norms which in turn enables Millar to investigate what it means for a social norm to be designed into an artifact.  
  • Bijker, W. E., Hughes, T. P., & Pinch, T. J. (Eds.). (1987). The social construction of technological systems: New directions in the sociology and history of technology. MIT Press.
    • The Social Construction of Technological Systems introduced a new method of inquiry—social construction of technology, or SCOT—that became a key part of the wider discipline of science and technology studies. Essays in this book tell stories about such varied technologies as thirteenth-century galleys, eighteenth-century cooking stoves, and twentieth-century missile systems. This book approaches the study of technology by giving equal weight to technical, social, economic, and political questions, and demonstrates the effects of the integration of empirics and theory.
  • Brandtzaeg, P. B., & Følstad, A. (2018). Chatbots: Changing user needs and motivations. Interactions, 25(5), 38-43. https://interactions.acm.org/archive/view/september-october-2018/chatbots#comments
    • This article discusses how a recent uptake in chatbots has revealed some of the current pitfalls of chatbot technology and its needs going forward. The authors argue that chatbots are not designed well for their intended use cases and need improved designs that incorporate user needs and experiences. They also discuss a key challenge for the human-computer interaction (HCI) community: the unpredictable and highly variable inputs from users.
  • Calo, R., Froomkin, A. M., & Kerr, I. (Eds.).* (2016). Robot law. Edward Elgar Publishing.
    • Robot Law collects papers by a diverse group of scholars focused on the larger consequences of the increasingly discernible future of robotics. It explores the increasing sophistication of robots and their widespread deployment into hospitals, public spaces, and battlefields. The book also explores how this requires rethinking of a wide variety of philosophical and public policy issues, including how this technology interacts with existing legal regimes.
  • Chiu, M., et al. (2018, November). Applying artificial intelligence for social good. McKinsey Global Institute.  https://www.mckinsey.com/featured-insights/artificial-intelligence/applying-artificial-intelligence-for-social-good
    • This McKinsey Global Institute discussion paper covers the issues around AI for social good. It offers a detailed analysis of how AI is not a silver bullet but could nonetheless help tackle some of the world’s most challenging social problems. Topics discussed include: mapping AI cases to domains of social good; AI capabilities that can be used for social good; overcoming bottlenecks and identifying risks to be managed; and scaling up the use of AI for social good.
  • Eadicicco, L., et al. (2017). The 20 most successful technology failures of all time. Time Magazine. http://time.com/4704250/most-successful-technology-tech-failures-gadgets-flops-bombs-fails/ 
    • This is a list of failures that led to success or may yet lead to something world-changing, hence the labeling of the items on the list as technology’s most successful failed products. Like an experiment gone awry, they can still teach us something about technology and how people want to use it.
  • Evans, R., & Collins, H. M. (2007).* Rethinking expertise. University of Chicago Press.
    • Rethinking Expertise offers a new perspective on the role of expertise in the practice of science and the public evaluation of technology. It asks the question: how can the public make use of science and technology before there is consensus in the scientific community? The book offers a Periodic Table of Expertises based on the idea of tacit knowledge (knowledge that we have but cannot explain) in order to determine how some expertises are used to judge others, how laypeople judge between experts, and how credentials are used to evaluate them.
  • Felzmann, H., et al. (2019). Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society, 6(1), https://doi.org/10.1177/2053951719860542
    • Felzmann et al. discuss the importance of transparency under the General Data Protection Regulation (GDPR). They present the pitfalls of the legal transparency requirements of GDPR and the lack of clarity on the benefits of increased transparency for end users. Finally, they propose a relational understanding of transparency focused on communication between the technology providers and users. 
  • Friedman, B., & Kahn, P. H., Jr. (2003).* Human Values, Ethics, and Design. In The human-computer interaction handbook (pp. 1177–1201). CRC Press. https://depts.washington.edu/hints/publications/Human_Values_Ethics_Design.pdf
    • This article reviews how the field of human-computer interaction (HCI) has addressed the following topics: how values become implicated in technological design; how usability is distinguished from human values with ethical import; the major HCI approaches to key human values relevant for design; and the special ethical responsibilities of HCI professionals.
  • Greene, D., et al. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In T. X. Bui (Ed.), Proceedings of the 52nd Hawaii international conference on system sciences (pp. 2122-2131). AIS Library. https://aisel.aisnet.org/hicss-52/dsm/critical_and_ethical_studies/2/
    • Greene and colleagues examine several high-profile value statements on the ethical use of artificial intelligence and machine learning through the lens of design theory and the sociology of business ethics. They demonstrate that while these statements share framing with critical methodologies in science and technology studies, they are missing a focus on social justice and equity.
  • Hvistendahl, M. (2017). Inside China’s Vast New Experiment in Social Ranking. WIRED. https://www.wired.com/story/age-of-social-credit/
    • This article delves into how China is taking the idea of a credit score to the extreme.  By using big data to track and rank what its citizens do—including purchases, pastimes, and mistakes—China is able to take its practice of social engineering to a new level in the 21st century. In order to illustrate the impact of China’s use of technology on individual lives, Hvistendahl provides a detailed account of her and her friend’s experiences of living within this system over a period of several years. 
  • Kleeman, S. (2016). Here are the Microsoft Twitterbot’s craziest racist rants. Gizmodo. https://gizmodo.com/here-are-the-microsoft-twitter-bot-s-craziest-racist-ra-1766820160
    • This article reviews Microsoft Twitterbot Tay’s racist rants in order to drive home the lesson that AI development needs to account for the social and cultural impacts of a technology before deployment, rather than releasing it prematurely.
  • Kudina, O., & Verbeek, P. P. (2019). Ethics from within: Google Glass, the Collingridge dilemma, and the mediated value of privacy. Science, Technology, & Human Values, 44(2), 291-314. https://doi.org/10.1177/0162243918793711
    • This study investigates how people characterize the value of privacy for Google Glass based on online discussions. The focus is on how the meaning of this value changed once Google Glass was deployed, even in limited fashion, compared to before the product was deployed. This is inspired by Collingridge’s “control dilemma,” a characterization of situations where it is easy to influence technological developments before they are deployed but much harder afterward.
  • LaCroix, T., & Bengio, Y. (2019). Learning from learning machines: Optimisation, rules, and social norms. arXiv preprint arXiv:2001.00006
    • The authors present and explore the analogy between AI and economic entities. They demonstrate how findings in economics research may provide solutions for AI safety and how findings in AI research can help inform economic policy. The authors suggest that the behaviors both fields aim to understand may be better captured through social norms than through explicit rules.
  • Latour, B. (1992). Where Are the Missing Masses? The Sociology of a Few Mundane Artifacts. In W. E. Bijker & J. Law (Eds.), Shaping Technology/Building Society (pp. 225-258). MIT Press.
    • Bruno Latour explores how artifacts can be deliberately designed to both replace human action and constrain and shape the actions of other humans. His study demonstrates how people can ‘‘act at a distance’’ through the technologies they create and implement and how, from a user’s perspective, technology can appear to determine or compel certain actions. Latour argues that we cannot understand how societies work without an understanding of how technologies shape our everyday lives. 
  • Latour, B. (1999).* Pandora’s hope: Essays on the reality of science studies. Harvard University Press.
    • Pandora’s Hope is a collection of essays that investigate the relationship between humans and natural and artifactual objects. This book offers an argument for understanding the reality of science in practical terms. Through case studies in the world of technology, Latour shows how the material and human worlds come together and are reciprocally transformed into items of scientific knowledge.
  • Lin, P., Jenkins, R., Abney, K., & Bekey, G. A. (Eds.). (2017).* Robot Ethics 2.0. Oxford University Press.
    • Robot Ethics 2.0 studies the ethical, legal, and policy impacts of robots, which have been taking on morally important human tasks and decisions as well as creating new risks. This book focuses on issues related to autonomous cars as an important case study that cuts across diverse issues, including psychological, legal, physical, and trust-related concerns.
  • Metz, R. (2015). Google Glass is dead; Long live smart glasses. Technology Review, 118(1), 79-82.
    • This article argues that although Google’s head-worn computer is going nowhere, the technology is sure to march on because intriguing possibilities remain. It evaluates the reasons for Google Glass’s failure and investigates some potential uses for a smart glass device, including serving as a memory aid and productivity enhancer.
  • Millar, J. (2015). Technology as moral proxy: Autonomy and paternalism by design. IEEE Technology and Society Magazine, 34(2), 47-55.
    • The author argues that technological artifacts can act as moral proxies for their users when answering moral questions. As part of this argument, the moral link between designers, artifacts, and users is discussed, particularly in the areas of healthcare, bioethics, and design.
  • Pearson, C., & Delatte, N. (2006). Collapse of the Quebec Bridge, 1907. Journal of Performance of Constructed Facilities, 20(1), 84-91.
    • Collapse of the Quebec Bridge describes the grave implications of the failure of man-made artifacts as a result of physical defects not fully accounted for in their design.  This article outlines the collapse of the Quebec Bridge over the St. Lawrence River in 1907 where seventy-five workers were killed.  It discusses the investigation of the disaster and the finding that the main cause of the bridge’s failure was improper design by the consulting engineer.  
  • Pogue, D. (2013). Why Google Glass Is Creepy.  Scientific American. https://www.scientificamerican.com/article/why-google-glass-is-creepy/
    • This Scientific American article outlines the biggest obstacle to social acceptance of the new technology:  the smugness of people who wear Google Glass and the deep discomfort of everyone who does not. It drives home the message that even though wearable computer glasses let you record everything you see, good luck finding someone to talk to!
  • Purkayastha, S., et al. (2021). Failures hiding in success for artificial intelligence in radiology. Journal of the American College of Radiology, 18(3), 517-519. https://doi.org/10.1016/j.jacr.2020.11.008
    • The use of AI in radiology has become a popular and promising area of research. This study examines some of the failure modes that are not commonly reported and how these should be reported going forward. The three failure modes discussed include: hidden stratification or incomplete label sets, incomplete examination of false positives and false negatives, and comparisons of the AI to radiologists.
  • Timan, T., & Oudshoorn, N. (2012). Mobile cameras as new technologies of surveillance? How citizens experience the use of mobile cameras in public nightscapes. Surveillance & Society, 10(2), 167-181. https://doi.org/10.24908/ss.v10i2.4440
    • This article explores how individuals experience surveillance in public spaces via bottom-up devices such as OCTV and mobile cameras. The study was conducted with 32 people in the city center of Rotterdam at night. The study found that mobile cameras and OCTV are considered surveillance similar to that of CCTV.
  • Van den Hoven, J., Doorn, N., Swierstra, T., Koops, B.-J., & Romijn, H. (Eds.). (2014).* Responsible innovation 1: Innovative solutions for global issues. Springer.
    • Responsible Innovation 1 addresses the methodological issues involved in responsible innovation and provides an overview of recent applications of multidisciplinary research involving close collaboration between researchers in diverse fields such as ethics, social sciences, law, economics, applied science and engineering. This book delves into the ethical and societal aspects of new technologies and changes in technological systems.
  • Verbeek, P. P. (2006). Materializing morality: Design ethics and technological mediation. Science, Technology, & Human Values, 31(3), 361-380.
    • This article deploys the “script” concept, indicating how technologies prescribe human actions, in a normative setting.  This article explores the implications of the insight that engineers materialize morality by designing technologies that co-shape human actions.  The article augments the script concept by developing the notion of technological mediation and its impact on the design process and design ethics.  
  • Vincent, J. (2016). Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day. The Verge. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist 
    • This Verge article outlines how it took less than 24 hours for Twitter to corrupt an innocent AI chatbot named Tay. Tay, a robot parrot with an internet connection, started repeating people’s misogynistic, racist, and Donald Trump-like remarks back to users. This article raises serious questions about AI embodying the prejudices of society.
  • Winner, L. (2010). The whale and the reactor: A search for limits in an age of high technology. University of Chicago Press.
    • The Whale and the Reactor poses questions about the relationship between technical change and political power and explores the political, social and philosophical implications of technology.   This book demonstrates that technical decisions are political decisions, and they involve profound choices about power, liberty, order, and justice.  
  • Wolf, M. J., et al. (2017). Why we should have seen that coming: Comments on Microsoft’s Tay “experiment,” and wider implications. ACM SIGCAS Computers and Society, 47(3), 54-64. https://doi.org/10.1145/3144592.3144598
    • Wolf et al. analyze Tay, the Microsoft chatbot, as a case study for a larger problem with AI software that interacts with the public. The authors focus on how developers are responsible for these interactions, advocating for additional ethical responsibilities for developers when their AI software will interact with the public or social media.
  • Zeeberg, A. (2020, January).  What we can learn about robots from Japan. BBC.  https://www.bbc.com/future/article/20191220-what-we-can-learn-about-robots-from-japan
    • This article discusses the contrast between the philosophical traditions of the West and the Japanese Shinto-based philosophical view that makes no categorical distinction between humans, animals, and objects such as robots. This contrast demonstrates that while the West tends to see robots and artificial intelligence as a threat, Japan’s view has led to a more complex relationship with machines, including a positive view of technology that is rooted in Japan’s socioeconomic, historical, religious, and philosophical perspectives.
  • Zuidhof, N., et al. (2019). A theoretical framework to study long-term use of smart eyewear. In R. Harle, K. Farrahi, & N. Lane (Eds.), Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers (pp. 667-670). Association for Computing Machinery. https://doi.org/10.1145/3341162.3348382
    • This paper provides a theoretical framework that combines perspectives from philosophy, psychology, science and technology studies, and information systems to study the benefits and harms of using smart eyewear. This framework is made up of four phases: (1) adoption, (2) influence, (3) re-applying, and (4) behavioral change. Together these phases help explain whether an individual will use the technology and how they will interact with the technology, others, and the larger world around them.

Chapter 24. A Human-Centred Approach to AI Ethics: A Perspective from Cognitive Science (Ron Chrisley)⬆︎

  • Alaieri, F., & Vellino, A. (2016). Ethical decision making in robots: Autonomy, trust, and responsibility. In International conference on social robotics (pp. 159-168). Springer. https://doi.org/10.1007/978-3-319-47437-3_16
    • The authors argue that in order to get people to trust autonomous robots, the ethical principles employed by these autonomous robots must be made transparent. 
  • Aroyo, A. M., et al. (2018). Trust and social engineering in human-robot interaction: Will a robot make you disclose sensitive information, conform to its recommendations or gamble? IEEE Robotics and Automation Letters, 3(4), 3701-3708. https://doi.org/10.1109/LRA.2018.2856272
    • This research study examines how robots could be used for social engineering. The researchers found that people do build trust with robots, which can lead to the voluntary disclosure of private information. 
  • Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21-34. https://doi.org/10.1016/j.cognition.2018.08.003
    • This article examines nine studies that suggest that humans do not want autonomous machines to make moral decisions. Bigman and Gray argue that this aversion to machine moral decision-making will prove challenging to eliminate as designers seek to employ machines in medicine, law, transportation, and defence.
  • Broadbent, E. (2017). Interactions with robots: The truths we reveal about ourselves. Annual Review of Psychology, 68, 627-652. https://doi.org/10.1146/annurev-psych-010416-043958
    • This article examines human-robot relations from the perspective of cognitive science. Broadbent argues that there is a need to study human feelings towards robots and that this study will reveal insights into human psychology, such as the human tendency to have an uncanny feeling towards robotic machines.
  • Darling, K. (2015). “Who’s Johnny?” Anthropomorphic framing in human-robot interaction, integration, and policy. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. Oxford University Press. 
    • This paper considers the benefits and drawbacks of anthropomorphized robots. It argues that in some cases, anthropomorphic framing is helpful as it increases the functionality of the technology. However, the paper argues that emotional relationships between humans and robots could make people vulnerable to emotional manipulation. 
  • Datteri, E. (2013). Predicting the long-term effects of human-robot interaction: A reflection on responsibility in medical robotics. Science and Engineering Ethics, 19(1), 139-160.
    • This paper considers two existing robots: one named Da Vinci, which is used for medical surgery; and another named Lokomat, which is used for walking rehabilitation. The author claims that issues of responsibility regarding injury are mostly problems that can be overcome by better engineering and more training. This raises questions about what kind of harm thresholds can be tolerated as ethical dilemmas expand beyond assigning blame.
  • de Graaf, M. M. A. (2016). An ethical evaluation of human–robot relationships. International Journal of Social Robotics, 8(4), 589-598. https://doi.org/10.1007/s12369-016-0368-5
    • De Graaf discusses the ethical considerations of human-robot relationships, asking if and how these relationships could contribute to the good life, and argues that research on human social interaction with robots is needed to flesh out ethical, societal, and legal perspectives, and to design and introduce responsible robots.
  • Fossa, F. (2018). Artificial moral agents: Moral mentors or sensible tools? Ethics and Information Technology, 20(2), 115-126. https://doi.org/10.1007/s10676-018-9451-y
    • This paper analyzes how the concept of an artificial moral agent (AMA) impacts humans’ self-understanding as moral agents. Fossa presents the Continuity Approach and the contrary Discontinuity Approach. The Continuity Approach argues that AMAs and humans should be considered homogeneous moral entities. The Discontinuity Approach argues that there is an important essential difference between humans and AMAs. Fossa argues that the Discontinuity Approach better encapsulates the definition of AMAs, how we should deal with the moral tensions they cause, and the difference between machine ethics and moral philosophy.
  • Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411-437.
    • This paper argues that moral beliefs vary across populations and therefore aligning AI values with human values would vary depending on context. It analyzes what alignment means in a deep sense and proposes ways in which fair principles could be arrived at by considering existing moral frameworks such as the veil of ignorance and social choice theory.
  • Gaudiello, I., et al. (2016). Trust as an indicator of robot functional and social acceptance. An experimental study on user conformation to iCub answers. Computers in Human Behavior, 61, 633-655. https://doi.org/10.1016/j.chb.2016.03.057
    • The authors present an experiment between 56 participants and a robot called iCub, which investigated whether trust in a robot’s functional ability was a prerequisite for social acceptance, and to what extent social features like participants’ desire for control affected trust in iCub. The study found that participants were more likely to agree with iCub’s decisions in functional tasks rather than social ones. The authors conclude that functional ability is not a prerequisite for trust in social ability.
  • Kahn, P. H., et al. (2006). What is a human? Toward psychological benchmarks in the field of human-robot interaction. In ROMAN 2006-The 15th IEEE International Symposium on Robot and Human Interactive Communication (pp. 364-371). Institute of Electrical and Electronics Engineers.
    • This paper introduces benchmarks for capturing fundamental aspects of human life, with the goal of transferring these characteristics to robots. Some of the principles considered, such as moral accountability or reciprocity, can facilitate ethical behavior in AI systems.
  • Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute. https://doi.org/10.5281/zenodo.3240529
    • This report suggests that the pace at which AI advances, as well as the difficulty of understanding increasingly complex intelligent agents, heightens the need for anticipating and creating response plans to address potentially harmful effects of this technology. It gives practical advice for the British public sector regarding the need for AI interpretability, evidence-based reasoning, and moral justifiability in order to promote safe and ethical AI.
  • Malle, B. F., et al. (2015).* Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In 10th ACM/IEEE International Conference on Human-Robot Interaction (pp. 117-124).
    • The authors argue that explicit ethical mechanisms must be incorporated as autonomous robots will inevitably end up in situations wherein an ethical choice must be made. They outline several requirements for these ethical mechanisms. 
  • Malle, B. F. (2016). Integrating robot ethics and machine morality: The study and design of moral competence in robots. Ethics and Information Technology, 18(4), 243-256. https://doi.org/10.1007/s10676-015-9367-8
    • This article examines the connection between robot ethics and machine morality, arguing that robots can be designed with moral characteristics similar to those of humans. Consequently, these robots can contribute to society as ethically competent humans do. 
  • Malle, B. F., & Scheutz, M. (2019). Learning how to behave. In O. Bendel (Ed.), Handbuch Maschinenethik (pp. 255-278). Springer. https://doi.org/10.1007/978-3-658-17483-5_17
    • Malle and Scheutz present a framework for developing robotic moral competence, composed of five features: two constituents (moral norms and moral vocabulary), and three activities (moral judgement, moral action and moral communication). 
  • Moor, J. (2009).* Four kinds of ethical robots. Philosophy Now, 72(12), 12-14.
    • Moor argues that there are at least four distinct types of ethical robots. First, ethical impact agents perform actions that have ethical consequences regardless of the machine’s intention. Second, implicit ethical agents are designed to have built-in ethical actions. Third, explicit ethical agents can make ethical determinations themselves. Fourth, full ethical agents can make ethical determinations, but also have features associated with human ethical agents, including consciousness, intentionality, and free will.
  • Riek, L., & Howard, D. (2014). A code of ethics for the human-robot interaction profession. In Proceedings of We Robot 2014. https://robots.law.miami.edu/2014/wp-content/uploads/2014/03/a-code-of-ethics-for-the-human-robot-interaction-profession-riek-howard.pdf
    • This article argues that the rights and protections present in human-to-human interaction should also exist for human-to-robot interaction. It outlines a prime directive of principles that ensure human dignity, respect for human frailty, predictability in robot behavior, and diverse morphologies.
  • Russell, S., et al. (2015). Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(4), 105-114.
    • This paper outlines the many potential benefits of AI, while also making the reader aware of the dangers presented by this technology, whose bounds we still do not understand. It provides guidance on how to build safe and robust AI models. 
  • Sarathy, V., et al. (2017). Learning behavioral norms in uncertain and changing contexts. In 8th IEEE International Conference on Cognitive Infocommunications (pp. 301-306).
    • This article addresses the problem of teaching norms to algorithms, in light of the fact that humans are often uncertain and vague when it comes to moral norms. Using deontic logic, Dempster-Shafer theory, and a machine learning algorithm that teaches an AI norms using uncertain human data, the authors demonstrate a novel capacity for AIs to learn about morality, using context clues to provide nuance.
  • Scheutz, M., & Malle, B. F. (2014).* “Think and do the right thing”—A Plea for morally competent autonomous robots. In 2014 IEEE International Symposium on Ethics in Science, Technology and Engineering (pp. 1-4).
    • Scheutz and Malle argue that it is vital to incorporate explicit ethical mechanisms that enable moral virtue in autonomous robots, in light of their frequent use in ethically charged scenarios. 
  • Scheutz, M., et al. (2015). Towards morally sensitive action selection for autonomous social robots. In 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (pp. 492-497). https://doi.org/10.1109/ROMAN.2015.7333661
    • The authors argue that autonomous social robots must be taught to anticipate norm violations and seek to prevent them. If such situations cannot be prevented in a given context, robots must be able to justify their action. The authors present an action execution system as a potential solution to this problem. 
  • Scheutz, M. (2017). The case for explicit ethical agents. AI Magazine38(4), 57-64. https://doi.org/10.1609/aimag.v38i4.2746
    • Scheutz presents his case for the development of what Moor calls explicit ethical agents. He argues that although machine ethics is a growing field, more work needs to be done to create cognitive architecture that can judge situations based on morality, for both humans and robots.
  • Stange, S., & Kopp, S. (2020). Effects of a social robot’s self-explanations on how humans understand and evaluate its behavior. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 619-627). https://doi.org/10.1145/3319502.3374802
    • This paper investigates whether or not a robot’s ability to self-explain its own behaviour affects user perception of that behaviour. Stange & Kopp found that all types of explanation strategies increased understanding and acceptance of robot behaviour.
  • Tavani, H. T. (2018). Can social robots qualify for moral consideration? Reframing the question about robot rights. Information, 9(4), 73. https://doi.org/10.3390/info9040073
    • Tavani suggests that current debates on whether robots can have rights are limited because they do not explicitly define what rights for robots would mean and what specific rights are at stake. She argues that the question of whether robots should have rights should be reframed as asking whether some social robots qualify for moral consideration as moral patients. Tavani argues that they do.
  • Torrance, S., & Chrisley, R. (2015).* Modelling consciousness-dependent expertise in machine medical moral agents. In P. van Rysewyk & M. Pontier (Eds.), Machine medical ethics (pp. 291-316). Springer International Publishing.
    • This article examines the limitations of current AI designs, stating that current models for medical AI systems fail to account for machine consciousness, thereby limiting their ethical functionality. The authors argue machine consciousness plays a vital role in moral decision-making, and thus it would be prudent for AI designers to think about consciousness when creating these machines.
  • van Wynsberghe, A., & Robbins, S. (2019). Critiquing the reasons for making artificial moral agents. Science and Engineering Ethics, 25(3), 719-735. https://doi.org/10.1007/s11948-018-0030-8
    • This article examines issues relating to the development of artificial moral agents (AMAs) and argues that ethicists have yet to provide good arguments for the development of such machines. The authors argue that the development of AMAs should not continue until such arguments are given.
  • Yampolskiy, R. V. (2013). Artificial intelligence safety engineering: Why machine ethics is a wrong approach. In V. Muller (Ed.), Philosophy and Theory of Artificial Intelligence (pp. 389-396). Springer.
    • The author argues that giving machines rights or allowing them to make ethical decisions should not be the top priority. Instead, the scientific community should focus on formally verifiable systems that are demonstrably safe in the presence of self-improvement because AI is a dynamic technology.
  • Yu, H., et al. (2018). Building ethics into artificial intelligence. In J. Lang (Ed.), Proceedings of the 27th International Joint Conference on Artificial Intelligence (pp. 5527-5533). AAAI Press.
    • This article conducts a thorough analysis of existing discussions about ethical decision-making by AI. Four main topics are investigated, including ethical dilemmas such as trolley problems involving autonomous vehicles and cases where AI can influence human behavior and potentially decrease autonomy. This analysis paves the way for a discussion about how to integrate AI systems into society.
  • Ziemke, T. (2008). On the role of emotion in biological and robotic autonomy. BioSystems, 91(2), 401-408. https://doi.org/10.1016/j.biosystems.2007.05.015
    • This article discusses the difference between the autonomy of biological beings and the autonomy of robots from the perspective of cognitive science. 

Chapter 25. Integrating Ethical Values and Economic Value to Steer Progress in Artificial Intelligence (Anton Korinek)⬆︎

  • Acemoglu, D., & Restrepo, P. (2019).* The wrong kind of AI? Artificial intelligence and the future of labor demand (NBER Working Paper 25682). National Bureau of Economic Research. https://www.nber.org/papers/w25682
    • This paper argues that recent technological change has been biased towards automation, with insufficient focus on creating new tasks where labor can be productively employed. The paper suggests that the consequences of this choice have been stagnating labor demand, declining labor share in national income, rising inequality, and lower productivity growth. The paper argues that the current tendency to develop AI in the direction of further automation could lead to missing out on the promise of the “right” kind of AI with better economic and social outcomes.
  • Agrawal, A., et al. (2019). Economic policy for artificial intelligence. Innovation Policy and the Economy, 19(1), 139-159.
    • This article argues that policy will influence the impact of artificial intelligence on society in two key dimensions: diffusion and consequences. First, in addition to subsidies and intellectual property (IP) policy that will influence the diffusion of AI in ways similar to their effect on other technologies, the article presents three policy categories—privacy, trade, and liability—as uniquely salient in their influence on the diffusion patterns of AI. Second, the article suggests labor and antitrust policies will influence the consequences of AI in terms of employment, inequality, and competition.
  • Autor, D. H., et al. (2003). The skill content of recent technological change: An empirical exploration. The Quarterly Journal of Economics, 118(4), 1279-1333. https://doi.org/10.1162/003355303322552801
    • The authors perform an empirical measurement of how the rapid adoption of computers in the workplace impacted labor between 1960 and 1998. They argue that human performance of analytic routine tasks, such as calculation, and manual routine tasks, such as part assembly, can be significantly substituted by computers. Computers also strongly complement the human performance of nonroutine analytic tasks, such as medical diagnosis. The authors use econometric models to demonstrate that substitution and complementarity have driven changes in labor demand as computer capital became more affordable. 
  • Autor, D. (2015).* Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3–30.
    • This article argues that the polarization of the labor market is unlikely to continue very far into the future, reflecting on how recent and future advances in artificial intelligence and robotics should shape our thinking about the likely trajectory of occupational change and employment growth. It argues that the interplay between machine and human comparative advantage allows computers to substitute for workers in performing routine, codifiable tasks while amplifying the comparative advantage of workers in supplying problem-solving skills, adaptability, and creativity.
  • Bolton, C., et al. (2018). The power of human-machine collaboration: Artificial intelligence, business automation, and the smart economy. Economics, Management, and Financial Markets, 13(4), 51-56.
    • This article reviews and advances existing literature concerning the power of human–machine collaboration. Using and replicating data from Accenture, BBC, CellStrat, eMarketer, Frontier Economics, MIT Research Report, Morar Consulting, PwC, and Squiz, the authors analyze and estimate the impact of artificial intelligence (AI) on industry growth, including real annual GVA growth by 2035, estimated net job creation by industry sector (2017–2037), reasons given by global companies for AI adoption, and the leading advantages of AI for international organizations.
  • Bostrom, N. (2014).* Superintelligence: Paths, dangers, strategies. Oxford University Press.  
    • This book argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant life form on Earth. The book argues that sufficiently intelligent machines could improve their own capabilities faster than human computer scientists, and the outcome could be an existential catastrophe for humans.
  • Brynjolfsson, E., & McAfee, A. (2015).* The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W.W. Norton.
    • This book identifies the best strategies for survival and offers a new path to prosperity in the midst of unprecedented technological and economic change. The authors’ suggestions include revamping education so that it prepares people for the next economy instead of the last one, designing new collaborations that pair brute processing power with human ingenuity, and embracing policies that make sense in a radically transformed landscape.
  • Brynjolfsson, E., et al. (2019). Does machine translation affect international trade? Evidence from a large digital platform. Management Science, 65(12), 5449-5460.
    • Using data from a large digital platform, the authors study machine translation, finding that the introduction of a new machine translation system significantly increased international trade on the platform, raising exports by 10.9%. Furthermore, the study found that heterogeneous treatment effects are consistent with a substantial reduction in translation costs. The authors argue that these results provide causal evidence that language barriers significantly hinder trade and that AI has already begun to improve economic efficiency in at least one domain.
  • Ernst, E., et al. (2019). Economics of artificial intelligence: Implications for the future of work. IZA Journal of Labor Policy, 9(1), 7-72.
    • This paper discusses the rationales for fears of widespread job loss due to artificial intelligence, comparing this technology to previous waves of automation. The paper argues that substantial productivity gains can ensue, including for developing countries, given the vastly reduced capital costs that some applications have demonstrated, especially among the low-skilled. To address the risk of increasing inequality, the paper calls for new forms of regulation for the digital economy.
  • Floridi, L. (2016). Should we be afraid of AI? Aeon Essays. https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible
    • In this opinion piece, Floridi argues against fears that AI will achieve superintelligence. He claims that AI is unable to generalize beyond mundane and trivial tasks, and points to its continued inability to pass simple Turing Tests as evidence against the AI singularity. He further cautions that placing too much emphasis on superintelligence distracts from concrete social issues both exacerbated and alleviated by AI, such as economic inequality.
  • Frey, C. B. (2019). The technology trap: Capital, labor, and power in the age of automation. Princeton University Press.
    • From the Industrial Revolution to the age of artificial intelligence, this book examines the history of technological progress and how it has radically shifted the distribution of economic and political power among society’s members. Just as the Industrial Revolution eventually brought about extraordinary benefits for society, this book argues that artificial intelligence systems have the potential to do the same. 
  • Grace, K., et al. (2018). When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research, 62, 729-754. https://doi.org/10.1613/jair.1.11222
    • This article conducts a large-scale survey of AI and machine-learning experts to develop estimates of when AI development will reach key milestones, such as the replacement of humans in jobs demanding higher levels of skill and expertise. It finds that researchers believe AI will be capable of writing bestselling books and working as surgeons by 2053, and even potentially automating all human jobs within 120 years. However, individual estimates vary substantially, and few experts believe that superintelligence will be achieved in the near future.
  • Graetz, G., & Michaels, G. (2018). Robots at work. Review of Economics and Statistics, 100(5), 753-768. https://doi.org/10.1162/rest_a_00754
    • In this study, the authors investigate the economic implications of the widespread adoption of industrial robots by analyzing data from 17 developed countries between 1993 and 2007. They find that the increased use of robotics accounts for 15% of the productivity growth in these economies over the period. Furthermore, there is evidence that robot densification is associated with higher average wages and no significant change in aggregate working hours. However, when workers of different skill levels are analyzed separately, the negative impact on low-skilled workers is offset by the gains received by medium- and high-skilled workers.
  • Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Eamon Dolan Books.
    • This book explores the origins and ramifications of the “ghost work” employed by Big Tech corporations. In order to support the operation of their vast online platforms and services, these corporations use a hidden labor force to perform crowdsourced microtasks such as data labeling, content moderation, and service fine-tuning. Employment through ghost work, the authors argue, arises paradoxically out of the development of AI-based automation that otherwise threatens traditional labor. In turn, growing concerns about this new underclass of workers, such as accountability, trust, and the insufficient regulation of on-demand work, need to be addressed.
  • Korinek, A. (2019).* The rise of artificially intelligent agents. University of Virginia.
    • This paper develops an economic framework that describes humans and Artificially Intelligent Agents (AIA) symmetrically as goal-oriented entities that each (i) absorb scarce resources, (ii) supply their factor services to the economy, (iii) exhibit defined behavior, and (iv) are subject to specified laws of motion. After introducing a resource allocation frontier that captures the distribution of resources between humans and machines, the paper describes several mechanisms that may provide AIAs with autonomous control over resources, both within and outside of our human system of property rights. The paper argues that in the limit case of an AIA-only economy, AIAs both produce and absorb large quantities of output without any role for humans, rejecting the fallacy that human demand is necessary to support economic activity. 
  • Korinek, A., & Stiglitz, J. (2019).* Artificial intelligence and its implications for income distribution and unemployment. In A. Agrawal, J. Gans, & A. Goldfarb, (Eds.), The economics of artificial intelligence, (pp. 349–390). NBER and University of Chicago Press.
    • This paper provides an overview of economic issues associated with artificial intelligence by discussing the general conditions under which these technologies may lead to a Pareto improvement, delineating the two main channels through which inequality is affected, and providing several simple economic models to describe how policy can counter these effects. Finally, the paper describes the two main channels through which technological progress may lead to technological unemployment and speculates on how technologies to create super-human levels of intelligence may affect inequality.
  • Makridakis, S. (2017). The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46-60. https://doi.org/10.1016/j.futures.2017.03.006
    • This article describes the rapid advances made in the development of AI technology and draws parallels to the industrial and digital revolutions of the preceding two centuries. The author analyzes potential outcomes characterized by four viewpoints of AI research: optimism, pessimism, pragmatism, and skepticism. Based on these comparisons, the author offers predictions on whether individual Big Tech firms will succeed and on how the labor and economic landscape will be changed by increasing automation.
  • Naidu, S., et al. (2019).* Economics for inclusive prosperity: An introduction. Economists for Inclusive Prosperity. http://www.econfip.org
    • This article argues that political institutions in the United States favor higher-income individuals over lower-income individuals and ethnic majorities over ethnic minorities. It describes how this is accomplished through a myriad of policies that shape who votes, allow for differential influence and access by the wealthy, structure voting districts to dilute the impact of under-represented voters, and enable the outsized influence of pro-business ideas through media and membership organizations.
  • Raghavan, M., et al. (2020). Mitigating bias in algorithmic hiring: Evaluating claims and practices. In M. Hildebrandt & C. Castillo (Eds.), Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, (pp. 469-481). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372828
    • In response to the increasing amount of public scrutiny on algorithmic hiring in the private sector, this paper conducts a qualitative survey of vendors providing AI-enhanced solutions for employee assessment. It takes note of the features analyzed by the vendors, how the vendors claim to have validated their results, and whether fairness is considered. The authors conclude with policy and technical recommendations for ensuring more effective, appropriate, and fair algorithmic hiring practices.
  • Sen, A. (1987).* On ethics and economics. Blackwell Publishing.
    • This book argues that welfare economics can be enriched by paying more explicit attention to ethics, and that modern ethical studies can also benefit from closer contact with economics. It argues further that even predictive and descriptive economics can be helped by making more room for welfare-economic considerations in the explanation of behaviour.
  • Sunstein, C. R. (2015). The ethics of nudging. Yale Journal on Regulation, 32(2), 413-450. https://digitalcommons.law.yale.edu/yjreg/vol32/iss2/6
    • In this article, Sunstein defends behavioral nudge theory against criticism based on its apparent threats to human agency. He argues first that nudges are inevitable and cannot be avoided, and second, that they are highly context-sensitive and cannot be considered universally unethical. Arguing they are a form of “libertarian paternalism,” Sunstein claims that nudges promote better welfare, for example by guiding people towards healthier life choices. He suggests that nudges also preserve autonomy by enabling people to make better, informed decisions without explicitly constraining the decision-making process.
  • Tegmark, M. (2017).* Life 3.0: Being human in the age of artificial intelligence. Knopf.
    • This book discusses Artificial Intelligence (AI) and its impact on the future of life on Earth and beyond. The book discusses a variety of societal implications, what can be done to maximize the chances of a positive outcome, and potential futures for humanity, technology, and combinations thereof.
  • Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
    • The author argues that the growing financial dominance of Big Tech encompasses a new form of capitalism founded on surveillance. While industrial capitalism focuses on the exploitation of human labor and natural resources, “surveillance capitalism” benefits from the monetization of behavioral data. This data is captured, analyzed, and optimized in an “instrumentarian” fashion for profit using a global, computational infrastructure. This argument is developed through a historical analysis of the use of this infrastructure, or the “Big Other”, by both government agencies and Silicon Valley giants like Google and Facebook. The author argues that surveillance capitalism poses a fundamental threat to democratic values and institutions.

Chapter 26. Fairness Criteria through the Lens of Directed Acyclic Graphs: A Statistical Modeling Perspective (Benjamin R. Baer, Daniel E. Gilbert, and Martin T. Wells)⬆︎

  • Angwin, J., et al. (2016).* Machine bias: There’s software used across the country to predict future criminals: And it’s biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
    • This investigation by ProPublica revealed that risk scores for reoffending created by artificial intelligence algorithms and used in bail decisions in the United States are often unreliable and inaccurate. The investigation further found that these scores disproportionately find Black Americans to be at higher risk, alleging that the algorithms used to produce the scores are racially biased.
  • Baeza-Yates, R., & Goel, S. (2019). Designing equitable algorithms for the web. In Companion Proceedings of the 2019 World Wide Web Conference (p. 1296).
    • This paper provides an introduction to fair machine learning, beginning with a general overview of algorithmic fairness and then discussing these issues specifically in the context of the Web. To illustrate the complications of current definitions of fairness, the article draws on a variety of classical and modern ideas from statistics, economics, and legal theory. It exposes different sources of bias and how they impact fairness, including not only data bias but also biases produced by data sampling, the algorithms per se, user interaction, and the feedback loops that result from user personalization and content creation.
  • Bareinboim, E., et al. (2014). Recovering from selection bias in causal and statistical inference. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (pp. 2410-2416).
    • This paper provides complete graphical and algorithmic conditions for recovering conditional probabilities from selection-biased data. The paper also provides graphical conditions for recoverability when unbiased data is available over a subset of the variables. Finally, the paper provides a graphical condition that generalizes the back-door criterion and serves to recover causal effects when the data is collected under preferential selection.
  • Barocas, S., et al. (2018).* Fairness and machine learning. http://www.fairmlbook.org
    • This online textbook reviews the practice of machine learning, highlighting ethical challenges and presenting approaches to mitigate them. Specifically, the book focuses on the issue of fairness considering both technical interventions and deeper questions concerning power and accountability in machine learning. 
  • Bellamy, R. K., et al. (2018). AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv preprint arXiv:1810.01943
    • This paper introduces a new open-source Python toolkit for algorithmic fairness, AI Fairness 360 (AIF360), released under an Apache v2.0 license. The main objectives of this toolkit are to help facilitate the transition of fairness research algorithms to use in an industrial setting and to provide a common framework for fairness researchers to share and evaluate algorithms. 
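    As a concrete illustration of the workflow the toolkit is meant to support, the following minimal sketch computes a standard group fairness metric on AIF360’s bundled UCI Adult dataset and applies one of its pre-processing mitigations (Reweighing). This is a sketch under the assumption of a standard AIF360 installation with the Adult data files available; class and method names follow the toolkit’s documented API.

```python
# Minimal AIF360 sketch: measure a group fairness metric, then mitigate it.
# Assumes a standard aif360 installation with the bundled UCI Adult dataset.
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# 'sex' is one of the dataset's protected attributes (1 = privileged group).
privileged = [{'sex': 1}]
unprivileged = [{'sex': 0}]

data = AdultDataset()

# Statistical parity difference: P(Y=1 | unprivileged) - P(Y=1 | privileged).
metric = BinaryLabelDatasetMetric(
    data, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Disparity before mitigation:", metric.statistical_parity_difference())

# Reweighing learns instance weights that remove the observed group disparity.
rw = Reweighing(unprivileged_groups=unprivileged,
                privileged_groups=privileged)
data_rw = rw.fit_transform(data)

metric_rw = BinaryLabelDatasetMetric(
    data_rw, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Disparity after mitigation:", metric_rw.statistical_parity_difference())
```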
  • Chouldechova, A. (2017).* Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047
    • This paper discusses a fairness criterion originating in the field of educational and psychological testing that has recently been applied to assess the fairness of recidivism prediction instruments. The author demonstrates how adherence to the criterion may lead to considerable disparate impact when recidivism prevalence differs across groups.
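    For reference, the tension the paper identifies follows from an identity linking a group’s recidivism prevalence p to its error rates; the restatement below uses standard notation (PPV for positive predictive value, FPR and FNR for false positive and false negative rates) rather than the paper’s exact symbols.

```latex
\mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\bigl(1-\mathrm{FNR}\bigr)
```

    If PPV is held equal across two groups whose prevalences p differ, this identity forces their FPR and FNR apart, which is the disparate impact the paper documents.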
  • Corbett-Davies, S., et al. (2017). Algorithmic decision-making and the cost of fairness. In S. Matwin, S. Yu, & F. Farooq (Eds.), Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 797–806). Association for Computing Machinery. https://doi.org/10.1145/3097983.3098095
    • This paper discusses algorithmic fairness as a constrained optimization problem, maximizing model utility while satisfying the criterion of formal fairness. The authors focus on the context of algorithmic decision-making in pretrial release determinations. They show that the optimal unconstrained model treats all defendants equally and compare this to optimal models that are constrained by statistical parity, predictive parity, and conditional statistical parity. They discuss the trade-off in model utility under these constraints. The paper examines data from Broward County, Florida, and discusses the practical tension between optimizing for public safety, which yields models with significant racial disparities, and optimizing for fairness, which means releasing higher-risk defendants.
  • Corbett-Davies, S., & Goel, S. (2018).* The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv:1808.00023
    • This paper argues that three prominent definitions of fairness used in machine learning, anti-classification, classification parity, and calibration, each have significant statistical issues. In contrast to these strategies, the authors argue that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce. 
  • Dwork, C., et al. (2012). Fairness through awareness. In S. Goldwasser (Ed.), Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (pp. 214–226). Association for Computing Machinery. https://doi.org/10.1145/2090236.2090255
    • This paper studies fairness in classification and discusses the goal of preventing classifier discrimination against individuals based on membership in a sensitive group while maintaining classifier utility. The framework proposes a metric for individual similarity under a classification task. The paper presents a learning algorithm for maximizing classifier utility under various fairness constraints. The authors adapt this algorithm to a fairness model that guarantees statistical parity. They relate their proposed fairness framework to tools developed for differential privacy.
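    The core individual-fairness constraint can be restated compactly as a Lipschitz condition (notation lightly adapted from the paper): a randomized classifier M, mapping individuals to distributions over outcomes, is fair with respect to a task-specific similarity metric d when

```latex
D\bigl(M(x),\,M(y)\bigr)\;\le\; d(x,\,y)\qquad \text{for all individuals } x,\,y,
```

    where D is a suitable distance between output distributions, so that individuals who are similar for the task at hand receive similar distributions over outcomes.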
  • Dwork, C., et al. (2020). Abstracting fairness: Oracles, metrics, and interpretability. arXiv preprint arXiv:2004.01840
    • This paper examines what can be learned from a fairness oracle equipped with an underlying understanding of “true” fairness. The results have implications for interpretability—a highly desired but poorly defined property of classification systems that endeavors to permit a human arbiter to reject classifiers deemed to be “unfair” or illegitimately derived.
  • Flores, A. W., et al. (2016).* False positives, false negatives, and false analyses: A rejoinder to “Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks.” Federal Probation, 80(2), 38-46.
    • This article argues that a ProPublica report exposing racial bias in COMPAS, a risk assessment tool used in the criminal justice system, was based on faulty statistics and data analysis. The authors provide their own analysis of the data used in the ProPublica piece to argue that the COMPAS tool is not racially biased.
  • Hardt, M., et al. (2016).* Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems 29 (pp. 3315–3323). 
    • This article proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, this paper shows how to optimally adjust any learned predictor so as to remove discrimination according to the authors’ definition. The authors argue that this framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision-maker, who can respond by improving the classification accuracy.
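    In the paper’s notation, with predictor Ŷ, target Y, and protected attribute A, the proposed criterion of equalized odds requires the prediction to be independent of group membership conditional on the true outcome:

```latex
\Pr\{\hat{Y}=1 \mid A=0,\,Y=y\} \;=\; \Pr\{\hat{Y}=1 \mid A=1,\,Y=y\},\qquad y\in\{0,1\},
```

    with the weaker notion of equality of opportunity imposing the condition only for the favorable outcome y = 1, i.e., equal true positive rates across groups.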
  • Herington, J. (2020). Measuring fairness in an unfair world. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 286-292).
    • This paper argues that the three most popular families of measures – unconditional independence, target-conditional independence and classification-conditional independence – make assumptions that are unsustainable in the context of an unjust world. The paper argues that implicit idealizations in these measures fall apart in the context of historical injustice, ongoing unmodeled oppression, and the permissibility of using sensitive attributes to rectify injustice. The paper puts forward an alternative framework for measuring fairness in the context of existing injustice: distributive fairness.
  • Holmes, N. (2003). Artificial intelligence: Arrogance or ignorance? Computer, 36(11), 119-120. https://doi.org/10.1109/MC.2003.1244544
    • This paper proposes the term “algoristics” as a highly suitable replacement for artificial intelligence, arguing that it is more historically correct. The author contends that placing this renamed field alongside statistics and logistics, as a branch of mathematics, would greatly benefit the computing profession.
  • Kilbertus, N., et al. (2017).* Avoiding discrimination through causal reasoning. In Advances in Neural Information Processing Systems, 30, 656–666.
    • Going beyond observational criteria, this article frames the problem of discrimination based on protected attributes in the language of causal reasoning. Through the lens of causality, the article articulates why and when observational criteria fail, exposes previously ignored subtleties and explains why they are fundamental to the problem, puts forward natural causal non-discrimination criteria, and develops algorithms that satisfy them.
  • Kleinberg, J., et al. (2017). Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807 
    • This paper formalizes three fairness conditions that lie at the heart of recent debates and argues that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. The paper’s results suggest some of the ways in which key notions of fairness are incompatible with each other and provide a framework for thinking about the trade-offs between them.
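    Stated in terms of a risk score S, a true outcome Y, and groups indexed by A, the three conditions amount to the following (a paraphrase of the paper’s formalization):

```latex
\begin{align*}
&\text{(i) Calibration within groups:} & \Pr\{Y = 1 \mid S = s,\, A = a\} &= s \quad \text{for all } s,\,a,\\
&\text{(ii) Balance for the positive class:} & \mathbb{E}[S \mid Y = 1,\, A = 0] &= \mathbb{E}[S \mid Y = 1,\, A = 1],\\
&\text{(iii) Balance for the negative class:} & \mathbb{E}[S \mid Y = 0,\, A = 0] &= \mathbb{E}[S \mid Y = 0,\, A = 1].
\end{align*}
```

    The impossibility result shows that all three can hold simultaneously only in the degenerate cases of equal base rates across groups or perfect prediction.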
  • Kusner, M. J., et al. (2017). Counterfactual fairness. In I. Guyon & U. V. Luxburg (Eds.), Proceedings of the 31st International Conference on Neural Information Processing Systems (pp. 4069-4079). https://papers.nips.cc/paper/2017/hash/a486cd07e4ac3d270571622f4f316ec5-Abstract.html
    • This paper discusses the criterion of fairness through notation and methods from causal inference. The framework that the authors develop considers how the model outcome for each individual may be influenced by sensitive attributes. They consider a model to be counterfactually fair if the outcome for each individual would not change had their sensitive attribute been different. The Total Effect criterion presented in Pearl’s notation for causal inference is a special case of their proposed approach to counterfactual fairness.
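    In the counterfactual notation the paper adopts from Pearl, with protected attribute A, features X, and latent background variables U, a predictor Ŷ is counterfactually fair when intervening on A leaves its distribution unchanged:

```latex
P\bigl(\hat{Y}_{A\leftarrow a}(U)=y \mid X=x,\,A=a\bigr) \;=\; P\bigl(\hat{Y}_{A\leftarrow a'}(U)=y \mid X=x,\,A=a\bigr)
```

    for every outcome y, context (x, a), and counterfactual value a′ of the protected attribute.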
  • Liu, L. T., et al. (2018). Delayed impact of fair machine learning. arXiv preprint arXiv:1803.04383.
    • This article presents a study of how static fairness criteria interact with temporal indicators of well-being, such as long-term improvement, stagnation, and decline in a variable of interest. The results highlight the importance of measurement and temporal modeling in the evaluation of fairness criteria, suggesting a range of new challenges and trade-offs.
  • Mitchell, S., et al. (2018). Prediction-based decisions and fairness: A catalogue of choices, assumptions, and definitions. arXiv:1811.07867
    • This paper explicates the various choices and assumptions made—often implicitly—to justify the use of prediction-based decisions. The paper demonstrates how such choices and assumptions can raise concerns about fairness and presents a notationally consistent catalog of fairness definitions from the ML literature. The paper offers a concise reference for thinking through the choices, assumptions, and fairness considerations of prediction-based decision systems.
  • Overdorf, R., et al. (2018). Questioning the assumptions behind fairness solutions. arXiv preprint arXiv:1811.11293.
    • This paper revisits assumptions made about the service providers in fairness solutions, namely that service providers have (i) the incentives or (ii) the means to mitigate optimization externalities. Moreover, the paper argues that the impact of these systems on the environments in which they operate suggests that we need (iii) novel frameworks that consider systems other than algorithmic decision-making and recommender systems, and (iv) solutions that go beyond removing related algorithmic biases. Going forward, the authors propose Protective Optimization Technologies that enable optimization subjects to defend against negative consequences of optimization systems.
  • Pearl, J. (1993). Graphical models, causality, and intervention. Statistical Science, 8, 266–269.
    • This paper provides an early connection between Directed Acyclic Graphical models and causality. The paper gives conditions for bias-free estimation of causal effects and introduces the back-door criterion for reasoning about the confounding relationships in graphical models.
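    The criterion licenses the familiar adjustment formula: if a set of variables Z blocks every back-door path from X to Y and contains no descendant of X, then the interventional distribution is identified from observational data as

```latex
P(y \mid \mathrm{do}(x)) \;=\; \sum_{z} P(y \mid x,\,z)\,P(z).
```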
  • Pearl, J. (2000). Causality: Models, reasoning, and inference. Cambridge University Press.
    • This book details a framework for reasoning about causal models. This work consolidates various theoretical results into a rigorous mathematical treatment, providing the foundation for later developments in the field of causal reasoning.
  • Pleiss, G., et al. (2017). On fairness and calibration. In Advances in Neural Information Processing Systems 30 (pp. 5680–5689). 
    • This paper investigates the tension between minimizing error disparity across different population groups while maintaining calibrated probability estimates. The authors show that calibration is compatible only with a single error constraint (i.e., equal false-negative rates across groups), and show that any algorithm that satisfies this relaxation is no better than randomizing a percentage of predictions for an existing classifier. These findings, which extend and generalize existing results, are empirically confirmed on several datasets.
  • Rzepka, R., & Araki, K. (2005). What statistics could do for ethics? The idea of common sense processing based safety valve. In AAAI Fall Symposium on Machine Ethics (Technical Report FS-05-06, pp. 85-87).
    • This paper introduces an approach to the ethical issue of machine intelligence developed through experiments with automatic common-sense retrieval and affective computing for open-domain talking systems. The authors use automatic common-sense knowledge retrieval, which makes it possible to calculate the common consequences of actions and the average emotional load of those consequences.
  • Zemel, R., et al. (2013). Learning fair representations. Proceedings of the 30th International Conference on Machine Learning, 28(3), 325-333.
    • This paper proposes a learning algorithm for classification subject to both group and individual fairness criteria. The authors formulate the problem as an optimization over two competing goals: encoding the data as well as possible while simultaneously obfuscating information about individuals’ membership in protected groups.
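    Schematically, the paper combines these competing goals, together with predictive accuracy, into a single weighted objective (the A terms are trade-off hyperparameters, following the paper’s presentation):

```latex
L \;=\; A_z\,L_z \;+\; A_x\,L_x \;+\; A_y\,L_y,
```

    where L_z penalizes dependence between the learned representation and protected-group membership (a statistical parity term), L_x is the reconstruction error of the encoding, and L_y is the prediction error on the classification target.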
  • Zhang, J., & Bareinboim, E. (2018). Fairness in decision-making—the causal explanation formula. In Thirty-Second AAAI Conference on Artificial Intelligence. AAAI Publications.
    • This paper introduces three new fine-grained measures of the transmission of change from stimulus to effect, which the authors call counterfactual direct (Ctf-DE), indirect (Ctf-IE), and spurious (Ctf-SE) effects. The authors apply these measures to various discrimination analysis tasks and run extensive simulations, including detection, evaluation, and optimization of decision-making under fairness constraints. The paper concludes by studying the trade-off between different types of fairness criteria (outcome and procedural) and provides a quantitative approach to policy implementation and the design of fair AI systems.

Chapter 27. Automating Origination: Perspectives from the Humanities (Avery Slater)⬆︎

  • Andersson, A. E. (2009). Economics of creativity. In C. Karlsson, P. Cheshire, & A. E. Andersson (Eds.), New directions in regional economic development (pp. 79-95). Springer.
    • This paper explores the past effects of the division of labor system as posited by Adam Smith and the recent rise in creativity that goes against this system. The author argues that as specialization progressed, people were confined to a few very simple operations and this should have limited creativity. However, in recent times there has been a growth in creative industries such as research and development, scientific research, and the arts.
  • Ariza, C. (2009). The interrogator as critic: The Turing test and the evaluation of generative music systems. Computer Music Journal, 33(2), 48-70.
    • This article explores the relationship between algorithmically generated music systems and the human ability to detect their generated nature. The author argues that listening tests to detect this distinction do not constitute true Turing Tests.
  • Boden, M. A. (1990).* The creative mind: Myths and mechanisms. Weidenfeld & Nicolson / Basic Books.
    • This book explores human creativity and presents a scientific framework for understanding how creativity arose and how it is defined. 
  • Boden, M. (Ed.). (1994).* Dimensions of creativity. MIT Press.
    • In this book, the authors explore how creative ideas arise, and whether creativity can be objectively defined and measured.
  • Cardoso, A., & Bento, C. (Eds.). (2006).* Computational Creativity [Special issue]. Knowledge-Based Systems, 19(7).
    • This special issue is focused on characterizing and establishing computational models of creativity. The papers encompass four topics: models of creativity, analogy and metaphor in creative systems, multiagent systems, and formal approaches to creativity.
  • Carnovalini, F., & Rodà, A. (2020). Computational creativity and music generation systems: An introduction to the state of the art. Frontiers in Artificial Intelligence, 3(14). https://doi.org/10.3389/frai.2020.00014
    • This article surveys the landscape of Music Generation, a subfield of computational creativity that focuses on algorithmically produced music. Providing a substantial introduction to the topic, the authors outline creativity in computational and human terms and review past challenges surrounding music generation systems. They provide current research on improvements to these challenges and suggest future possibilities. 
  • Clancey, W. J. (1997). Situated cognition: On human knowledge and computer representations. Cambridge University Press.
    • This book explores and explains the ‘situated cognition’ movement in cognitive science, a new metaphysics of mind: a dynamical-systems-based, ecologically oriented model of the mind. Researchers in this tradition suggest that a full understanding of the mind will require systematic study of the dynamics of interaction among mind, body, and world.
  • Colton, S., & Wiggins, G. A. (2012). Computational creativity: The final frontier? In Proceedings of the 20th European Conference on Artificial Intelligence (ECAI 2012) (pp. 21-26). https://doi.org/10.3233/978-1-61499-098-7-21
    • This paper argues that computational creativity constitutes a frontier for AI research beyond all others. The authors do so through an exploration of the field of computational creativity via a working definition; a brief history of seminal work; an exploration of the main issues, technologies, and ideas; and a look towards future directions.
  • Dodgson, M., et al. (2005). Think, play, do: Technology, innovation, and organization. Oxford University Press.
    • In this book, the authors argue that the innovation process is changing profoundly, partly due to innovation technologies. In response, the authors propose a new schema for the innovation process: Think, Play, Do.
  • Edwards, S. M. (2001). The technology paradox: Efficiency versus creativity. Creativity Research Journal, 13(2), 221-228.
    • This article aims to highlight the impact of technology on the ability of individuals to be creative within society. First, the authors review barriers that individuals must overcome to function creatively in the information age, along with the process by which creativity occurs. These factors are then presented alongside the consequences of technological and computational development. Finally, the authors offer suggestions on the coexistence of creativity and technology in the future.
  • Gizzi, E., et al. (2020). From computational creativity to creative problem solving agents. International Conference on Computational Creativity (ICCC).
    • This article introduces creative problem solving (CPS) as a skill for AI that builds on computational creativity. In defining CPS, the authors adopt an interdisciplinary model using problem-solving concepts from AI and aspects of computational creativity. 
  • Guzman, A. L., & Lewis, S. C. (2020). Artificial intelligence and communication: A human–machine communication research agenda. New Media & Society, 22(1), 70-86. https://doi.org/10.1177/1461444819858691
    • This paper addresses the gap between communication theory and AI. With new and growing interactions between humans and technologies, communication theory is faced with the challenge of understanding these new relations, which do not fit into existing paradigms. This paper discusses these challenges through a human-machine communication (HMC) framework, focusing on the functional, relational, and metaphysical aspects of AI. 
  • Jordanous, A. (2012). A standardised procedure for evaluating creative systems: Computational creativity evaluation based on what it is to be creative. Cognitive Computation, 4(3), 246-279.
    • In this paper, the author aims to address the issue of defining what it means for a computer to be creative; given that there is no consensus on this for human creativity, its computational equivalent is equally nebulous. Thus, the paper proposes a Standardised Procedure for Evaluating Creative Systems (SPECS) to measure and define computational creativity. The SPECS methodology is then demonstrated through a comparative case study evaluating computational creativity systems that improvise music.
  • Kantosalo, A., & Jordanous, A. (2020). Role-based perceptions of computer participants in human-computer co-creativity [Paper presentation]. 7th Computational Creativity Symposium at AISB 2020, London, UK. https://kar.kent.ac.uk/id/eprint/80484
    • This paper explores the place of the computer in creative collaborations between humans and computers, and the past definitions of these positions. In looking at both the positive and negative aspects of these roles, this paper seeks to understand the potential for computers in human-computer co-creativity. Through analysis and a comparative review, the authors consider both the current roles of co-creative computer systems and future possibilities. 
  • Langley, P., Simon, H., Bradshaw, G. L., & Zytkow, J. (1986).* Scientific discovery: Computational explorations of the creative process. MIT Press.
    • Scientific Discovery examines the nature of scientific research and reviews the arguments for and against a normative theory of discovery. This examination is done in the context of a series of artificial intelligence programs developed by the authors that can simulate the human thought processes used to discover scientific laws.
  • McCorduck, P. (1991).* Aaron’s code: Meta-art, artificial intelligence, and the work of Harold Cohen. W. H. Freeman and Company.
    • This book examines the connection between art and computer technology. This is done through an exploration of the work of the artist Harold Cohen, who created an elaborate computer program that makes drawings autonomously, without human intervention.
  • Montal, T., & Reich, Z. (2017). I, robot. You, journalist. Who is the author? Authorship, bylines and full disclosure in automated journalism. Digital Journalism, 5(7), 829-849.
    • This paper explores the increasing reliance on algorithms to generate news automatically, particularly in the form of algorithmic authorship. The use of this technology has potential psychological, legal, and occupational implications for news organizations, journalists, and their audiences. The authors argue for a consistent and comprehensive crediting policy that sponsors public interest in automated news.
  • Moruzzi, C. (2021). Measuring creativity: An account of natural and artificial creativity. European Journal for Philosophy of Science, 11. https://doi.org/10.1007/s13194-020-00313-w
    • This paper addresses a gap in current discussions about creativity: how creativity should be measured. The author provides a model of creativity that is not anthropocentric in nature, opening it up to possibilities in exploring non-human and artificial creativity. This framework focuses on internal features of creativity, mainly problem-solving, evaluation, and naivety. 
  • Norman, D. (2014). Things that make us smart: Defending human attributes in the age of the machine. Diversion Books.
    • In this book, Norman argues in favor of a person-centered redesign of the machines that surround our lives. The book explores the complex interaction between human thought and the technology it creates. The author argues that the machines we create begin to shape how we think and, at times, even what we value, and thus argues in favor of redevelopment of machines that fit our minds, rather than minds that must conform to the machine.
  • Partridge, D., & Rowe, J. (1994).* Computers and creativity. Intellect Books.
    • Through a computational modelling perspective, this book examines theories and models of the creative process in humans. This is done through an exploration of both input creativity (the analytic interpretation of input information) and output creativity (the artistic, synthetic process of generating novel innovations).
  • Paul, E. S., & Kaufman, S. B. (Eds.). (2014).* The philosophy of creativity: New essays. Oxford University Press.
    • In this book, the authors argue that creativity should be explored in connection to, and in the context of, philosophy. The aim is to illustrate the value of interdisciplinary exchange and explore issues such as the role of consciousness in the creative process, whether great works of literature give us insight into human nature, whether a computer program can really be creative, and the definition of creativity. 
  • Pérez y Pérez, R., & Ackerman, M. (2020).* Towards a methodology for field work in computational creativity. New Generation Computing, 34(4), 713-737. https://doi.org/10.1007/s00354-020-00105-z
    • This paper focuses on fieldwork in computational creativity and provides a methodology for this work. The authors look at fieldwork in terms of what it means to make a creative computer system highly accessible and the influence these systems can have when interacting with society. Reflecting on their experience of making their systems ALYSIA and MEXICA widely available, the authors propose a flexible five-step methodology with the hopes that it can be broadly tested throughout the computational creativity community. 
  • Ritchie, G. (2019).* The evaluation of creative systems. In T. Veale & F. A. Cardoso (Eds.), Computational Creativity: The philosophy and engineering of autonomously creative systems (pp. 159-194). Springer. https://doi.org/10.1007/978-3-319-43610-4_8
    • This chapter looks at methods for the evaluation and assessment of computational creativity and creative systems. It highlights how there is a lack of standard methodology for assessment, and it questions both what creative properties should be focused on and how these properties should be measured. 
  • Sarkar, A., & Cooper, S. (2020).* Towards game design via creative machine learning (GDMCL). In 2020 IEEE Conference on Games (CoG) (pp. 744-751). https://doi.org/10.1109/CoG47356.2020.9231927
    • This article questions the lack of creative tasks assigned to machine learning systems in game design, despite the emergence of this practice in other areas. Adopting creative machine learning methods from visual art and music, the authors argue for similar approaches in game design and recast these techniques collectively as Game Design via Creative Machine Learning.
  • Schmidhuber, J. (1997).* Low-complexity art. Leonardo, Journal of the International Society for the Arts, Sciences, and Technology, 30(2), 97-103.
    • This article explores the relation between depicting the general essence of objects, viewed as the computer-age equivalent of minimal art, and informal notions such as “good artistic style” and “beauty.” In an attempt to formalize certain aspects of depicting the essence of objects, the author proposes and analyzes an art form referred to as low-complexity art.
  • Sözbilir, F. (2018). The interaction between social capital, creativity and efficiency in organizations. Thinking Skills and Creativity, 27(1), 92-100. https://doi.org/10.1016/j.tsc.2017.12.006
    • This paper discusses the factor of social capital in organizations in relation to creativity and efficiency. The study uses participants in a public Turkish organization and concludes that social capital has a positive impact on both creativity and efficiency. It further finds that there is a positive link between creativity and efficiency.  
  • Sternberg, R. J., & Lubart, T. I. (1995). Defying the crowd: Cultivating creativity in a culture of conformity. Free Press.
    • This book examines how institutions such as business and education often impede the creative process and how the creative person typically finds ways to subvert those institutions to promote his or her ideas. Furthermore, by presenting a theory of how institutions can learn to foster creativity, the authors explore how individuals can learn to become more creative.
  • Varshney, L. R., et al. (2013). Cognition as a part of computational creativity. In IEEE 12th International Conference on Cognitive Informatics and Cognitive Computing (pp. 36-43).
    • This paper examines the relationship between two distinct fields that have developed in a parallel fashion: computational creativity and cognitive computing. The authors then argue that the two fields overlap in one precise way: the evaluation or assessment of artifacts with respect to creativity.
  • Veale, T., et al. (2006).* Computational Creativity [Special Issue]. New Generation Computing, 24(3).
    • A pure definition of creativity—pure, at least, in the sense of being metaphor-free and grounded in objective fact—remains elusive, made all the more vexing by our fundamental inability to pin creativity down in formal terms. In this special issue, the contributing authors present their respective definitions of creativity.
  • Wiggins, G. A. (2006).* A preliminary framework for description, analysis and comparison of creative systems. Knowledge-Based Systems, 19(7), 449-458.
    • This article summarizes and explores concepts presented in and arising from Margaret Boden’s (1990) descriptive hierarchy of creativity. By formalizing the ideas Boden proposes, the author argues that Boden’s framework is more uniform and more powerful than it first appears. Finally, the paper explores potential routes to achieve a model which allows detailed comparison, and hence better understanding, of systems that exhibit behavior that would be called ‘‘creative’’ in humans.

Chapter 28. Perspectives on Ethics of AI: Philosophy (David J. Gunkel)⬆︎

  • Anderson, M., & Anderson, S. L. (Eds.). (2011).* Machine ethics. Cambridge University Press.
    • This collection of essays by philosophers and artificial intelligence researchers focuses on ways to enable machines to function in an ethically responsible manner, both within their interactions with humans and, as machines evolve, when engaging in their own decision making. These essays discuss how machines that function autonomously might be accorded ethical capacity, and whether ethically directed enhancements to machines are advisable or necessary. 
  • Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish & W. Ramsey (Eds.), The Cambridge Handbook of Artificial Intelligence (pp. 316–334). Cambridge University Press.
    • This chapter surveys some of the ethical challenges that may arise from the creation of thinking machines that could potentially harm humans and other morally relevant beings. Addressing how to assess whether, and in what circumstances, thinking machines might have moral status, the authors ask how humans might ensure that machines of advanced intelligence are operated safely, and towards purposes that benefit society. 
  • Brooks, R. A. (2003). Flesh and machines: How robots will change us. Vintage.
    • In this book, Rodney A. Brooks, director of the MIT Artificial Intelligence Laboratory, outlines the history of robots, investigates the changing relationships between humans and robots, and explores the growing role that robots could play in human society. In doing so, Brooks considers the eventual likelihood of human interaction with robots that will think, feel, repair themselves, and even reproduce.
  • Buckner, C. (2019). Deep learning: A philosophical introduction. Philosophy Compass, 14(10). https://doi.org/10.1111/phc3.12625
    • This paper offers an accessible review of the main features of Deep Convolutional Neural Networks (DCNNs), and a comparison of deep and shallow network architectures. Even as DCNNs have exceeded the predicted upper limits on artificial intelligence performance, e.g., in defeating world champions in strategy games as complex as Go and chess, the author states that there remains no universally accepted explanation as to why they work so well. Considering this question, the author canvasses three related lines of inquiry that could be profitably explored by philosophers of mind and cognitive science.
  • Buckner, C. (2018). Empiricism without magic: Transformational abstraction in deep convolutional neural networks. Synthese, 195, 5339–5372. https://doi.org/10.1007/s11229-018-01949-1
    • Addressing Deep Convolutional Neural Networks (DCNNs), this paper considers recent engineering advances in the context of philosophy of mind. The author argues that DCNNs are successful across a number of domains because they model a distinctive kind of abstraction from experience. On the philosophical side, this engineering achievement vindicates some themes from classical empiricism about reasoning. It supports the empiricist idea that information abstracted from experience enables high-level reasoning in strategy games like chess and Go. 
  • Clarke-Doane, J., & Baras, D. (2021). Modal security. Philosophy and Phenomenological Research, 102(1), 162–183. https://doi.org/10.1111/phpr.12643
    • In a discussion that is relevant to a learning machine’s eventual capacity to approximate self-interruption, hesitation, or “doubt” regarding its own conclusions, this paper critically addresses six objections to the principle of Modal Security. Supposing that a belief “that P” is initially justified, the authors ask, how can new evidence defeat that justification? One way is simple: by being evidence that P is false (rebutting evidence). However, epistemologists commonly recognize a second type of “undercutting” or “undermining” defeater. How could evidence defeat the justification of the belief “that P” without being evidence that P is false? Intuitively, it must show that there is some important epistemic feature that the belief “that P” lacks, such as the absence of a clear explanation for a causal relationship.
  • Coeckelbergh, M. (2012).* Growing moral relations: Critique of moral status ascription. Palgrave Macmillan.
    • Considering the fundamental question of moral status ascription, i.e., who or what is morally significant, this book confronts the insufficiency of the properties approach. The properties approach draws hard lines between what does, or does not, possess certain properties (e.g., rationality, speech, sentience) that qualify an entity for moral status. Recognizing a current paradigm shift in moral thinking, the author presents an original philosophical approach to a relational, phenomenological, and transcendental reconsideration of moral status that observes how moral status is not a fixed state of being as much as a status that actively comes to be within ongoing interactions between entities. 
  • Collier, J. (2008). Simulating autonomous anticipation: The importance of Dubois’ conjecture. BioSystems, 91, 346–354. https://doi.org/10.1016/j.biosystems.2007.05.011
    • Drawing from philosophy of biology, this paper presents the temporal category of anticipation that is shared by both autonomous and living systems. Anticipation allows a system to adapt to external or internal conditions that have not yet materialized. Stating that autonomous systems self-regulate to increase their functionality, and living systems self-regulate to increase their own viability, the author asserts that increasingly strong conditions of anticipation, autonomy, and viability can offer insight into progressively stronger classes of autonomy. Such insight, the author argues, could have consequences for the accurate simulation of living systems.
  • Conitzer, V. (2016). Philosophy in the face of artificial intelligence. arXiv:1605.06048v1 [cs.AI]
    • This paper defends the use of a philosophical lens in the field of artificial intelligence. Recognizing that AI research labs at universities are typically housed in computer science, not philosophy, departments, and that most of the technical progress on AI is reported at scientific conferences, this paper asserts that the philosophical lens is useful to ground the study of AI in the context of interaction between humans and machine intelligence. Philosophy also offers a qualitatively alternative timeframe for the AI ethics question. Instead of perpetually working to fix observed errors post hoc, a modal approach enables anticipatory questions of what benefits and harms could be possible, and what would be necessary to achieve AI’s optimal role in the human setting.
  • Dennett, D. C. (2017).* Brainstorms: Philosophical essays on mind and psychology. MIT Press.
    • In a collection of essays that approach points of intersection between fields of philosophy of mind, cognitive psychology, and artificial intelligence, the author weaves an interdisciplinary set of approaches addressing abstraction, concreteness, and practical solution application. Within investigations that illustrate how each of these three fields is enriched by the other two, the author examines how assumptions regarding consciousness might obscure insightfully rich similarities between human and artificial intelligences.
  • Feenberg, A. (1991). Critical theory of technology. Oxford University Press.
    • Bringing a varied literature of critical theory and democratic/socialist philosophy to questions of human-machine interaction, this book surveys such concepts as alienation, ambivalence, instrumentalization, civilization change, capitalist hegemony and workers’ control. The author constructs a critical ideology that draws from Habermas, Foucault, Lukacs, Marcuse, Hegel, and Marx. This ideology points towards the need for democratization of the technology industry rather than the elevation of institutions of exclusive control.
  • Ganascia, J-G. (2010). Epistemology of AI revisited in the light of the philosophy of information. Knowledge Technology & Policy, 23, 57–73. https://doi.org/10.1007/s12130-010-9101-0
    • This paper considers the epistemology of artificial intelligence in light of the opposition between the “sciences of nature” and the “sciences of culture,” as introduced by German neo-Kantian philosophers. The author demonstrates how this epistemological view illuminates many contemporary applications of artificial intelligence. This paper situates these perspectives in the context of philosophy of information, emphasizing the role played in artificial intelligence by the notions of context and abstraction level.
  • Gunkel, D. J. (2012).* The machine question: Critical perspectives on AI, robots, and ethics. MIT Press.
    • The Machine Question is an investigation into the assignment of moral responsibilities and rights to intelligent and autonomous machines of our own making. The book asks whether and to what extent such machines can be considered to have legitimate moral responsibilities and any legitimate claim to moral consideration.
  • Hall, J. S. (2001, July 5). Ethics for machines. KurzweilAI.net. http://www.kurzweilai.net/ethics-for-machines
    • Offering a brief historical and topological survey of the ethics field, nanotechnologist J. Storrs Hall considers human moral duties to machines and machine moral duties to humans. The author states that these questions are of current significance due to the possibility of creating an advanced intelligence that could exceed some, or even many, human capabilities. The author approaches the related hypothesis that more advanced machines could also, in theory, be “superethical,” or more ethical, than their human interlocutors.
  • Heidegger, M. (1977). The question concerning technology (W. Lovitt, Trans.). Harper & Row. (Original work published 1954).
    • Heidegger’s text addresses the creation or making of objects of human use. In the case of the craftsperson who makes an object, there is a bringing-forth of an object’s essential purpose; in this instance, the human and object are said to participate in a co-creative setting. However, in the case of technology, the human “sets” a system upon nature, in a challenging-forth of a new sort of object from a process controlled by the human alone. This conceptual putting-upon, the author states, introduces efficiency and maximization as components of usefulness and instills a changed relation between the human and the object.   
  • Hooker, J. & Kim, T. W. (2019). Truly autonomous machines are ethical. The AI Magazine, 40(4), 66–73. https://doi.org/10.1609/aimag.v40i4.2863
    • Developing upon conceptions of autonomy drawn from philosophical literature, this article questions whether AI ethics should be conceived of solely in terms of external constraint. Approaching ethics alternatively as an internal constraint on autonomy, this article provides a counterargument to conventional warnings against machine intelligences being granted powers of independent choice. Acknowledging that giving machines unchecked independence is a source of risk, the authors discuss approaches from philosophy that distinguish autonomy as a necessary and desirable capacity for ethical choice. Within a setting of internally governed independence, the article suggests that a truly autonomous machine must also be an ethical machine.
  • Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8(4), 195–204. https://doi.org/10.1007/s10676-006-9111-5
    • This paper situates computer systems as moral entities but not as moral agents. Identifying the five conditions of the traditional account of moral agency known to contemporary action theory, the paper states that computer system behavior meets four of the five. Computer systems do not exhibit evidence of the key remaining condition: an internal mental state from which the free decision required for agency could emerge. However, the author states, such systems do exhibit intentionality, and actions taken upon preexisting intention introduce some form of moral status. Further, computer systems participate with humans in moral decisions and affect the methods and consequences of the moral decisions humans make. The paper argues that computer systems are not moral agents but do act as moral entities in a way that distinguishes them from natural objects that act only from necessity, and it therefore proposes an alternative category of moral status for computer systems.
  • Korb, K. B. (2004). Introduction: Machine learning as philosophy of science. Minds and Machines, 14, 433–440. https://doi.org/10.1023/B:MIND.0000045986.90956.7f
    • Examining the fields of machine learning and philosophy of science in terms of three categories (methodology, theory, and inductive simplicity), this paper proposes that the two fields will coalesce if the human context of scientific reasoning can be represented algorithmically. Asking whether scientists’ inductive strategies could be implemented on universal machines, the paper notes that earlier traditions of logicism have since given way to more active research agendas, for example those using artificial neural networks, genetic algorithms, and probabilistic reasoning systems. As these methods seek to implement inductive inference in complex environments of uncertain information, the author states that this shift recognizes the aim of producing an autonomous artificial agent that can cope with an a priori unknown world.
  • Lappin, S., & Shieber, S. M. (2007). Machine learning theory and practice as a source of insight into universal grammar. Journal of Linguistics, 43, 393–427.   https://doi.org/10.1017/S0022226707004628
    • This paper examines whether machine learning approaches to natural language processing developed in engineering-oriented computational linguistics could provide scientific insights into the nature of human language. The authors state that while it is uncontroversial that the learning of a natural language (or of anything else) requires assumptions concerning the structure of the phenomena being acquired, machine learning can have a more specific role in demonstrating the viability of particular language models as learning mechanisms. To the extent that the bias of a successful model is defined by a comparatively weak set of language-specific conditions, the authors state, task-general machine learning methods will be relied upon to explain the possibility of acquiring linguistic knowledge.
  • Lin, P., et al. (Eds.). (2012).* Robot ethics: The ethical and social implications of robotics. MIT Press.
    • This book collects 22 chapters contributed by noted researchers and theorists across a number of disciplines. Considering aspects of robots in social usage, the chapters are categorized into six thematic sections: Design and Programming, Military, Law, Psychology and Sex, Medicine and Care, and Rights and Ethics. As the function of this volume is to initiate dialogue rather than deliver already-decided dogma, the authors focus upon the complex questions that both real and hypothetical settings will generate.
  • Lin, P., et al. (Eds.). (2017).* Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford University Press.
    • This edited volume, aimed at academic audiences, policymakers, and the broader public, presents a global and interdisciplinary collection of essays focused on emerging issues in the field of “robot ethics,” which studies the effects of robotics on ethics, law, and policy. The volume is organized into four parts: the first concerns moral and legal responsibility and questions that arise in programming under moral uncertainty; the second addresses anthropomorphizing design and related issues of trust and deception within human-robot interactions; the third concerns applications ranging from love to war; and the fourth speculates upon the possible implications and dangers of artificial beings that exhibit superhuman mental capacities.
  • Marcus, G. (2018). Innateness, AlphaZero, and artificial intelligence. arXiv preprint arXiv:1801.05667
    • Addressing the concept of innateness, this paper returns to a long-unresolved question: how much of the human mind is built-in, and how much of it is constructed by ongoing experience? The author considers a recent series of papers concerning AlphaGo and its successors; these papers claim that it is possible to train a system to superhuman level, without human examples or guidance, “starting tabula rasa.” The author argues that such claims are overstated. Stating that artificial intelligence would benefit from greater attention to questions of innateness, the author points to opportunities and drawbacks of both reductive and top-down approaches to discerning how much innateness might be required for AI learning.
  • Mittelstadt, B., et al. (2019). Explaining explanations in AI. In FAT ’19: Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 279–288). Association for Computing Machinery. https://doi.org/10.1145/3287560.3287574
    • This paper approaches the simplified models that are built to approximate and predict the decisions of a complex system. The authors focus upon the distinctions between these models and the explanations offered in philosophy, law, cognitive science, and the social sciences, arguing that simplified approximations of complex decision-making functions are generally more like scientific models than the “everyday” explanations encountered in fields such as philosophy or cognitive science. If this comparison holds, the authors claim, such approximations could yield locally reliable but globally misleading explanations of model functionality. The authors recommend explanations that fit three criteria of accessibility: explanations must be contrastive, selective, and socially interactive. (A minimal code sketch of such a simplified approximating model follows below.)
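To make concrete the kind of “simplified model” the paper examines, here is a minimal, illustrative sketch of a global surrogate: an interpretable decision tree fitted to mimic a black-box classifier’s predictions, using scikit-learn. This is not the authors’ own method; the synthetic dataset and model choices are invented stand-ins. Note that the surrogate is trained on the black box’s outputs rather than the true labels, which is precisely why it can appear faithful while misleading globally.

```python
# Illustrative global surrogate: approximate a black-box classifier's
# decisions with a small, human-readable decision tree. A toy sketch of
# the kind of "simplified model" the paper discusses, not the authors'
# own method; the synthetic dataset below is a stand-in.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

# The "complex system" whose decisions we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate models the model, not the world: it is fit to the black
# box's *predictions*, which is why it can be locally faithful yet
# globally misleading, as the paper warns.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = surrogate.score(X, black_box.predict(X))  # agreement with black box
print(f"Surrogate fidelity to the black box: {fidelity:.2%}")
print(export_text(surrogate))  # the human-readable "explanation"
```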
  • Powers, T. M. (2017). Philosophy and computing: Essays in epistemology, philosophy of mind, logic, and ethics. Springer International Publishing AG.
    • This volume features papers from CEPE-IACAP 2015, held June 22–25, 2015, at the University of Delaware. The organizing themes of the conference included theoretical topics at the intersection of computing and philosophy. The assembled essays explore current issues in epistemology, philosophy of mind, logic, and philosophy of science, as well as normative topics on matters of ethical, social, economic, and political import. All of the essays view their subject matter through a lens of computation; the contributors assess the ways that computation is expected to change philosophical inquiry and vice versa.
  • Reeves, B., & Nass, C. I. (1996).* The media equation: How people treat computers, television, and new media like real people and places. Cambridge University Press.
    • The Media Equation presents the results of numerous psychological studies which conclude that people treat computers, television, and new media as real people and places. One conclusion of these studies is that the human brain has not evolved quickly enough to assimilate twentieth-century technology. The book details how this knowledge can help us better design and evaluate media technologies, including computer and Internet software, television entertainment, news, advertising, and multimedia.
  • Scalable Cooperation at MIT Media Lab. (n.d.). Moral machine. http://moralmachine.mit.edu
    • Moral Machine is a platform for gathering human perspectives on moral decisions made by machine intelligence, such as self-driving cars. It shows the user moral dilemmas in which a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, the user judges which outcome they think is more acceptable and can then see how their responses compare with those of other people.
  • Schmidhuber, J. (2009). Ultimate cognition à la Gödel. Cognitive Computation, 1, 177–193. https://doi.org/10.1007/s12559-009-9014-y
    • This paper describes an agent-controlling program that speaks about itself and is able and ready to rewrite itself in an arbitrary fashion once it has found proof that the self-rewrite is useful according to a user-defined utility function. Noting that the first 50 years of attempts at “general AI” and “general cognitive computation” were dominated by heuristic approaches, the author distinguishes heuristic from theorem-based approaches, the latter having more recently produced the first mathematically sound, asymptotically optimal, universal problem solvers. In this setting, the paper examines how to overcome potential problems with self-reference and how to handle the delicate online generation of proofs that both talk about and affect the currently running proof generator itself. (A toy sketch of the accept-a-rewrite-only-if-provably-useful loop follows below.)
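A heavily simplified, illustrative sketch of the core loop: a rewrite is adopted only once it is established to improve a user-defined utility. A real Gödel machine searches for a formal *proof* of improvement; this toy substitutes exhaustive evaluation on a finite task set, which stands in for a proof only in this artificial setting. All names and the doubling task are invented for illustration.

```python
# Toy sketch of the Gödel-machine idea: accept a self-rewrite only once
# it is established that the rewrite improves a user-defined utility.
# Exhaustive checking over a finite task set stands in for the formal
# proof search of the real construction.
from typing import Callable

Solver = Callable[[int], int]
TASKS = range(100)

def utility(solver: Solver) -> float:
    # User-defined utility: negative total error on a toy task
    # (target behavior: double the input).
    return -sum(abs(solver(t) - 2 * t) for t in TASKS)

def provably_better(new: Solver, old: Solver) -> bool:
    # Stand-in for the proof searcher: on a finite task set, checking
    # every case plays the role of a proof of improvement.
    return utility(new) > utility(old)

current: Solver = lambda t: t  # initial, imperfect solver

for candidate in [lambda t: t + 1, lambda t: 2 * t, lambda t: 3 * t]:
    if provably_better(candidate, current):
        current = candidate  # self-rewrite accepted

print(utility(current))  # 0: only the genuinely better rewrite was kept
```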
  • Searle, J. R. (1984).* Minds, brains, and science. Harvard University Press.
    • Within the setting of the literature of philosophy of mind, the author asserts that the traditional intuitive view of humans as conscious, free, rational agents does not contradict a universe that science conveys in terms of “mindless physical particles.” Rather, the truths of common sense and the truths of science need not be artificially divided. Rejecting the illusion of their irreconcilability can, the author asserts, have notable implications for how artificial intelligences and machine collaborators are conceived of and created. 
  • Turner, J. (2018).* Robot rules: Regulating artificial intelligence. Springer.
    • Robot Rules argues that AI is unlike any previous technology, owing to its ability to take decisions independently and unpredictably. This gives rise to three issues: responsibility (who is liable if AI causes harm); rights (the disputed moral and pragmatic grounds for granting AI legal personality); and the ethics surrounding AI decision-making. The book suggests that addressing these questions requires developing new institutions and regulations on a cross-industry and international level.
  • Tzafestas, S. G. (2016).* Roboethics: A navigating overview. Springer.
    • Per the title’s claim of a navigating overview, this book offers what the author calls “a spherical picture” of the evolving field of roboethics. Initial chapters outline fundamental concepts and theories of ethics and applied ethics alongside fundamental concepts in the field of artificial intelligence. Presenting a robot typology organized first by kinematic structure and locomotion and then by the artificial intelligence tools that give robots their intelligence capabilities, the book proceeds to chapters addressing robot applications (e.g., in medicine, society, space, and the military) for which ethical issues must be addressed. A later chapter provides a conceptual study of the “brain-like” capabilities of “mental robots,” discussing the features of more specialized processes of learning and attention.
  • University of Oxford Podcasts. (n.d.). Ethics in AI. https://podcasts.ox.ac.uk/series/ethics-ai
    • This set of 24 podcasts, recorded between November 2019 and December 2020 as the “Oxford University Institute for Ethics ‘Ethics in AI’ seminars,” seeks to open a broad, interdisciplinary conversation between the University’s researchers and students across several interrelated disciplines, including Philosophy, Computer Science, Engineering, Social Science, and Medicine. Topics include privacy, information security, appropriate rules of automated behavior, algorithmic bias, transparency, and the wider threats that AI could pose to society.
  • Wallach, W., & Allen, C. (2009).* Moral machines: Teaching robots right from wrong. Oxford University Press.
    • The project of developing an artificial moral agent offers an extraordinary lens upon human moral decision-making. Approaching both distinctions and integrations of top-down and bottom-up design approaches, the authors acknowledge that the context involved in real-time moral decisions, as well as the complex intuitions people have about right and wrong, make the prospect of reducing ethics to a logically consistent principle or set of programmable laws at best suspect and at worst irrelevant. However, the authors state, the project offers opportunities for experimentation with, and questioning of, the various integrations of top-down and bottom-up approaches that comprise moral decision-making.
  • Wallach, W., & Asaro, P. (Eds.). (2017).* Machine ethics and robot ethics. Routledge.
    • Machine Ethics and Robot Ethics addresses the ethical challenges posed by the rapid development, and widespread everyday use, of advancing technologies such as artificial intelligence, robotics, and machine learning. This collection of essays focuses on the control and governance of computational systems; the exploration of ethical and moral theories using software and robots as laboratories or simulations; the inquiry into the necessary requirements for moral agency and the basis and boundaries of rights; and questions of how best to design systems that are both useful and morally sound. Collectively, the essays ask what practical ethical and legal issues will arise from the development of robots over the next twenty years and how best to address them.
  • Wiener, N. (1988).* The human use of human beings: Cybernetics and society. Da Capo Press.
    • The Human Use of Human Beings examines the implications of cybernetics, the study of the relationship between computers and the human nervous system, for education, law, language, science, and technology. The book outlines Wiener’s complex vision of scenarios in which machines would release people from relentless and repetitive drudgery in order to achieve more creative pursuits, as well as his realization of the dangers of dehumanization and displacement that this vision poses.

Chapter 29. The Complexity of Otherness: Anthropological Contributions to Robots and AI (Kathleen Richardson)⬆︎

  • Ali, S. (2019). “White crisis” and/as “existential risk,” or the entangled apocalypticism of artificial intelligence. Zygon: Journal of Religion and Science, 54(1), 207–224. https://doi.org/10.1111/zygo.12498
    • This paper presents a critique of Robert Geraci’s Apocalyptic artificial intelligence (AI) discourse. It explores “white crisis,” a modern racial phenomenon with religious origins, in relation to the existential risk associated with apocalyptic AI. Adopting a decolonial and critical race-theory viewpoint, the author argues that the rhetoric of white crisis and apocalyptic AI should be understood as part of a trajectory of domination that the author terms “algorithmic racism.”
  • Appadurai, A. (1986).* Introduction: Commodities and the politics of value. In A. Appadurai (Ed.), The social life of things: Commodities in cultural perspective. Cambridge University Press.
    • This book chapter argues that anthropologists should study ‘things’: instead of assuming that humans assign significance to things, anthropologists should consider how things take shape, acquire value, and move through space. The movement of things and commodities across different contexts sheds light on the social context they inhabit.  
  • Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.
    • Automation has the potential to deepen discrimination while appearing neutral and even benevolent when compared to the racism of a previous era. This book presents the concept of the “New Jim Code”: a range of discriminatory designs that encode inequity by explicitly amplifying racial hierarchies; by ignoring but thereby replicating social divisions; or by aiming to fix racial bias but ultimately doing quite the opposite. The book makes a case for race itself as a kind of technology, designed to sanctify social injustice in the architecture of everyday life.
  • Boellstorff, T. (2008).* Coming of age in Second Life: An anthropologist explores the virtually human. Princeton University Press.
    • One of the most famous digital ethnographies, this book shows how virtual worlds can change ideas about identity and society. Based on two years of fieldwork in Second Life, living among and observing its residents just as anthropologists have traditionally done to learn about cultures in the real world, this ethnography shows how anthropological methods can be applied to virtual sociality.
  • Cave, S. (2019).* Intelligence as ideology: Its history and future [Keynote Lecture]. Centre for Science and Policy Annual Conference. http://www.csap.cam.ac.uk/media/uploads/files/1/csap-conference-2019-stephen-cave-presentation.pdf
    • This keynote lecture problematizes the concept of intelligence, showing how it is not only impossible to measure reliably but also – as the measure of what it means to be human – has become associated with evolutionary paradigms, colonial rule, and the ‘survival of the fittest.’ Intelligence, importantly, works to justify elite domination over others: the poor, women, people with disabilities, and so on.
  • Colloc, J. (2016). Ethics of autonomous information systems towards an artificial thinking. Les Cahiers du Numérique, 12(1-2), 187–211.
    • This article, originally published in French, focuses on how to build autonomous machines using artificial intelligence (AI). The article compares the process of ethical decision-making on the part of humans with the potential cognitive capabilities of these machines. The article then considers the ethical implications of autonomous machines, specifically how such systems affect humanity. 
  • Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature, 538(7625), 311–313. https://doi.org/10.1038/538311a
    • This article argues that there is a blind spot in AI research, namely that in spite of the rapid and widespread deployment of AI, agreed-upon methods to assess the sustained effects of such applications on human populations are lacking. The authors examine three dominant modes used to address the ethical and social risks of AI: compliance, values in design, and thought experiments. The authors argue for a fourth approach: a practical and broadly applicable social-systems analysis which thinks through all the possible effects of AI systems on all parties.
  • Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer.
    • This book emphasizes the ethical implications of artificial intelligence (AI) technologies and systems. It addresses these issues through discussions of the design, construction, use, and integrity of these technologies and the teams behind them. It critiques the moral decisions of these autonomous systems, specifically the methodologies behind their creation and the moral, legal, and ethical values they uphold.
  • Dourish, P. (2016). Algorithms and their Others: Algorithmic culture in context. Big Data & Society, 3(2). https://doi.org/10.1177%2F2053951716665128
    • Using Niklaus Wirth’s 1975 formulation that “algorithms + data structures = programs” as a point of departure, this paper examines how an algorithmic lens shapes the way in which we might inquire into contemporary digital culture. Algorithms, once obscure objects of technical art, are integral to artificial intelligence today. The paper explores what it means to adopt the algorithm as an object of analytic attention and what such attention reveals.
  • Forsythe, D. (2002).* Studying those who study us: An anthropologist in the world of artificial intelligence. Stanford University Press.
    • This essay collection presents an anthropological study of artificial intelligence and informatics, asking how expert-systems designers imagine users and, in turn, how humans interact with computers. It analyzes the laboratory as a fictive kin group that reproduces gender asymmetries, offering a reflexive ethnographic perspective on the cultural mechanisms that support the persistent male domination of engineering.
  • Geertz, C. (1973).* Thick description: Toward an interpretative theory of culture. In The interpretation of cultures: Selected essays (pp. 3–32). Basic Books.
    • This essay articulates the central method of interpretive anthropology, explaining how ethnographers write and think about cultural situations. Contrasting ‘thick’ description – which includes cultural background and layered meanings – with ‘thin’ or merely factual accounts, Geertz shows how ethnographers bring in context to explain how behavior becomes meaningful. 
  • Haraway, D. (1991).* A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In Simians, cyborgs and women: The reinvention of nature (pp. 149–181). Routledge.
    • This essay articulates a feminist theory of the cyborg: a half-human, half-machine hybrid. The figure of the cyborg dissolves the boundaries between nature and artifice, animal and human, and physical and non-physical – Haraway takes this up as an opportunity for feminists to think beyond the duality of identity politics and form new political alliances.
  • Helmreich, S. (2000).* Silicon second nature: Culturing artificial life in a digital world. University of California Press.
    • This book presents an ethnographic study of the people and programs connected with an unusual hybrid of computer science and biology. Through detailed dissections of artifacts in the context of artificial life research, Helmreich shows that the scientists working on this see themselves as masculine gods of their cyberspace creations, bringing longstanding mythological and religious tropes concerning gender, kinship, and race into their research domain.
  • Hersh, M. A. (2016). Engineers and the other: The role of narrative ethics. AI & Society, 31(3), 327–345. https://doi.org/10.1007/s00146-015-0594-7
    • This article uses two case studies to argue for the importance of macroethics. The author highlights that acknowledging cultural diversity is crucial in advocating for collective responsibility for highly unethical artificial intelligence (AI) technologies, arguing that ethical behaviour should not merely be seen as an individual effort or responsibility but should instead be considered a collective action.
  • Hicks, M. (2017).* Programmed inequality: How Britain discarded women technologists and lost its edge in computing. MIT Press.
    • Drawing on government files, personal interviews, and the archives of major British computer companies, this book exposes the myth of technical meritocracy by tracing how computer labor was masculinized between the 1940s and today. Women were central to the growth of high technology from World War II to the 1960s, when computing experienced a gender flip; this development caused a labor shortage and severely impeded both the growth of the British computer industry and the success of the nation as a whole.
  • Jaume-Palasi, L. (2019). Why we are failing to understand the societal impact of artificial intelligence. Social Research, 86(2), 477–498.
    • This article aims to address the societal impact of algorithmic systems by considering how these technologies represent the ideas and norms of society. The author argues that artificial intelligence (AI) does not understand or embody the individual, which risks increasing stereotyping and discrimination through the use of AI. The author argues that viewing AI as a type of societal infrastructure is needed in order to adequately understand its impact.
  • Kelty, C. (2005).* Geeks, social imaginaries, and recursive publics. Cultural Anthropology, 20(2), 185–214.
    • Based on fieldwork conducted in three countries, this article argues that the mode of association specific to “geeks” (hackers, lawyers, activists, and IT entrepreneurs) on the Internet is that of a “recursive public sphere” that is constituted by a shared imaginary of the technical and legal conditions of possibility for their own association. Geeks imagine their social existence and relations as much through technical practices (hacking, networking, and code writing) as through discursive argument (rights, identities, and relations), rendering the “right to tinker” with software a form of free speech.
  • Latour, B. (1993).* We have never been modern (C. Porter, Trans.). Harvard University Press.
    • This philosophical text defines modernity in terms of the separation between nature and society, human and thing, reality and artifice. Latour shows that this separation is theoretically powerful in science but does not play out in practice: an anthropological look at scientific practice reveals that everything is always already hybrid – reality and artifice cannot be separated. This book argues that the hybridity of nature and culture is central to the success of technoscientific practices.
  • Liberati, N., & Nagataki, S. (2018). Vulnerability under the gaze of robots: Relations among humans and robots. AI & Society, 34(2), 333–342.
    • The authors of this paper argue that AI agents designed to replicate human-like activities are likely to fail if they do not possess a human-like body. The paper additionally examines the vulnerability of robots, arguing that interactions between humans and robots without similar body forms point to an unequal cohabitation between the two.
  • Miller, D., & Horst, H. (2012). The digital and the human: A prospectus for digital anthropology. In H. Horst & D. Miller (Eds.), Digital Anthropology (pp. 3–38). Bloomsbury Publishing.
    • This chapter articulates a vision for digital anthropology, defining anthropology as a discipline occupied with understanding what it is to be human and how humanity manifests differently across cultures, and the digital as everything that can be reduced to binary code. Miller and Horst argue for ethnographic work that emphasizes the continuity between the digital and the non-digital, the materiality of the digital, and the ultimately deeply local cultural ways in which technologies are received.
  • Noble, S. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
    • This book uses algorithmic search engines to show how data discrimination works. The combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color and especially Black women.
  • Richardson, K. (2015).* An anthropology of robots and AI: Annihilation anxiety and machines. Routledge.
    • This ethnography of robot-making in labs at the Massachusetts Institute of Technology (MIT) examines the cultural ideas that go into the making of robots, and the role of fiction in co-constructing the technological practices of the robotic scientists. The book charts the move away from the “worker” robot of the 1920s to the “social” one of the 2000s, using anthropological theories to describe how robots are reimagined as companions, friends and therapeutic agents.
  • Robertson, J. (2017).* Robo sapiens Japanicus: Robots, gender, family, and the Japanese nation. University of California Press.
    • An ethnography and sociocultural history of governmental and academic discourse of human-robot relations in Japan, this book explores how actual robots – humanoids, androids, and animaloids – are “imagineered” in ways that reinforce the conventional sex/gender system, the political-economic status quo, and a conception of the “normal” body. Asking whether “civil rights” should be granted to robots, Robertson interrogates the notion of human exceptionalism.
  • Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2). https://doi.org/10.1177%2F2053951717738104
    • This article articulates how algorithms might be approached ethnographically: as heterogeneous and diffuse sociotechnical systems, rather than rigidly constrained and procedural formulas. This involves thinking of algorithms not “in” culture, but “as” culture: part of broad patterns of meaning and practice that can be engaged with empirically. Practical tactics for the ethnographer then do not depend on pinning down a singular “algorithm” or achieving “access,” but rather work from the partial and mobile position of an outsider.
  • Shibuya, K. (2020). Digital transformation of identity in the age of artificial intelligence. Springer.
    • This book investigates the digital transformation of identity in the age of artificial intelligence (AI). It draws on a broad range of disciplines, including ethics, philosophy, and computer science, to examine the nature of human identity as contrasted with AI technologies.
  • Suchman, L. (2007).* Human-machine reconfigurations: Plans and situated actions. Cambridge University Press.
    • This book shows that debates over the status of human-like machines – whether they are ‘alive’ or not, different from the human or not – are improved when the question shifts to how humans and machines are enacted as similar or different in practice, and with what consequences. Calling for a move away from essentialist divides, this book argues for research aimed at tracing the differences within specific sociomaterial arrangements.

Chapter 30. Calculative Composition: The Ethics of Automating Design (Shannon Mattern)⬆︎

  • Allam, Z., & Dhunny, Z. A. (2019). On big data, artificial intelligence, and smart cities. Cities, 89, 80–91. https://doi.org/10.1016/j.cities.2019.01.032
    • This paper discusses various ways artificial intelligence can contribute to urban development to improve sustainability and liveability while supporting other dimensions of city planning including culture, metabolism, and governance. This paper is specifically meant to target policymakers, data scientists, and engineers who are interested in integrating artificial intelligence and big data into smart city designs. 
  • Bratton, B. (2015).* Lecture on A.I. and cities: Platform design, algorithmic perception, and urban geopolitics. Benno Premsela Lecture Series. https://bennopremselalezing2015.hetnieuweinstituut.nl/en/lecture-ai-and-cities-platform-design-algorithmic-perception-and-urban-geopolitics
    • Bratton argues that the project of creating a smart city will be futile in its attempt to create futuristic living conditions for humans, and that such cities may instead become habitats for future insects. He grounds this thesis in part in the example of the failed Sanzhi Pod City in Taipei, which was overtaken by several species of orchid mantis.
  • Bricout, J., et al. (2021). Exploring the smart future of participation: Community, inclusivity, and people with disabilities. International Journal of E-Planning Research (IJEPR), 10(2), 94-108. https://doi.org/10.4018/IJEPR.20210401.oa8
    • This paper explores the use of technology to promote civic engagement for people with disabilities, specifically its potential for the future of smart cities. The authors examine the challenges of virtual engagement in civic activities and propose a framework for better participation of people with disabilities in future smart communities.
  • Carpo, M. (2017).* The second digital turn: Design beyond intelligence. MIT Press.
    • In this book, Carpo argues that tools from the first digital turn in architecture, which promoted significant stylistic developments such as the use of curving lines and surfaces, have now brought about a second digital turn that impacts the way designers develop ideas. Machine learning has been employed to create extremely complex designs that humans could not have conceived on their own.
  • Carta, S. (2019). Big data, code and the discrete city: Shaping public realms. Routledge.
    • This book provides an overview of the impact of digital technologies on public space and on the actors involved in designing it, including policymakers and individual citizens.
  • de Waal, M., & Dignum, M. (2017). The citizen in the smart city. How the smart city could transform citizenship. it – Information Technology, 59(6), 263–273. https://doi.org/10.1515/itit-2017-0012
    • This article examines the relationship between smart cities and citizenship, introducing three potential smart city visions. First, the Control Room envisions the city as a collection of infrastructures and services. Second, the Creative City focuses on local and regional innovation. Third, the Smart Citizens city deals with the potential of a smart city that has an active political and civil community.
  • Foth, M. (2017). The next urban paradigm: Cohabitation in the smart city. it – Information Technology, 59(6), 259–262. https://doi.org/10.1515/itit-2017-0034
    • This introductory article provides an overview of the special issue of IT-Information Technology on Urban Informatics and Smart Cities. 
  • Gunkel, D. J. (2018). Hacking cyberspace. Routledge.
    • Gunkel argues that metaphors used to describe new technologies actually inform how those technologies are created. Gunkel develops a view that considers how designers employ discourse in their technological development. 
  • Hebron, P. (2017, April 26).* Rethinking design tools in the age of machine learning. Medium. https://medium.com/artists-and-machine-intelligence/rethinking-design-tools-in-the-age-of-machine-learning-369f3f07ab6c
    • Hebron examines the widespread availability of technological creative tools that allow an individual to create on a computer or mobile phone. He argues that machine learning should aim to make creative processes easier for human actors, but not do any creative work itself, in order to preserve human originality.
  • Johnson, P. A., et al. (2020). Type, tweet, tap, and pass: How smart city technology is creating a transactional citizen. Government Information Quarterly, 37(1), 101414. https://doi.org/10.1016/j.giq.2019.101414
    • This article asks whether the use of technology acts as a medium for a transactional relationship between governments and citizens. The authors highlight four models – type, tweet, tap, and pass – using relevant literature and examples to flesh out the concept. They propose that governments consider the impact of a transactional relationship before they implement smart design technology.
  • Karan, E., & Asadi, S. (2019). Intelligent designer: A computational approach to automating design of windows in buildings. Automation in Construction, 102, 160–169. https://doi.org/10.1016/j.autcon.2019.02.019
    • The process of designing buildings is becoming increasingly computerized. This paper describes a new system, the Intelligent Designer, that is capable of understanding and learning clients’ expectations and generating valid structural designs. The authors demonstrate this approach through a window-design experiment.
  • Liao, Q., et al. (2020). Questioning the AI: Informing design practices for explainable AI user experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-15). Association for Computing Machinery.  https://doi.org/10.1145/3313831.3376590
    • This paper addresses the need to incorporate explainability features into AI systems. The authors conduct interviews with 20 UX designers to understand the gap between users’ expectations and the reality of how AI systems are designed. The authors also discuss how AI systems can be made more user-centered by producing systematically generated explanations.
  • Liu, L., et al. (2019). Toward AI fashion design: An attribute-GAN model for clothing match. Neurocomputing, 341, 156–167. https://doi.org/10.1016/j.neucom.2019.03.011
    • This paper presents a new generative adversarial network (GAN) that generates clothing matches for fashion design based on clothing attributes such as color, texture, and shape. The authors also contribute a manually collected database of clothing attributes, on which the GAN was trained, and provide experimental results comparing the effectiveness of their approach with other state-of-the-art methods. (A minimal sketch of the underlying attribute-conditioned GAN mechanism follows below.)
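For readers unfamiliar with the mechanism, the following is a minimal attribute-conditioned GAN skeleton in PyTorch. It illustrates the general adversarial setup such papers build on, not the authors’ attribute-GAN architecture; the dimensions, random stand-in data, and training loop are invented for illustration.

```python
# Minimal attribute-conditioned GAN skeleton (PyTorch). Illustrative only:
# the dimensions and the random "item" data are invented stand-ins, not
# the paper's model or dataset.
import torch
import torch.nn as nn

Z_DIM, ATTR_DIM, ITEM_DIM = 16, 8, 32  # noise, attribute, item-embedding sizes

# Generator: noise + attributes (color/texture/shape) -> matching-item embedding.
G = nn.Sequential(
    nn.Linear(Z_DIM + ATTR_DIM, 64), nn.ReLU(),
    nn.Linear(64, ITEM_DIM),
)
# Discriminator: item embedding + attributes -> real/fake logit.
D = nn.Sequential(
    nn.Linear(ITEM_DIM + ATTR_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    attrs = torch.rand(64, ATTR_DIM)   # attribute vectors (stand-in)
    real = torch.rand(64, ITEM_DIM)    # stand-in for real item embeddings
    z = torch.randn(64, Z_DIM)
    fake = G(torch.cat([z, attrs], dim=1))

    # Discriminator step: label real attribute-item pairs 1, generated pairs 0.
    d_loss = (bce(D(torch.cat([real, attrs], 1)), torch.ones(64, 1))
              + bce(D(torch.cat([fake.detach(), attrs], 1)), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator on attribute-conditioned samples.
    g_loss = bce(D(torch.cat([fake, attrs], 1)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```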
  • Luce, L. (2019).* Artificial intelligence for fashion: How AI is revolutionizing the fashion industry. Apress.
    • This reference work provides a basic outline of how AI is employed in the fashion industry, highlighting key terms and concepts. It provides a guide for designers, managers, and executives on how AI is impacting the field of fashion.  
  • Mattern, S. (2017).* A city is not a computer. Places Journal. https://placesjournal.org/article/a-city-is-not-a-computer/
    • In this article, Mattern critiques the totalizing idea of cities as computers employed by technology companies, arguing that this practice ignores the information provided by urban designers and scholars who have investigated how cities work for decades. 
  • Mattern, S. (2018).* Databodies in codespace. Places Journal. https://placesjournal.org/article/databodies-in-codespace/
    • Mattern discusses the attempts of technology companies, through projects such as the Human Project, to quantify the human condition. She criticizes this goal in light of the methodological and ethical risks of allowing private companies access to the amount of personal data required by these projects.
  • Negroponte, N. (1973).* The architecture machine: Toward a more human environment. MIT Press.
    • This book provides a forward-looking and optimistic account of what will occur when genuine human-machine dialogue is achieved and humans are able to work together with AI towards mutual goals. Negroponte uses systems-theory philosophy to examine issues that can arise in these relationships.
  • O’Donnell, K. M. (2018, March 2).* Embracing artificial intelligence in architecture. AIA. https://www.aia.org/articles/178511-embracing-artificial-intelligence-in-archit
    • O’Donnell argues that architects should learn about data and its application in order to work towards the incorporation of AI in their field, as development in this area will strengthen the profession.
  • Ranchordás, S. (2020). Nudging citizens through technology in smart cities. International Review of Law, Computers & Technology, 34(3), 254–276. https://doi.org/10.1080/13600869.2019.1590928
    • Several previous works have shown that systematically nudging citizens can improve road safety and reduce night crime, and, when incorporated into smart cities, has the potential to further promote positive civic engagement and sustainability goals. However, these well-intended nudges also raise legal and ethical issues. This paper offers an interdisciplinary approach to analyzing the impact of collecting and using these data to influence the behaviour of those in smart cities. 
  • Retsin, G. (2019). Discrete: Reappraising the digital in architecture. John Wiley & Sons.
    • This book discusses the impact of two decades of digital experimentation in architecture, arguing that the digital focus on style and differentiation seems out of touch with a new generation of architects amid a global housing crisis. This book tracks a new body of work that uses digital tools to create discrete parts that can be used toward aims of open-ended and adaptable architecture. 
  • Ridell, S. (2019). Mediated bodily routines as infrastructure in the algorithmic city. Media Theory, 3(2), 27-62.
    • Ridell argues that there is a lack of development in the study of how bodies are mediated in the context of digital urban life. The article examines mediated bodily habits and routines, arguing that they are important to the infrastructure of a smart city. 
  • Sagredo-Olivenza, I., et al. (2017). Trained behavior trees: Programming by demonstration to support AI game designers. IEEE Transactions on Games11(1), 5-14. https://doi.org/10.1109/TG.2017.2771831
    • This paper introduces a new method for game designers to develop and test the behaviours of non-player characters in a video game using programming by demonstration and artificial intelligence. The authors present trained behaviour trees, which combine behaviour trees – a technique widely used in game AI – with recorded traces of character behaviour in different scenarios, allowing game designers to fine-tune the expected responses in each situation by demonstration. (A minimal sketch of the behaviour-tree structure itself follows below.)
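A behaviour tree is essentially a tree of control-flow nodes "ticked" each frame. The sketch below shows a minimal version of that underlying structure; the paper’s actual contribution (learning such trees from demonstration traces) is not implemented here, and the node types and guard-NPC scenario are illustrative inventions.

```python
# Minimal behaviour tree for a non-player character: the basic data
# structure trained behaviour trees build on. Illustrative sketch only.
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Runs children in order; fails on the first child that fails."""
    def __init__(self, *children): self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Tries children in order; succeeds on the first child that succeeds."""
    def __init__(self, *children): self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

class Condition:
    def __init__(self, test): self.test = test
    def tick(self, state): return SUCCESS if self.test(state) else FAILURE

class Action:
    def __init__(self, effect): self.effect = effect
    def tick(self, state): self.effect(state); return SUCCESS

# Guard NPC: attack if an enemy is visible, otherwise patrol.
tree = Selector(
    Sequence(Condition(lambda s: s["enemy_visible"]),
             Action(lambda s: s.update(doing="attack"))),
    Action(lambda s: s.update(doing="patrol")),
)

state = {"enemy_visible": False}
tree.tick(state)
print(state["doing"])  # patrol
```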
  • Sand, K. (2019). The transformation of fashion practice through Instagram. In International Conference on Fashion Communication: Between Tradition and Future Digital Developments (pp. 79–85). Springer.
    • This chapter uses a case study to investigate how social media platforms such as Instagram impact fashion practice, arguing that digital literacy skills are vital to success in the fashion industry. 
  • Steenson, M. W. (2017).* Architectural intelligence: How designers and architects created the digital landscape. MIT Press.
    • This book provides a historical overview of the overlap between the fields of architectural design and computer science.
  • Thomassey, S., & Zeng, X. (Eds.). (2018). Artificial intelligence for fashion industry in the big data era. Springer.
    • This book gives an overview of current issues in the fashion industry, such as the suitability of existing AI implementation. Each chapter gives an example of a data-driven AI application to all sectors of the fashion industry, including design, manufacturing, supply chains, and retail. 
  • Vetrov, Y. (2017, January 3).* Algorithm-driven design: How artificial intelligence is changing design. Smashing Magazine. https://www.smashingmagazine.com/2017/01/algorithm-driven-design-how-artificial-intelligence-changing-design/
    • Vetrov argues that designers should utilize artificial intelligence in order to maximize capabilities and allow themselves to prioritize tasks with ease. To do this, Vetrov recommends that designers support more digital platforms.
  • Viros Martin, A., & Selva, D. (2019). From design assistants to design peers: Turning Daphne into an AI companion for mission designers. In AIAA Scitech 2019 Forum. Aerospace Research Central. https://doi.org/10.2514/6.2019-0402
    • This paper describes an updated version of Daphne, a virtual assistant for architecting satellite systems, that can proactively: (1) inform users of new design spaces to explore, (2) diversify user searches, and (3) function as a live recommender system to help users modify designs. This paper describes the resulting changes to user interaction and workflow and provides a discussion on the use case scenarios that could best utilize these updates. 
  • Wang, Z. W. (2020). Real design practice, real design computation. International Journal of Architectural Computing. https://doi.org/10.1177/1478077120958165
    • This article presents several case studies in order to investigate the use of computational, design-oriented services in the architecture industry. It examines differing opinions on the use of computation in the field, describes the experience of one design firm, and draws out the implications of this case study for the industry. The purpose of the paper is to address the gap between the theoretical implications of computational design and the realities of the architecture business.
  • Yigitcanlar, T., et al. (2019). Can cities become smart without being sustainable? A systematic review of the literature. Sustainable Cities and Society, 45, 348–365. https://doi.org/10.1016/j.scs.2018.11.033
    • This article investigates whether smart city policy and sustainability outcomes are entwined by reviewing literature that asserts a limitation on the ability of smart cities to achieve sustainability. The authors argue that cities cannot be smart unless they are designed to be sustainable.
  • Zhang, G., et al. (2021). A cautionary tale about the impact of AI on human design teams. Design Studies, 72, 100990. https://doi.org/10.1016/j.destud.2021.100990
    • This paper explores the integration of AI technologies into human design teams. The authors found that AI boosted the initial performances of low-performing teams but decreased the performance of high-performing teams.

Chapter 31. AI and the Global South: Designing for Other Worlds (Chinmayi Arun)⬆︎

  • Ajunwa, I. (2020).* The paradox of automation as anti-bias intervention. Cardozo Law Review, 41(5), 1671-1742. 
    • This article’s central claim is that bias is introduced in the hiring process, in large part, due to an American legal tradition of deference to employers, especially the allowance of such nebulous hiring criteria as “cultural fit.” The article observes the lack of legal frameworks that take into account the emerging technological capabilities of hiring tools, which make it difficult to detect disparate impact, and argues for a rethinking of legal frameworks that takes into account both the liability of employers and that of the makers of algorithmic hiring systems who, as brokers, owe a fiduciary duty of care. The article also considers approaches outside employment law, such as establishing consumer legal protections for job applicants that would mandate their access to the dossier of information consulted by automated hiring systems in making employment decisions.
  • Birhane, A. (2020). Algorithmic colonization of Africa. Scripted, 17(2), 389–409. https://doi.org/10.2966/scrip.170220.389
    • In this article, Birhane compares the practices of large “Western” technology corporations with traditional colonialism, arguing that while early forms of colonialism depended on national governments, algorithmic colonization is driven by corporations. The violent forms of earlier colonialism have been replaced with technological solutionism, which threatens to undermine the development efforts of African countries.
  • Casilli, A. A. (2017). Digital labor studies go global: Toward a digital decolonial turn. International Journal of Communication, 11, 3934–3954. https://ijoc.org/index.php/ijoc/article/view/6349
    • This article argues that the global division of digital labour, where most online platforms workers are located in developing countries and most employers are situated in advanced economies, should not be equated with the term “colonialism,” which is meant to describe the efforts of colonial empires to dominate other peoples. The author argues that “coloniality,” a term that originated in Latin American decolonial thinking, describes the power relations remaining in post-colonial societies and is, therefore, more applicable to the current global division of digital labour.
  • Couldry, N., & Mejias, U. A. (2019).* Data colonialism: Rethinking big data’s relation to the contemporary subject. Television & New Media, 20(4), 336–349.
    • This article proposes that the process of data relations is best understood through the history of colonialism. Data relations, it argues, enact a new form of data colonialism, normalizing the exploitation of human beings through data, just as historic colonialism appropriated territory and resources and ruled subjects for profit. The article further argues that data colonialism paves the way for a new stage of capitalism whose outlines can only be glimpsed: the capitalization of life without limit.
  • Couldry, N., & Mejias, U. (2019). Making data colonialism liveable: How might data’s social order be regulated? Internet Policy Review8(2). https://doi.org/ 10.14763/2019.2.1411
    • This paper argues that while the modes, intensities, scales, and contexts of dispossession have changed, the underlying drive of today’s data colonialism remains the same: to acquire “territory” and resources from which economic value can be extracted by capital. The paper further asserts that injustices embedded in this system need to be made “liveable” through a new legal and regulatory order.
  • Crawford, K., & Joler, V. (2018). Anatomy of an AI system. https://anatomyof.ai/img/ai-anatomy-publication.pdf
    • In this online paper, the authors analyze the labour and natural resources necessary for the development of artificial intelligence using the Marxian dialectic of economic subject and object. They identify three moments in this process: the creation of devices to support AI technologies, the internet infrastructures that collect the data necessary for AI, and the disposal of these devices.
  • Crawford, K. (2021). The atlas of AI. Yale University Press.
    • In this book, Crawford argues that artificial intelligence is an extractivist technology. The author describes how AI requires vast amounts of natural resources and an extraordinary amount of labour, largely from workers in precarious conditions, to operate while harvesting data from millions of individuals worldwide. The book argues that, far from being an objective or neutral technology, AI currently serves primarily the interests of big corporations.
  • Dirlik, A. (2007).* Global South: Predicament and promise. The Global South, 1(1), 12–23.
    • This essay explores possibilities for the establishment of a new global order in which the Global South may play a central part. It traces the emergence of the concept of the Global South historically, with special attention to its antecedents in the popular term of the 1960s and 1970s, “Third World.” The essay suggests that while the “Third World” is no longer a viable concept geopolitically or as a political project, it may still provide inspiration for similar projects that could render the Global South a force in the reconfiguration of global relations.
  • Georgiou, M. (2019). City of refuge or digital order? Refugee recognition and the digital governmentality of migration in the city. Television & New Media, 20(6), 600–616.
    • This article analyses the digital governmentality of the city of refuge, arguing that digital infrastructures support refugees’ new life in the European city while also normalizing the conditionality of their recognition as humans and as citizens-in-the-making. The article argues that a digital order requires a ‘performed refugeeness’ as a precondition for recognition, meaning a swift move from abject vulnerability to resilient individualism.
  • Graham, M., & Foster, C. (2017). Reconsidering the role of the digital in global production networks. Global Networks, 17(1), 68–88. https://doi.org/10.1111/glob.12142
    • This paper proposes an update to the literature on global production networks (GPN) to explain the integration of digital information with communication technologies in global production. The authors review three main categories of GPN literature: embeddedness, value, and networks. They propose expanding the GPN literature to encompass network diversity and infrastructures, digitally-driven shifts in governance, and the power of non-human actors. 
  • Graham, M., et al. (2018). Could online gig work drive development in lower-income countries? http://fowigs.net/could-online-gig-work-drive-development-in-lower-income-countries/
    • In this policy report, researchers from the Oxford Internet Institute analyze the potential impact of online gig work on international development. The authors argue that regulating the international labour market is challenging; without any sort of regulation, however, the market as it stands creates precarious forms of employment that could produce harm, especially for individuals from vulnerable populations. The authors propose targeting regulations at the handful of countries, usually those with advanced economies, that demand this type of labour.
  • Hagerty, A., & Rubinov, I. (2019). Global AI ethics: A review of the social impacts and ethical implications of artificial intelligence. arXiv preprint arXiv:1907.07892
    • This article calls for rigorous ethnographic research to better understand the social impacts of AI around the world. Global, on-the-ground research is particularly critical to identify AI systems that may amplify social inequality in order to mitigate potential harms. The article argues that a deeper understanding of the social impacts of AI in diverse social settings is a necessary precursor to the development, implementation, and monitoring of responsible and beneficial AI technologies, and forms the basis for meaningful regulation of these technologies.
  • Hicks, J. (2020). Digital ID capitalism: How emerging economies are re-inventing digital capitalism. Contemporary Politics. https://doi.org/10.1080/13569775.2020.1751377
    • This article adds to the literature on digital capitalisms by introducing a new state-led model called ‘digital ID capitalism’. Describing how the system works in India, the article explains how businesses make money from the personal data collected and draws some of its elements into traditional political economy concerns with the relationships between state, business, and labor. 
  • Kwet, M. (2019). Digital colonialism: US empire and the new imperialism in the Global South. Race & Class, 60(4), 3–26.
    • This article proposes a conceptual framework of how the United States is reinventing colonialism in the Global South through the domination of digital technology. Using South Africa as a case study, it argues that US multinationals exercise imperial control at the architecture level of the digital ecosystem: software, hardware and network connectivity, which then gives rise to related forms of domination. 
  • Madianou, M. (2019). Technocolonialism: Digital innovation and data practices in the humanitarian response to refugee crises. Social Media and Society, 5(3), 1–13.
    • This article introduces the concept of technocolonialism to capture how the convergence of digital developments with humanitarian structures and market forces reinvigorates and reshapes colonial relationships of dependency. The article argues that the concept of technocolonialism shifts the attention to the constitutive role that data and digital innovation play in entrenching power asymmetries between refugees and aid agencies and ultimately inequalities in the global context. 
  • Madianou, M. (2019). The biometric assemblage: Surveillance, experimentation, profit, and the measuring of refugee bodies. Television & New Media, 20(6), 581–599.
    • This article analyzes biometrics, artificial intelligence (AI), and blockchain as part of a technological assemblage, which the author terms ‘the biometric assemblage.’ The article argues that the biometric assemblage accentuates asymmetries between refugees and humanitarian agencies and ultimately entrenches inequalities in a global context.
  • Mahler, A. G. (2017).* Beyond the colour curtain. In K. Bystrom & J. R. Slaughter (Eds.), The Global South Atlantic (pp. 99-123). Fordham University Press.
    • This essay traces the roots of the contemporary notion of the Global South to the ideology of an influential but largely forgotten Cold War alliance of liberation movements from Africa, Asia, and Latin America called the Tricontinental. The essay argues that tricontinentalism – the ideology disseminated among the international radical Left through the Tricontinental’s expansive cultural production – revised a specifically black Atlantic resistant subjectivity into a global vision of subaltern resistance that is resurfacing in contemporary horizontalist approaches to cultural criticism such as the Global South. In this way, the essay proposes the Global South Atlantic as a particularly useful paradigm, one that not only recognizes the black Atlantic foundations of the Global South but also holds contemporary solidarity politics accountable to these intellectual roots.
  • Maitra, S. (2020). Artificial intelligence and indigenous perspectives: Protecting and empowering intelligent human beings. In AIES 2020 – Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 320–326. Association for Computing Machinery. https://doi.org/10.1145/3375627.3375845
    • This paper discusses the prospect of humanity achieving Artificial General Intelligence. The author argues that Indigenous epistemologies provide frameworks to understand the relationship between humans and non-human agents, notably in the context of problems related to value alignment. The author provides the example of Hawaiian epistemologies, which provide relational frameworks for our relationship with AI, and Lakota rituals, which include the notion of the non-human soul bearer.
  • Milan, S., & Treré, E. (2019).* Big Data from the South(s): Beyond data universalism. Television & New Media, 20(4), 319-335.
    • This article introduces the tenets of a theory of datafication, calling for a de-Westernization of critical data studies with a view to repairing the cognitive injustice of failing to recognize non-mainstream ways of knowing the world through data. It situates the “Big Data from the South” research agenda as an epistemological, ontological, and ethical program and outlines five conceptual operations to shape this agenda.
  • Mohamed, S., et al. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33(4), 659–684. https://doi.org/10.1007/s13347-020-00405-8
    • This paper presents an overview of several decolonial theories, studying the historical forces behind current power relations and the situation of postcolonial societies. The paper analyzes how artificial intelligence algorithms perpetuate some of these power relations at the expense of marginalized communities and to the benefit of corporations and their interests.
  • Ricaurte, P. (2019).* Data epistemologies, the coloniality of power, and resistance. Television & New Media, 20(4), 350-365.
    • This article develops a theoretical model to analyze the coloniality of power through data and explores the multiple dimensions of coloniality as a framework for identifying ways of resisting data colonization. This article further suggests possible alternative data epistemologies that are respectful of populations, cultural diversity, and environments.
  • Richardson, R., et al. (2019).* Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice. New York University Law Review Online, 192(94), 192–233.
    • In this research paper, the authors analyze thirteen jurisdictions that have used or developed predictive policing tools while under government commission investigations or federal court-monitored settlements, consent decrees, or memoranda of agreement stemming from corrupt, racially biased, or otherwise illegal policing practices. In particular, the authors examine the link between unlawful and biased police practices and the data available to train or implement these systems. The authors argue that deploying predictive policing systems in jurisdictions with extensive histories of unlawful police practices presents elevated risks that dirty data will lead to flawed or unlawful predictions, which in turn risk perpetuating additional harm via feedback loops throughout the criminal justice system.
  • Santos, B. D. S. (2016).* Epistemologies of the South and the future. From the European South: A Transdisciplinary Journal of Postcolonial Humanities, 1, 17-29. http://europeansouth.postcolonialitalia.it/journal/2016-1/3.2016-1.Santos.pdf
    • This article argues that theoretical thinking in the global North has rested on the idea of an abyssal line that excludes other ways of knowing the world. It defines ‘epistemologies of the South’ as the crucial epistemological transformation required to reinvent social emancipation on a global scale, evoking plural forms of emancipation not based solely on a Western understanding of the world.
  • Segura, M. S., & Waisbord, S. (2019). Between data capitalism and data citizenship. Television & New Media, 20(4), 412-419.
    • This article argues that datafication and the opposition to datafication in the South do not develop exactly as in the North given huge political, economic, social, and technological differences in the context of the expansion of digital capitalism. The article analyzes dimensions of data activism in Latin America and discusses the Global South as the site of counter-epistemic and alternative practices, and questions whether the concept of “data colonialism” adequately captures the dynamics of the digital society in areas of well-entrenched digital divides.
  • Shokooh Valle, F. (2020). Turning fear into pleasure: Feminist resistance against online violence in the global south. Feminist Media Studies. https://doi.org/10.1080/14680777.2020.1749692
    • This essay argues that feminist strategies of contestation to online violence in the Global South embody decolonial thought by re-appropriating and fostering the right of marginalized communities to express sexual pleasure online. The essay asserts that activists problematize online violence through two main strategies: first, by anchoring themselves in a southern epistemology that makes explicit the connections between gender-based online violence and broader sociotechnical, historical, and political contexts, and, second, by using activism against online violence, including threats of violence, to advocate for novel forms of online sexual agency and pleasure. Finally, the essay describes how feminist activists reimagine a technological future that is truly emancipatory.
  • Sun, Y., & Yan, W. (2020). The power of data from the global south: Environmental civic tech and data activism in China. International Journal of Communication, 14(19), 2144-2162.
    • This article explores how an established environmental nongovernmental organization, the Institute of Public and Environmental Affairs (IPE), engaged in data activism around a civic tech platform in China, expanding the space for public participation. By conducting participatory observation and interviews, along with document analysis, the authors describe three modes of data activism that represent different mechanisms of civic oversight in the environmental sphere.
  • Taylor, L., & Broeders, D. (2015).* In the name of Development: Power, profit and the datafication of the global South. Geoforum, 64, 229-237. http://dx.doi.org/10.1016/j.geoforum.2015.07.002
    • This article identifies two trends in the datafication process underway in low- and middle-income countries (LMICs): first, the empowerment of public–private partnerships around datafication in LMICs and the consequently growing agency of corporations as development actors. Second, the way commercially generated big data is becoming the foundation for country-level ‘data doubles’, i.e. digital representations of social phenomena and/or territories that are created in parallel with, and sometimes in lieu of, national data and statistics. The article explores the resulting shift from legibility to visibility and the implications of seeing development interventions as a by-product of large-scale processes of informational capitalism.
  • West, S. M., et al. (2019).* Discriminating systems: Gender, race, and power in AI. AI Now Institute. https://ainowinstitute.org/discriminatingsystems.html
    • This report argues that there is a diversity crisis in the artificial intelligence (AI) industry, and that a profound shift is needed to address this crisis. It puts forward eight recommendations for improving workplace diversity and four recommendations for addressing bias and discrimination in AI systems. 
  • Zhang, W., & Neyazi, T. A. (2020). Communication and technology theories from the South: the cases of China and India. Annals of the International Communication Association, 44(1), 34-49.
    • Using China and India as two cases, this paper reviews and compares descriptions of communication technology in the two countries. Through these comparisons, the paper concludes that communication technology studies on China and India provide three theoretical insights: first, the state-society relationship shapes communication technology; second, the increasing pluralization or hybridity of cyberspace shapes how communication technology is used; and lastly, the quest to find oneself (or selves) within a Chinese/Indian modernity can provide reference points for other contexts.

Chapter 32. Perspectives and Approaches in AI Ethics: East Asia (Danit Gal)⬆︎

  • BAAI. (2019, May 28).* Beijing AI Principles. https://baip.baai.ac.cn/en
    • This document presents the principles proposed as guidelines and initiatives for the research, development, use, governance, and long-term planning of AI in Beijing, China.
  • Carrillo, M. R. (2020). Artificial intelligence: From ethics to law. Telecommunications Policy. https://doi.org/10.1016/j.telpol.2020.101937
    • This paper discusses the main normative and ethical challenges posed by the advancement of artificial intelligence, in particular the effects on law and ethics of increasing connectivity and of symbiotic interaction between humans and intelligent machines.
  • Chen, B., et al. (2020). Containing COVID-19 in China: AI and the robotic restructuring of future cities. Dialogues in Human Geography, 10(2), 238-241. https://doi.org/10.1177/2043820620934267
    • Motivated by the COVID-19 pandemic, this paper explores China’s use of robots and AI to ensure physical distancing and enforce quarantines in its cities. The authors also discuss the future impact of such autonomous systems on urban bio-(in)security.
  • China Institute for Science and Technology Policy at Tsinghua University. (2018).* China AI Development Report 2018. http://www.sppm.tsinghua.edu.cn/eWebEditor/UploadFile/China_AI_development_report_2018.pdf
    • This document, published by the China Institute for Science and Technology Policy (CISTP) at Tsinghua University in Beijing, China, aims to provide a comprehensive picture of AI development in China and in the world at large, with a view to increasing public awareness, promoting the development of the AI industry, and serving policymaking.
  • Dekle, R. (2020). Robots and industrial labor: Evidence from Japan. Journal of the Japanese and International Economies, 58, 101108.  https://doi.org/10.1016/j.jjie.2020.101108
    • This study explores the impact of robots on the Japanese labor force. The author finds that robots have a negative impact through the displacement of human tasks, a positive effect on industry productivity, and an overall positive macroeconomic impact on Japanese labor demand.
  • Ema, A. (2018).* EADv2 Regional Reports on A/IS Ethics: Japan. The Ethics Committee of the Japanese Society for Artificial Intelligence. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/eadv2_regional_report.pdf
    • This document, compiled by the Institute of Electrical and Electronics Engineers (IEEE), consists of reports describing regional attitudes and actions in the field of artificial intelligence. 
  • Frumer, Y. (2018). Cognition and emotions in Japanese humanoid robotics. History and Technology, 34(2), 157-183.
    • This paper analyses the creation of humanoid robots, the phenomenon of the ‘uncanny valley’, and current research on overcoming the ‘uncanny’ nature of humanoid robots. It argues that the development of humanoid robotics in Japan was driven by concern with human emotion and cognition, and shaped by Japanese roboticists’ own associations with the social and intellectual environments of their time.
  • Ghotbi, N., et al. (2021). Attitude of college students towards ethical issues of artificial intelligence in an international university in Japan. AI & Society, 1-8. https://doi.org/10.1007/s00146-021-01168-2
    • The authors conducted a survey to understand college students’ perceptions of artificial intelligence at an international university in Japan. They found that the most significant ethical issue for students was the impact of AI on unemployment. The second most pressing ethical issue was AI’s impact on human behaviour. The authors use the results of the study to call on Japan’s policymakers to consider ways to reduce the negative impact of AI on employment and promote greater emotional intelligence in the development of AI systems. 
  • Hwang, H., & Park, M. H. (2020). The threat of AI and our response: The AI Charter of Ethics in South Korea. Asian Journal of Innovation and Policy, 9(1), 56-78. https://doi.org/10.7545/ajip.2020.9.1.056
    • This article describes Korea’s response to the risks created by the use of AI based on the AI Charter of Ethics (AICE) protocol. This paper identifies seven threats that AI poses for Korean society, sorted into three categories: AI’s value judgement, malicious use of AI, and human alienation. The authors also evaluate responses to these threats which they categorize using three themes: protection of social values, AI control, and fostering digital citizenship. The authors found a gap in the Korean response to AI when it comes to the threat of AI taking over human occupations and the use of AI weaponry for military power. 
  • Intelligent Robots Development and Distribution Promotion Act. (Act No. 9014, Mar. 28, 2008, Amended by Act No. 9161, Dec. 19, 2008).* Statutes of the Republic of Korea. http://elaw.klri.re.kr/eng_mobile/viewer.do?hseq=17399&type=sogan&key=13
    • This statute describes and dictates the South Korean outlook on artificial intelligence and sets in place guidelines for future development in the field of AI.
  • Jobin, A., et al. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.
    • This paper explores the debate concerning what constitutes “ethical AI” and which ethical requirements, technical standards and best practices are needed for its realization. The authors present their findings that there is a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy). However, there is substantive divergence in relation to how these principles are interpreted; why they are deemed important; what issue, domain or actors they pertain to; and how they should be implemented.
  • Knox, J. (2020). Artificial intelligence and education in China. Learning, Media and Technology, 45(3), 298-311. https://doi.org/10.1080/17439884.2020.1754236
    • This article analyzes the roles of government policy and private sector enterprises in AI-driven education in China. The author finds that while central government policy still maintains a significant influence on the use of AI in education, conditions are favourable for the private sector to develop applications and begin to dominate AI education markets.
  • Kovacic, M. (2018). The making of national robot history in Japan: Monozukuri, enculturation and cultural lineage of robots. Critical Asian Studies, 50(4), 572-590.
    • This article discusses Japanese corporate and governmental strategies and mechanisms that are shaping a national robot culture through establishing robot “lineages” and a national robot history which can have significant implications for both humans and robots.
  • Lee, K. J., & Kim, E. Y. (2020). The role and effect of artificial Intelligence (AI) on the platform service innovation: The Case Study of Kakao in Korea. Knowledge Management Research, 21(1), 175-195. https://doi.org/10.15813/kmr.2020.21.1.010
    • This paper investigates the use of AI in platform services and its impact on business performance in Korea. The authors conducted an empirical study of the Kakao group, focusing on three of its subsidiary platforms: the chatbot agent of Kakao Bank, the smart call service of Kakao Taxi, and the music recommendation system of Kakao Melon. They found that these AI-driven platform services have significantly decreased transaction costs and enabled personalized services.
  • Obayashi, K., et al. (2020). Can connected technologies improve sleep quality and safety of older adults and care-givers? An evaluation study of sleep monitors and communicative robots at a residential care home in Japan. Technology in Society, 62, 101318. https://doi.org/10.1016/j.techsoc.2020.101318
    • This study explores the use of an assistive technology that is connected to a communicative robot to monitor the sleep quality and safety of older adults. The system was then evaluated in a study with both senior adults and caregivers at a nursing home in Japan.  
  • Otsuki, G. J. (2019). Frame, game, and circuit: Truth and the human in Japanese human-machine interface research. Ethnos. https://doi.org/10.1080/00141844.2019.1686047
    • This essay tracks the notion of the ‘human’ that emerges in human-centred technologies (HCTs) in Japan, arguing that all HCTs are systems of information and that the right machine can approach humanity closely enough to fulfil even the most human of responsibilities.
  • Park, Y. R., & Shin, S. Y. (2017). Status and direction of healthcare data in Korea for artificial intelligence. Hanyang Medical Reviews, 37(2), 86-92.
    • This paper argues that in the context of medical AI, the general approach that accumulates massive amounts of data based on existing big data concepts cannot provide meaningful results in the healthcare field. Thus, the authors argue that well-curated data is required in order to provide a successful combination of AI and medical care. 
  • Peters, D., et al. (2020). Responsible AI—two frameworks for ethical design practice. IEEE Transactions on Technology and Society, 1(1), 34-47.
    • This paper presents two complementary frameworks for integrating ethical analysis into engineering practice. The frameworks address the challenge posed by the unintended consequences of artificial intelligence (AI), a challenge compounded by the lack of an anticipatory process for attending to ethical impact within professional practice.
  • Roberts, H., et al. The Chinese approach to artificial intelligence: An analysis of policy and regulation. SSRN. http://dx.doi.org/10.2139/ssrn.3469784
    • Through a compilation of debates and analyses of Chinese policy documents, this paper investigates the socio-political background and policy debates that are shaping China’s AI strategy. There is a focus on the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use.
  • Robertson, J. (2018).* Robo sapiens japanicus: Robots, gender, family, and the Japanese nation. University of California Press.
    • Through an analysis of press releases and public relations videos, this book provides an academic treatment of human-robot relations in Japan, ultimately arguing that robots in Japan (humanoids, androids, and animaloids) are “imagineered” in ways that reinforce the conventional sex/gender system and the political-economic status quo.
  • Sethu, S. G. (2019). The inevitability of an international regulatory framework for artificial intelligence. In 2019 International Conference on Automation, Computational and Technology Management (ICACTM) (pp. 367-372). IEEE. https://doi.org/10.1109/ICACTM.2019.8776819
    • This paper highlights issues surrounding the manufacture and functioning of autonomous weapons, specifically lethal autonomous weapons systems (LAWS), to establish the need for an international regulatory framework for artificial intelligence.
  • Sparrow, R. (2019). Robotics has a race problem. Science, Technology, & Human Values, 45(3), 538-560.
    • This article presents research showing that people are inclined to attribute race to humanoid robots, creating an ethical problem that designers of social robots must confront. The author argues that the only way engineers might avoid this dilemma is to design and manufacture robots to which people will struggle to attribute race; doing so, however, would require rethinking the relationship between robots and “the social,” which sits at the heart of the project of social robotics.
  • The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2019).* Classical Ethics in A/IS. In Ethically Aligned Design (pp. 36-67). https://standards.ieee.org/industry-connections/ec/autonomous-systems.html
    • This document released by the Institute of Electrical and Electronics Engineers (IEEE) is a crowdsourced global treatise for ethical development in Artificial and Intelligent Systems. The chapter Classical Ethics in A/IS draws from classical ethical principles to outline guidelines and limitations on AI systems. 
  • Weng, Y. H., et al. (2019). The religious impacts of Taoism on ethically aligned design in HRI. International Journal of Social Robotics, 11(5), 829-839.
    • This paper explores the growing importance of assessing robot applications and employment across countries with different cultural backgrounds, focusing on the intersection of religion and automation. It analyzes what impact the Taoist religion may have on the use of Ethically Aligned Design in future human–robot interaction.
  • Wu, F., et al. (2020). Towards a new generation of artificial intelligence in China. Nature Machine Intelligence, 2(6), 312-316. https://doi.org/10.1038/s42256-020-0183-4
    • This article introduces the New Generation Artificial Intelligence Development Plan of China (2015–2030), which outlines the country’s strategy for using technology in science and education. The plan also identifies challenges in talent retention, the advancement of fundamental research, and ethical implications. The authors assert that the plan is intended as a blueprint for a future AI ecosystem in China.
  • Wu, W., et al. (2020). Ethical principles and governance technology development of AI in China. Engineering, 6(3), 302-309. https://doi.org/10.1016/j.eng.2019.12.015
    • This article surveys the efforts towards the development of AI ethics and governance in China. It highlights the preliminary outcomes of these efforts and describes the major research challenges that lie ahead in AI governance.
  • Yang, X. (2019). Accelerated move for AI education in China. ECNU Review of Education, 2(3), 347-352. https://doi.org/10.1177/2096531119878590
    • This paper reviews several key policies put forward by the Chinese government in order to analyze recent efforts to promote education on AI. The author found that AI education is already prevalent in many areas of the education system, starting at the elementary level and becoming more robust at the senior level in civic education curriculums.
  • Yoo, J. (2015).* Results and outlooks of robot education in Republic of Korea. Procedia-Social and Behavioral Sciences, 176, 251-254. https://doi.org/10.1016/j.sbspro.2015.01.468
    • This paper explores the consequences of introducing robotics into the South Korean education system from elementary through high school, compared with its later introduction at the post-secondary level in the United States and Japan. The author then evaluates the results of this policy in the context of future prospects in South Korea, arguing that this early introduction gives South Korea a head start in the robotics industry.
  • Zeng, Y., Lu, E., & Huangfu, C. (2018).* Linking artificial intelligence principles. arXiv preprint arXiv:1812.04814
    • This paper observes that although AI principles define social and ethical considerations for the development of future AI, multiple versions of such principles exist, covering different perspectives and placing different emphases. The authors therefore propose Linking Artificial Intelligence Principles (LAIP), an effort and platform for linking and analyzing different sets of AI principles.
  • Zhang, B. T. (2016). Humans and machines in the evolution of AI in Korea. AI Magazine, 37(2), 108-112.
    • This article recounts the evolution of AI research in Korea, and describes recent activities in AI, along with governmental funding circumstances and industrial interest. 

Chapter 33. Artificial Intelligence and Inequality in the Middle East: The Political Economy of Inclusion (Nagla Rizk)⬆︎

  • Access Partnership. (2018).* Artificial intelligence for Africa: An opportunity for growth, development, and democratisation. https://www.accesspartnership.com/artificial-intelligence-for-africa-an-opportunity-for-growth-development-and-democratisation/
    • This report argues that the development of artificial intelligence technologies can solve problems that impact Sub-Saharan African countries, providing growth and development in areas such as agriculture, healthcare, and public service. 
  • Ahmed, S. M. (2019). Artificial intelligence in Saudi Arabia: Leveraging entrepreneurship in the Arab markets. In 2019 Amity International Conference on Artificial Intelligence (AICAI) (pp. 394-398). Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/AICAI.2019.8701348
    • This paper focuses on efforts toward economic diversification in Saudi Arabia. AI has been embraced by many established industry sectors in Saudi Arabia, such as banking and finance, but the author suggests AI is poised for growth in the Saudi start-up ecosystem. Ahmed argues that fostering AI entrepreneurship will promote economic diversification, create wealth, and catalyze social change in Saudi Arabia and the Middle East.
  • AI Now Institute, New York University. (2018).* AI Now Report 2018. https://ainowinstitute.org/AI_Now_2018_Report.pdf
    • The 2018 AI Now Institute report focuses on five key issues. First, the accountability gap in AI, which favours AI producers rather than the people these technologies are used against. Second, how AI is used to increase surveillance, such as the increased use of facial recognition. Third, government use of emerging technology without pre-existing accountability frameworks. Fourth, the lack of regulation of AI experimentation on human subjects. Fifth, the failure of current solutions in addressing fairness, bias, and discrimination. 
  • Al-Din, S. G. (2021). Implications of the Fourth industrial revolution on Women in Information and Communications Technology: In-depth analysis on the Future of work. Egyptian National Council for Women. http://en.enow.gov.eg/Report/133.pdf 
    • Al-Din flags both the economic potential and risks that are expected to accompany the automation of low-skilled labour in Egypt. They note that despite relatively high levels of access to education and health services, women are more likely to work in sectors that are at risk of automation. With this in mind, they propose building public awareness of these gendered risks among Egyptians, investing in education as well as entrepreneurship opportunities for women, and expanding the social security system with the gendered impacts of automation in mind.
  • Al-Eisawi, D. (2020). A Framework for responsible research and innovation in new technological trends towards MENA region. In 2020 IEEE International Conference on Engineering, Technology and Innovation (ICE/ITMC), (pp. 1–8). Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/ICE/ITMC49519.2020.9198506
    • While responsible research and innovation (RRI) has attracted significant attention in Europe, Al-Eisawi suggests that the framework requires conceptual expansion to become meaningful for research in the Middle East and North Africa (MENA) region. Drawing upon interviews with technology researchers, the author presents a grounded theory framework for RRI in the MENA region with emphases on access, education, ethics, and engagement for the promotion of innovation that is rights-respecting and inclusive in the region.
  • Al-Roubaie, A., & Alaali, M. (2020). The fourth industrial revolution: Challenges and opportunities for MENA region. In A. E. Hassanien, A. Azar, T. Gaber, D. Oliva, & F. Tolba (Eds.), Joint European-US Workshop on Applications of Invariance in Computer Vision (pp. 672-682). Springer. https://doi.org/10.1007/978-3-030-44289-7_63
    • In this conference paper, Al-Roubaie & Alaali argue that the disruptions caused by artificial intelligence are deepening unemployment and inequality in the Middle East and North Africa region. They show that automation stands to deepen existing economic imbalances both between and within the region’s economies. They suggest that investments in digital government, research and development, and education should be made to promote the development of an inclusive digital economy.
  • Arezki, R., et al. (2018).* Middle East and North Africa Economic Monitor, Spring 2018: Economic Transformation. The World Bank. https://openknowledge.worldbank.org/bitstream/handle/10986/30436/9781464813672.pdf?sequence=11&isAllowed=y
    • This report examines the development and use of digital technologies in the Middle East and North Africa region, discussing how a digital economy that would create jobs for millions of unemployed young people could be fostered in coming years. To achieve this, the MENA region must move away from its focus on manufacturing exports and instead take advantage of the region’s educated youth population, encouraging innovation and entrepreneurship.
  • Badran, M. (2019). Bridging the gender digital divide in the Arab Region. International Development Research Centre. https://www.researchgate.net/profile/Mona-Badran-2/publication/330041688_Bridging_the_gender_digital_divide_in_the_Arab_Region/links/5c2b725b92851c22a3535465/Bridging-the-gender-digital-divide-in-the-Arab-Region.pdf 
    • This report by Badran shows that gender inequality in the Middle East and North African technology sector imposes costs on society through missed economic potential. Moreover, automation driven by artificial intelligence is most likely to impact sectors with high levels of female employment. The author argues that unequal access to technology is a barrier to equality and that more effort should be directed towards technical mentorship and education for women already in the labour force, improving their capacity to adapt to and benefit from new technologies.
  • Brynjolfsson, E., & McAfee, A. (2011).* Race against the machine: How the digital revolution is accelerating innovation, driving productivity, and irreversibly transforming employment and the economy. Digital Frontier Press. 
    • In their book, Brynjolfsson and McAfee argue that the average human worker cannot keep up with cutting edge technologies such as AI that have the potential to take over their jobs. The implication is that poor employment prospects are not due to lack of advancements, but rather because we are being outdone by technology. 
  • Brynjolfsson, E., et al. (2017). Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics. National Bureau of Economic Research. www.nber.org/chapters/c14007.pdf
    • This article observes that although there have been many advancements in AI technology in recent years, they have not been matched by an increase in productivity. The authors explore four potential explanations for this apparent paradox: false hopes, mismeasurement, redistribution, and lags in implementation.
  • Butcher, J., & Beridze, I. (2019). What is the state of artificial intelligence governance globally? The RUSI Journal, 164(5), 88-96. https://doi.org/10.1080/03071847.2019.1694260
    • Butcher and Beridze summarize current AI governance in both public and private sectors, in research organizations, and at the United Nations. They offer frameworks that can provide guidance to policy makers. 
  • Chui, M., et al. (2017).* The countries most (and least) likely to be affected by automation. Harvard Business Review. https://hbr.org/2017/04/the-countries-most-and-least-likely-to-be-affected-by-automation
    • This article by Chui and colleagues examines the automation potential in 46 countries, accounting for 80% of the global workforce. They present a wide disparity in terms of automation risk among states in the Middle East and North African region with North African states like Morocco and Egypt at much higher risk than Persian Gulf states like Kuwait and Saudi Arabia.
  • Cihon, P. (2019). Standards for AI governance: International standards to enable global coordination in AI research & development. Future of Humanity Institute.
    • This report argues that the emergence of AI presents novel problems for policy design, and that a coordinated global response is necessary. Current AI standards development is heavily focused on market efficiency and addressing global concerns, but Cihon worries that this neglects further policy objectives such as creating a culture of responsibility.  
  • Cisse, M. (2018). Look to Africa to advance artificial intelligence. Nature, 562(7728), 461-462.
    • Cisse argues that AI technology must be developed in a broader range of locations than just Asia, North America and Europe, in order to promote diversity and combat unintended biases. Particularly, development in Africa should be prioritized, as this would not only solve the problem of lack of diversity, but also would provide Africans with access to technology that could improve the lives of citizens. 
  • Daly, A., et al. (2019). Artificial intelligence, governance and ethics: Global perspectives. The Chinese University of Hong Kong Faculty of Law Research Paper, (2019-15). https://dx.doi.org/10.2139/ssrn.3414805
    • This report provides an overview of how actors such as governments and private corporations have approached AI regulation and ethics, covering regions such as China, Europe, India, and the United States, and companies such as Microsoft.
  • Fatafta, M., & Samaro, D. (2021). Exposed and exploited: Data protection in the Middle East and North Africa. Access Now. https://apo.org.au/node/310911
    • In this report, Fatafta & Samaro explore the tensions between weak data protection regulations and the rapid adoption of data-driven surveillance technologies, which disproportionately impact marginalized populations in Jordan, Lebanon, Palestine, and Tunisia. They describe each territory’s data protection regime alongside surveillance case studies, such as the use of facial recognition in occupied Palestine. They conclude with data protection policy recommendations for states, firms, and international organizations operating in the region.
  • Giovannetti, G., & Vivoli, A. (2018). Technological Innovation: Growth without Occupation. An Overview on MENA Countries. In IEMed: Mediterranean yearbook (pp. 278-282). https://www.iemed.org/observatori/arees-danalisi/arxius-adjunts/anuari/med.2018/Technological_Innovation_Giovannetti_Vivoli_Medyearbook2018.pdf 
    • Giovannetti & Vivoli argue that the Middle East and North Africa (MENA) region is vulnerable to automation due to labour-intensive economies and low integration with the international technology sector. ‘Low-skill’ jobs occupied by women are at the greatest risk of automation as despite outperforming men in schools, women are under-represented in technical jobs. They argue that investing in the MENA region’s youth by stimulating the technology sector and strengthening social security can insulate the region from the negative impacts of automation.
  • Gordon, M. (2018). Forecasting instability: The case of the Arab spring and the limitations of socioeconomic data. Wilson Center. https://www.wilsoncenter.org/article/forecasting-instability-the-case-the-arab-spring-and-the-limitations-socioeconomic-data
    • Gordon analyzes data from the Arab Spring, arguing that these uprisings could be predicted, but not down to the exact date and time of their occurrence. He argues that similar limitations apply to predicting political and social instability. 
  • Jobin, A., et al. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. https://doi.org/10.1038/s42256-019-0088-2
    • This study investigates whether or not there is a global consensus on any ethical principles pertaining to AI. The results reveal global convergence around five principles: transparency, justice and fairness, non-maleficence, responsibility, and privacy. 
  • Lukonga, M. I. (2018). Fintech, inclusive growth and cyber risks: Focus on the MENAP and CCA regions (IMF Working Paper No. 18/201). International Monetary Fund. https://www.imf.org/-/media/Files/Publications/WP/2018/wp18201.ashx 
    • Lukonga argues that the financial technology (fintech) industry in the Middle East, North Africa, Afghanistan, and Pakistan as well as the Caucasus and Central Asia regions can increase financial inclusion and promote more equitable economic growth in the region. There are a high number of ‘unbanked’ citizens in these regions and fintech is flexible enough to reach populations that traditional banks have not been able to. Lukonga suggests modernizing regulations to enable the expansion of fintech and promote inclusive growth.
  • Rickli, J.-M. (2018). The economic, security and military implications of artificial intelligence for the Arab Gulf countries. Emirates Diplomatic Academy. https://eda.ac.ae/docs/default-source/Publications/eda-insight_ai_en.pdf 
    • In this report, the author discusses the expected impact of AI on countries in the Arab Gulf. The report notes that interventions in training and education are essential to mitigate the social impacts of automation and develop a strong AI industry. Further to this, Rickli suggests that leadership in the AI industry is essential for the maintenance of national security due to the development of autonomous weapons in addition to the rising prevalence of cyber-attacks and AI-generated disinformation.
  • Rizk, N. Y. H. & Salem, N. (2018). Open data management plan Middle East and North Africa: A guide. MENA Data Platform. 
    • This guide contains three documents developed out of the American University in Cairo. First, a background paper explores open data relating to research and development. Second is a data management plan template, made up of a set of questions that, when answered, will provide an Open Data Management plan. Third is the Solar Data Platform Open Data Management Plan, which mapped solar energy in Egypt, and acts as an example of the implementation of the template. 
  • Vernon, D. (2019). Robotics and artificial intelligence in Africa [Regional]. IEEE Robotics & Automation Magazine, 26(4), 131-135. https://doi.org/10.1109/MRA.2019.2946107
    • This article explores how African countries can take advantage of opportunities presented by the rise of artificial intelligence and robots, considering potential solutions to problems that are likely to emerge. Vernon argues that to take full advantage of potential growth, states should create an enabling environment for advanced research and education and that vendors should work to lower the costs of AI and robotics technology to encourage adoption by African firms.
  • World Economic Forum. (2019).* Dialogue series on new economic and social frontiers, shaping the new economy in the fourth industrial revolution. http://www3.weforum.org/docs/WEF_Dialogue_Series_on_New_Economic_and_Social_Frontiers.pdf
    • This paper examines four emerging challenges at the intersection of economics, technology, and society in the age of the Fourth Industrial Revolution. The paper addresses multiple areas of concern, such as rethinking economic value and avenues for creating this value, addressing market concentration, enhancing job creation, and revising social protection.
  • World Economic Forum. (2017).* The future of jobs and skills in the Middle East and North Africa: Preparing the region for the fourth industrial revolution. https://www.weforum.org/reports/the-future-of-jobs-and-skills-in-the-middle-east-and-north-africa-preparing-the-region-for-the-fourth-industrial-revolution
    • This report asserts that it is vital that the MENA region invest in education to prepare its young population for the contemporary labour market. It presents a call to action to MENA region leaders to ensure that youth are able to fully participate in the global economy. 
  • Yamakami, T. (2019). From ivory tower to democratization and industrialization: A landscape view of real-world adaptation of artificial intelligence. In International Conference on Network-Based Information Systems (pp. 200-211). https://doi.org/10.1007/978-3-030-29029-0_19
    • The author examines the concepts of democratization and industrialization of deep learning as a new landscape view for artificial intelligence, and goes on to describe a three-stage model of the interaction between a social community and technology.

Chapter 34. Europe’s Struggle to Set Global AI Standards (Andrea Renda)⬆︎

  • Annoni, A., et al. (2018).* Artificial intelligence: A European perspective. Joint Research Centre, European Commission. https://doi.org/10.2760/11251
    • This extensive report investigates the multitude of practical, technical, legal and ethical issues that the EU must consider when developing laws, policies and regulations regarding AI, data protection and cybersecurity. The researchers propose that the EU must take a unified approach to encourage developments in AI that are socially driven, responsible, ethical and match the core values of civil society. 
  • Antonov, A., & Kerikmäe, T. (2020). Trustworthy AI as a future driver for competitiveness and social change in the EU. In D. Ramiro Troitiño, T. Kerikmäe, R. de la Guardia, & G. Pérez Sánchez (Eds.), The EU in the 21st century (pp.135-154). Springer. https://doi.org/10.1007/978-3-030-38399-2_9
    • This article examines the ethical and legal effects of AI technologies that the EU has promoted and encouraged in recent years. The authors consider key initiatives in AI governance and seek to identify the main challenges the EU will face in its goal of becoming a global leader in the development of trustworthy AI technology.
  • Calzada, I. (2019). Technological sovereignty: Protecting citizens’ digital rights in the AI-driven and post-GDPR algorithmic and city-regional European realm. Regions eZine. https://ssrn.com/abstract=3415889
    • This article explains how the state of AI and data protection regulation in the EU affect citizenship. The author takes a comparative approach and argues that in the EU, citizens are considered to be decision-makers rather than data providers (as is the case in the US and China). He argues that Europe is most likely to adopt a form of ‘technological humanism’ by offering strategic visions of regional AI networks in which governments maintain technological sovereignty to protect their citizens’ digital rights. 
  • Carriço, G. (2018). The EU and artificial intelligence: A human-centred perspective. European View, 17(1), 29-36. https://doi.org/10.1177/1781685818764821
    • This article considers the costs and benefits of AI implementation in the EU context and argues in support of developing the EU into a global leader of AI innovation. The author argues for a human-centric focus on AI development and emphasizes the use of AI to solve the world’s most challenging societal problems while minimizing risk. The author provides policy recommendations for EU adoption to realize this goal.
  • Cath, C., et al. (2018).* Artificial intelligence and the ‘good society’: the US, EU, and UK approach. Science and Engineering Ethics, 24(2), 505-528. https://doi.org/10.1007/s11948-017-9901-7
    • This paper provides a comparative analysis of policy plans proposed by US, UK and EU governments concerning the integration of AI in society. The authors argue in favor of ‘the good AI society’, and they suggest that although short-term ethical solutions are important, state actors in the US, EU and UK must consider long-term visions and strategies that best promote human flourishing and dignity in the AI context. 
  • Cho, J. H., et al. (2016). Metrics and measurement of trustworthy systems. In MILCOM 2016-2016 IEEE Military Communications Conference (pp. 1237-1242). Institute of Electrical and Electronics Engineers.
    • This study develops a trustworthiness metric that incorporates different factors such as hardware, software, network, and human factors that affect the trustworthiness of computer systems. It focuses on three submetrics: trust, resilience, and agility. Finally, the authors propose an ontology with sub-ontologies to enable measurement of these submetrics and the general trustworthiness of the computer system. 
  • Edwards, L., & Veale, M. (2017). Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for. Duke Law & Technology Review, 16(2), 18-84.
    • This article argues that the right to an explanation under the General Data Protection Regulation (GDPR) is unlikely to adequately remedy the potential for harm created by the use of algorithms. The authors discuss the gap between the legal right to an explanation and the explanations that machine learning models can actually provide. Finally, while the right to an explanation may not fulfill its intended goals, the authors discuss how other aspects of the GDPR, such as the right to be forgotten and privacy by design, have greater potential to make algorithms more responsible, explainable, and human-centric.
  • European Commission. (2018).* Coordinated plan on artificial intelligence. https://eur-lex.europa.eu/legal-content/en/ALL/?uri=CELEX:52018DC0795
    • This communication from the European Commission proposes a plan aimed at coordinating the integration, facilitation and development of AI across the EU. The report suggests that in order to become a world leader in the AI industry, the EU must increase investments in AI, prepare for socio-economic change and develop an ethical and legal framework that ensures AI development is human-centric. 
  • European Commission & High Level Expert Group on AI. (2019).* Ethics guidelines for trustworthy AI. https://ec.europa.eu/futurium/en/ai-alliance-consultation              
    • This report proposes seven ethical principles of trustworthy AI which aim to promote an accountable, human-centric AI for the EU and global contexts. It defines trustworthy AI as that which operates within the law, adheres to ethical principles and is robust such that no unintentional harms are inflicted on society. The report proposes that policymakers must work to ensure that each of these components are simultaneously met. 
  • European Commission & High Level Expert Group on AI. (2019).* Policy and investment recommendations for trustworthy AI. https://ec.europa.eu/digital-single-market/en/news/policy-and-investment-recommendations-trustworthy-artificial-intelligence
    • This report follows and supports the European Commission’s guidelines for trustworthy AI and provides thirty-three recommendations to maximize the sustainability, growth, and competitiveness of trustworthy AI in the EU. The report stresses that the role of EU institutions and member states is critical to the implementation of sound AI governance that promotes benefits and minimizes harms to the public. Suggestions are offered with regard to data protection, skills and education, and the regulation and funding of AI technologies.
  • European Commission. (2018).* Working document on liability for emerging digital technologies. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52018SC0137&from=en
    • This document considers how opportunities and investments in AI can be stimulated by adapting and implementing clear legal frameworks that benefit AI innovators and consumers. The report focuses on liability challenges in AI and digital technology contexts. The Commission calls for an examination of existing safety and liability rules at the EU and national levels to determine whether they maintain the legal certainty required for AI innovation to succeed.
  • European Group on Ethics in Science and New Technologies. (2018).* Statement on artificial intelligence, robotics and ‘autonomous’ systems. https://doi.org/10.2777/531856
    • This statement by the European Group on Ethics considers the legal, ethical, moral and societal questions posed by autonomous technologies, and calls for a more collective and inclusive approach among EU member-states. The report proposes a set of ethical imperatives for autonomous systems that is based on the EU treaties and charters of fundamental rights. 
  • Fazelpour, S., & Lipton, Z. C. (2020). Algorithmic fairness from a non-ideal perspective. In A. Markham, J. Powles, T. Walsh, & A. L. Washington (Eds.), Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 57-63). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375828
    • This paper examines statistical parity definitions of fairness in machine learning from the ideal and non-ideal perspectives of political philosophy. The authors show that ongoing issues in the fair machine learning community parallel the broader problems that political philosophers have identified with ideal theorizing (the statistical parity criterion itself is sketched below).
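    • For readers new to the terminology, statistical parity (one of the fairness definitions Fazelpour and Lipton examine) requires that a classifier’s rate of positive predictions be independent of the protected attribute. A standard textbook formulation, with predicted label Ŷ and protected attribute A (the notation here is a generic rendering, not taken from the paper itself):

```latex
% Statistical parity (also called demographic parity): the rate of
% positive predictions must not depend on the protected attribute A.
\[
  P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = a')
  \qquad \text{for all groups } a, a'.
\]
```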
  • Floridi, L. (2019).* Establishing the rules for building trustworthy AI. Nature Machine Intelligence, 1(6), 261-262. https://doi.org/10.1038/s42256-019-0055-y
    • This commentary discusses the benefits of the EU’s seven ethical principles for trustworthy AI and defends the guidelines on the grounds that they establish a benchmark against which responsible design and international support of human-centric AI solutions can be evaluated. The author counters criticisms that the principles will have minimal impact, arguing that the guidelines are robust and provide a strong benchmark of what society and regulators should expect from a trustworthy AI system.
  • Floridi, L. (2018). Soft ethics, the governance of the digital and the general data protection regulation. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0081
    • This article considers the challenges of digital governance and provides a framework of ‘hard’ and ‘soft’ ethics as they relate to digital legislation in the EU. The author then provides an analysis of how this ethical framework works with the development of new, and the adaptation of old, regulation and legislation to assist in digital governance.
  • Floridi, L., et al. (2018).* AI4People white paper: Twenty recommendations for an ethical framework for a good AI society. Minds and Machines, 28, 689–707. https://doi.org/10.1007/s11023-018-9482-5
    • This article reports the results of the ‘AI4People’ initiative, which was designed to formulate an ideal of the ‘good society’ in an AI context. The report analyses the risks and opportunities of societal AI integration and proposes five ethical principles, four of which are drawn from the applied ethics field of bioethics. The report also offers twenty recommendations for policymakers which, if adopted, the authors believe would help establish a ‘good AI society’.
  • Garg, S., et al. (2020). Formalizing data deletion in the context of the right to be forgotten. In A. Canteaut & Y. Ishai (Eds.), Proceedings of the Annual International Conference on the Theory and Applications of Cryptographic Techniques (pp. 373-402). Springer. https://doi.org/10.1007/978-3-030-45724-2_13
    • Garg et al. provide a formal model of deleting data from machine learning algorithms in accordance with the “right to be forgotten” under the General Data Protection Regulation (GDPR). Using techniques from cryptography, they formalize what is possible and what regulators can expect from organizations that need to delete some or all of an individual’s data and its usage in any algorithms (a naive deletion baseline is sketched below).
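    • The strictest baseline for such deletion, retraining from scratch on the surviving records so the model is indistinguishable from one that never saw the deleted data, can be made concrete with a deliberately naive sketch. The code below is purely illustrative: Garg et al. give formal cryptographic definitions rather than an implementation, the class and method names are hypothetical, and scikit-learn is used only for concreteness.

```python
# Naive "exact deletion" baseline: after every deletion request the model
# is retrained from scratch on the remaining records, so the result is
# identical to a model that never saw the deleted data. Illustrative only;
# this is not the construction from Garg et al.
from sklearn.linear_model import LogisticRegression


class ForgettingClassifier:
    def __init__(self):
        self.records = {}  # record_id -> (feature_vector, label)
        self.model = None

    def add(self, record_id, features, label):
        self.records[record_id] = (features, label)
        self._retrain()

    def delete(self, record_id):
        # Honor a right-to-be-forgotten request: drop the record and
        # retrain so no trace of it remains in the model parameters.
        self.records.pop(record_id, None)
        self._retrain()

    def _retrain(self):
        if not self.records:
            self.model = None
            return
        xs, ys = zip(*self.records.values())
        if len(set(ys)) < 2:
            self.model = None  # need at least two classes to fit
            return
        # Refit from scratch on the surviving data only.
        self.model = LogisticRegression().fit(list(xs), list(ys))
```

The evident cost of this baseline, one full retraining per deletion request, is part of what motivates formal definitions against which cheaper deletion schemes can be certified as equivalent.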
  • Hacker, P. (2018). Teaching fairness to artificial intelligence: existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Review, 55, 1143–1186. https://ssrn.com/abstract=3164973
    • This article considers the discriminatory threat imposed by AI applications against protected groups in the EU legal context and argues that this raises complex questions for labor laws in the EU. As explained, existing anti-discrimination laws are not adapted to AI decision-making and issues of proof in the AI context. The article offers a vision of data protection and anti-discrimination law that enforces fairness in algorithmic decision-making. 
  • Humerick, M. (2018). Taking AI personally: How the EU must learn to balance the interests of personal data privacy & artificial intelligence. Santa Clara High Technology Law Journal, 34(4), 393-418. https://digitalcommons.law.scu.edu/chtlj/vol34/iss4/3
    • This article considers the influx of AI technology use and its relation to consumer data privacy and protection. The article observes that the EU maintains the most comprehensive data protection regulation in the world but argues that such strong regulation could discourage future development and innovation of AI in the EU. Unless these issues are addressed, the author questions how future AI developments will thrive in the EU without infringing the provisions of the GDPR.
  • Janssen, M., et al. (2020). Data governance: Organizing data for trustworthy Artificial Intelligence. Government Information Quarterly, 37(3), 101493. https://doi.org/10.1016/j.giq.2020.101493
    • This paper discusses how data governance is the foundation of trustworthy AI and provides a framework for strong data governance. The framework, which is based on 13 design principles, encourages stewardship of data, an understanding of the associated risks, models for trusted data sharing between organizations, and stewardship of the algorithms using the data.
  • Kullmann, M. (2018). Platform work, algorithmic decision-making, and EU gender equality law. International Journal of Comparative Labour Law and Industrial Relations, 34(1), 1-21. https://ssrn.com/abstract=3195728
    • This article considers the problems that confront workers in the digital economy and examines the role played by algorithms and their biases in employment and hiring processes. The author observes the existing gender disparity in hiring and salary decisions, and questions whether existing EU equality laws are sufficient for protection of workers when employment-related decisions are made by an algorithm. 
  • Lewis, D., et al. (2020). Global Challenges in the Standardization of Ethics for Trustworthy AI. Journal of ICT Standardization, 8(2), 123-150. https://doi.org/10.13052/jicts2245-800X.823
    • This study analyzes recent proposals for trustworthy AI from the OECD, the EU, and the IEEE according to their scope and the normative language they use. The authors propose a minimal model to define standards for trustworthy AI, which further standards can build upon. Finally, they examine the current AI standardization initiative taking place at ISO/IEC JTC1 based on their minimal model. 
  • McMillan, D., & Brown, B. (2019). Against ethical AI. In Proceedings of the Halfway to the Future Symposium 2019 (pp. 1-3).
    • This paper considers the EU guidelines on ethical and trustworthy AI in order to argue against the focus placed on them and on similar principles, guidelines, and manifestos developed for AI. The authors examine how the AI industry and related academia engage in ‘ethics washing’ and how the development of guidelines may not be as beneficial as commonly perceived.
  • Mercer, S. T. (2020). The limitations of European data protection as a model for global privacy regulation. AJIL Unbound, 114, 20-25. https://doi.org/10.1017/aju.2019.83
    • This article pushes back against the prevailing narrative that EU-style data regulations are becoming a global standard. The author argues that as of 2020, it is too early to determine whether the EU is truly the winner in the race to influence global data protection and privacy law. The author points toward the US as a potential competitor and expects the US regime to differ in its regulatory approach. 
  • Mitrou, L. (2018). Data protection, artificial intelligence and cognitive services: Is the General Data Protection Regulation (GDPR) ‘artificial intelligence-proof’? SSRN. http://dx.doi.org/10.2139/ssrn.3386914
    • This paper provides a detailed overview of the EU’s General Data Protection Regulation provisions in the context of recent AI technologies. The author observes the changes that AI has made to the processing of personal information and questions whether the current regulations are ‘AI-proof’ and whether new protections and rules need to be implemented in the face of advanced AI technology. 
  • Renda, A. (2019).* Artificial intelligence: Ethics, governance and policy challenges. Centre for European Policy Studies Task Force.
    • This article summarizes the results of the Centre for European Policy Studies (CEPS) report on AI in 2018. The report finds that the EU is uniquely positioned to lead the globe in its effort to develop and implement responsible and sustainable AI. The report calls upon member states to focus their agendas on leveraging this advantage to foster further development in the field. The article proposes forty-four recommendations to guide future policy and investment decisions related to the design of lawful, responsible and sustainable AI for the future.
  • Renda, A. (2018).* Ethics, algorithms and self-driving cars – a CSI of the ‘trolley problem’. CEPS Policy Insight, (2). https://ssrn.com/abstract=3131522
    • This article re-examines the trolley-problem dilemma and argues against the view that it is of little use as an analogue for automated driving. The author investigates the dilemma to reveal a number of neglected policy issues that evade public discussion. The article also argues that current legal frameworks are unable to account for these issues, and that these ethical and policy dilemmas must be addressed in order to appropriately overhaul the relevant public policies in the European context.
  • Shneiderman, B. (2020). Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy Human-Centered AI systems. ACM Transactions on Interactive Intelligent Systems, 10(4), 1-31. https://doi.org/10.1145/3419764
    • This article provides 15 recommendations at three different levels of governance to help bridge the gap between reliable principles for human-centered AI and current governance of this technology. The three main recommendations for policymakers are: (1) use sound software engineering practices, (2) build a safety culture through business management strategies, and (3) include independent oversight to certify different trustworthiness properties. 
  • Smuha, N. A. (2019). The EU approach to ethics guidelines for trustworthy artificial intelligence. CRi-Computer Law Review International, 20(4), 97-106. https://ssrn.com/abstract=3443537
    • This article reviews the AI ethics guidelines offered by the High-Level Expert Group on AI (AI HLEG) established by the European Commission. The author explicates the context, aim and purpose of the guidelines, while considering key issues of AI ethics and governance. The author concludes by positioning the guidelines in an international context and suggests future goals. 
  • Straus, J. (2021). Artificial intelligence – Challenges and chances for Europe. European Review, 29(1), 142-158. https://doi.org/10.1017/S1062798720001106
    • This review examines the current guidelines proposed by the EU for trustworthy AI. The author argues that the guidelines are important, but that because they only help AI solutions gain acceptance in society, they are merely a first step toward Europe becoming a leader in AI. The author concludes that the EU’s claim that these guidelines will make Europe an AI leader is, at present, unfounded.
  • Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751-752. https://doi.org/10.1126/science.aat5991
    • This article elaborates on the benefits that AI can offer from a European perspective. The authors argue that regulation alone is not sufficient for the development of ‘good’ AI and that ethics must play a role in the design of technologies, complementing existing regulation to balance the risks and rewards of AI capabilities. The authors argue for the critical importance of human-centric AI with a view to solving major societal problems.
  • Toreini, E., et al. (2020). The relationship between trust in AI and trustworthy machine learning technologies. In M. Hildebrandt & C. Castillo (Eds.), Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 272-283). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372834
    • Toreini et al. describe a systematic approach to aligning notions of trust from the social sciences with notions of trust in services and products that use AI. The authors start with the Ability, Benevolence, Integrity, and Predictability framework, which they map to the four machine learning trustworthiness qualities of Fairness, Explainability, Auditability, and Safety. Finally, they discuss how their framework relates to existing AI frameworks produced by various governments.
  • Treleaven, P., et al. (2019). Algorithms: Law and regulation. Computer, 52(2), 32-40. https://ieeexplore.ieee.org/document/8672418
    • This article surveys the challenges of regulating algorithms through legal frameworks and examines their current legal status. The authors focus on a variety of algorithmic applications and investigate the associated ethical, legal, and technical problems of each, proposing a variety of solutions and suggestions for regulation where they deem it necessary.
  • Villaronga, E., et al. (2018). Humans forget, machines remember: Artificial intelligence and the right to be forgotten. Computer Law & Security Review, 34(2), 304-313. https://doi.org/10.1016/j.clsr.2017.08.007
    • This article explains ‘the right to be forgotten’ and its application to AI, transparency and EU privacy law. The authors consider legal and technical issues of data deletion requirements and regulations to conclude that it may not currently be possible to achieve the legal aims of the ‘right to be forgotten’ in the context of AI applications. 
  • Wachter, S., et al. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76-99. https://doi.org/10.1093/idpl/ipx005
    • This article considers the state of AI decision-making in the EU after the implementation of the GDPR, which is often claimed to stipulate a legal mandate for a ‘right to explanation’ of all automated decisions. The authors question the existence and feasibility of such a right in current EU law and argue that the regulation’s language amounts to no more than a ‘right to be informed’. The authors argue that the GDPR lacks the explicit rights and precise language necessary to protect citizens from problematic automated decision-making.

V. Cases & Applications

Chapter 35. Ethics of Artificial Intelligence in Transport (Bryant Walker Smith)⬆︎

  • Alawadhi, M., et al. (2020). Review and analysis of the importance of autonomous vehicles liability: A systematic literature review. International Journal of System Assurance Engineering and Management, 11, 1227-1249. https://doi.org/10.1007/s13198-020-00978-9
    • This article provides a systematic review of scholarship focused on automated vehicle (AV) liability. The authors note that the greatest emphasis on this topic is found in the fields of law and transport, and that the literature concentrates on large, developed economies. They conclude that liability depends on many situated elements, including the level of vehicle autonomy and environmental externalities.
  • Andersen, K. E., et al. (2017). Do we blindly trust self-driving cars? In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-robot Interaction (pp. 67-68). Association for Computing Machinery. https://doi.org/10.1145/3029798.3038428
    • This paper reports the findings of a study examining the role of trust in the adoption of artificially intelligent technologies. In a study of simulated autonomous driving scenarios, researchers observed that passengers were often too trusting of AI in cases of emergency where human intervention would have been necessary to prevent harm. 
  • Baumann, M. F., et al. (2019). Taking responsibility: A responsible research and innovation (RRI) perspective on insurance issues of semi-autonomous driving. Transportation Research Part A: Policy and Practice, 124, 557-572. https://doi.org/10.1016/J.TRA.2018.05.004
    • Baumann and colleagues argue that the responsible research and innovation (RRI) framework is useful for insurance companies and policymakers navigating the emergence of a market for semi-autonomous vehicles. Their approach encourages awareness among decision makers of the potential “ethical, societal, or historical” impacts of technology for stakeholders. RRI thus helps to surface and specify how insurers can and should encourage ethical innovation.
  • Bonnefon, J. F., et al. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573-1576. https://doi.org/10.1126/science.aaf2654
    • This study considers the social dilemmas that arise in autonomous driving accident scenarios and observes the effect of pre-programmed accident decisions on passenger choices in automated vehicles. In six studies, participants favored self-sacrificing utilitarian AVs, but admitted that they would not ride in them. Participants were also shown to disapprove of any regulation that enforced a utilitarian regime for AV algorithms, leading the researchers to conclude that such regulation could paradoxically increase vehicular fatalities by discouraging the adoption of safer autonomous technology.
  • Borenstein, J., et al. (2019). Self-driving cars and engineering ethics: The need for a system-level analysis. Science and Engineering Ethics, 25, 383–398. https://doi.org/10.1007/s11948-017-0006-0
    • This paper argues that individual-level analyses are insufficient for determining the impacts of AI on human life and society. The authors argue that current ethical discussions on transportation and automation must be accompanied by a system-level analysis that considers the interactions between vehicles and existing transportation systems. The authors observe the need to analyze the instantaneous and coordinated decisions of individual cars, groups of cars, and other technologies, and worry that a rush toward AVs without coordinated system-level policy and legal consideration could compromise safety and consumer autonomy.
  • Coca-Vila, I. (2018). Self-driving cars in dilemmatic situations: An approach based on the theory of justification in criminal law. Criminal Law and Philosophy, 12(1), 59-82. https://doi.org/10.1007/s11572-017-9411-3
    • This article considers dilemmatic decisions in the context of automated driving and draws from the logic of criminal law to argue for a deontological approach in algorithmic decision-making. The author argues against the common utilitarian logic on the grounds that the maximization of social utility cannot justify negative interference in a person’s legal sphere under a legal system that recognizes individualistic freedoms, rights and responsibilities. 
  • Contissa, G., et al. (2017). The ethical knob: Ethically-customizable automated vehicles and the law. Artificial Intelligence and Law, 25(3), 365-378. https://doi.org/10.1007/s10506-017-9211-z
    • This article re-considers the notion of pre-programmed AVs by theorizing an ‘ethical knob’ that enables users to customize their vehicle and choose among the moral principles the vehicle would act upon in accident scenarios. The vehicle would thus be trusted to act on the user’s decision, and the manufacturer would be expected to program the vehicle accordingly. The article subsequently addresses the issues of ethics, law, and liability that would arise from such a proposal.
  • Consilvio, A., et al. (2019). On exploring the potentialities of autonomous vehicles in urban spatial planning. In 2019 6th International Conference on Models and Technologies for Intelligent Transportation Systems (MT-ITS) (pp. 1-7). Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/MTITS.2019.8883388
    • This conference paper by Consilvio and colleagues focuses on how the introduction of autonomous vehicles (AVs) opens opportunities to optimize urban road networks and free space for more sustainable forms of “soft mobility” like walking and cycling. They frame this as a network design problem and conduct a case study to show how non-essential nodes in AV road networks can be re-purposed for sustainable and active transport.
  • Caro, R. A. (1974).* The power broker: Robert Moses and the fall of New York. Alfred A. Knopf.
    • This biography recounts the career of Robert Moses – a prominent public official in the urban planning and development of New York City. As an urban developer, Moses played a significant role in shaping the New York metropolitan area and affected many lives. Caro reveals how Moses’s planning led to an arid urban landscape full of public housing failures and barriers to humane living, which (among other things) led to his downfall. In spite of these failures, Moses was able to accomplish much of his ‘ideal’ urban plan, the effects of which are still felt in New York today.
  • Dawid, H., & Muehlheusser, G. (2019). Smart products: Liability, investments in product safety, and the timing of market introduction (CESifo Working Paper No. 7673). CESifo Group. https://www.econstor.eu/bitstream/10419/201899/1/cesifo1_wp7673.pdf 
    • In this working paper, Dawid and Muehlheusser develop an economic model to understand how product liability impacts innovation. They demonstrate policy trade-offs related to the pace of innovation and the level of safety that result from placing more stringent liability onto autonomous vehicle (AV) manufacturers. They conclude that safety regulation is a better option for overall social welfare when compared to the expansion of liability regimes.
  • Dietrich, M., & Weisswange, T. H. (2019). Distributive justice as an ethical principle for autonomous vehicle behavior beyond hazard scenarios. Ethics and Information Technology, 21(3), 227-239. https://doi.org/10.1007/s10676-019-09504-3
    • In this paper, Dietrich and Weisswange propose that ethics should be a primary consideration for every algorithmic decision made by an autonomous vehicle (AV), rather than a consideration only for crash scenarios. Given that an AV’s actions affect other road users, the authors argue that a framework which mobilizes distributive justice principles is ultimately more desirable than an automated “egoistic decision maker.”
  • Douma, F. (2004).* Using ITS to better serve diverse populations. Minnesota Department of Transportation Research Services. https://conservancy.umn.edu/handle/11299/1138
    • This report investigates how intelligent transportation systems (ITS) can serve the needs of populations that are otherwise unaddressed by conventional transportation planning. The report observes the current state of transport planning as centralized around the single car and acknowledges that this mode of transport is insufficient for diverse populations where cars may be inaccessible. The report presents demographic and survey data on those who would benefit most from ITS applications.
  • Epting, S. (2019). Transportation planning for automated vehicles—or automated vehicles for transportation planning? Essays in Philosophy, 20(2), 189-205. https://doi.org/10.7710/1526-0569.1635
    • This paper considers the trend of transport planning that centers itself around automated vehicles (AVs) rather than incorporating them into existing mobility goals. The author observes that self-driving technology is often perceived as a solution for all urban mobility problems but argues that this view often leads to planning that prioritizes AVs rather than planning that uses AVs as a means to achieve broader transit goals. As argued, transport developers should instead focus on planning that is human-centric and aims at sustainability and transportation justice.
  • Ethics Commission on Automated and Connected Driving. (2017).* Automated and connected driving. German Federal Ministry of Transport and Digital Infrastructure. https://www.bmvi.de/SharedDocs/EN/publications/report-ethics-commission.html
    • This publication by the German Federal Ministry of Transport and Digital Infrastructure provides a general overview of the ethical and legal problems of automated and connected driving. The report offers twenty guidelines for automated driving, considers the ethical and legal policy decisions that must be made when programming autonomous driving software, and asks how this can be accomplished without displacing the human from the center of AI legal regimes.
  • Evans, K., et al. (2020). Ethical decision making in autonomous vehicles: The AV ethics project. Science and Engineering Ethics, 26, 3285–3312. https://doi.org/10.1007/s11948-020-00272-8 
    • In this article, Evans and colleagues propose ‘Ethical Valence Theory’ for decision making in autonomous vehicles (AVs). Within this framework, they argue that one can quantify and hierarchize individual road users’ moral claims relative to an AV to mitigate unethical outcomes in the case of a crash. The piece describes how different road users hold distinct moral claims to safety (for example, pedestrians and passengers hold different claims) before outlining an “ethical deliberation algorithm” that can make decisions based upon this hierarchy.
  • Faulhaber, A., et al. (2019). Human decisions in moral dilemmas are largely described by utilitarianism: Virtual car driving study provides guidelines for autonomous driving vehicles. Science and Engineering Ethics, 25(2), 399-418. https://doi.org/10.1007/s11948-018-0020-x
    • This article reports a study that subjected participants to a variety of trolley dilemmas in simulated driving environments. The study observed that participants generally decided according to a utilitarian principle that minimized overall harm for all parties. The researchers argue that the study and its results can provide a justified basis for mandatory utilitarian regimes in all autonomous vehicles, as opposed to customizable ethical settings, which could yield greater harms in accident scenarios.
  • Himmelreich, J. (2018). Never mind the trolley: The ethics of autonomous vehicles in mundane situations. Ethical Theory and Moral Practice, 21(3), 669-684. https://doi.org/10.1007/S10677-018-9896-4
    • Countering the centrality of the “Trolley Problem” in autonomous vehicle (AV) ethics, Himmelreich argues that the more banal, operational aspects of AVs are more relevant due to the higher granularity of ethical detail required and the enormous scale of everyday ethical considerations. The author begins with a critique of the Trolley Problem as both exceptional and irrelevant before detailing everyday ethical AV challenges like balancing safety against social outcomes and legal liability.
  • Kalra, N., & Groves, D. G. (2017).* The enemy of good: Estimating the cost of waiting for nearly perfect automated vehicles. RAND Corporation.
    • This report focuses on the risks and rewards of autonomous vehicles and asks how safe autonomous vehicles must be before they are deployed for consumer use. The authors use a RAND model of automated vehicle safety to compare vehicular fatalities when self-driving vehicles are cleared for use at various levels of capability relative to human ability. The report concludes that waiting for the technology to approach perfection leads to higher fatalities and greater human costs than deploying vehicles once they are moderately safer than human drivers.
  • Keeling, G. (2020). Why trolley problems matter for the ethics of automated vehicles. Science and Engineering Ethics, 26, 293-307. https://doi.org/10.1007/s11948-019-00096-1
    • With this article, Keeling argues in favor of incorporating trolley scenarios into ethical considerations related to autonomous vehicles (AVs). The paper is structured around the author’s refutation of four common arguments against the usefulness of trolley problems: that they are not likely scenarios; that trolley problems ignore salient moral aspects of likely crash scenarios; that trolley scenarios impose a top-down solution onto decision-making algorithms that are shaped from the bottom up; and that trolley problems ask the wrong questions about the moral values that should be programmed into AVs.
  • Millán-Blanquel, L., et al. (2020). Ethical considerations for a decision making system for autonomous vehicles during an inevitable collision. In 2020 28th Mediterranean Conference on Control and Automation (MED) (pp. 514-519). Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/MED48518.2020.9183263
    • The authors of this conference paper put forth a proposal to “solve the issue” of ethics in autonomous vehicle (AV) decision-making. They outline a pre-programmed AV system with six settings based upon formal ethical theories. They apply these six ethical settings to eight different scenarios, providing an overview of how humans and objects are valued under each setting. Through this framework, the authors propose that AV ethics should be chosen by AV users, within the bounds of the law.
  • Millard-Ball, A. (2018). Pedestrians, autonomous vehicles, and cities. Journal of Planning Education and Research, 38(1), 6-12. https://doi.org/10.1177/0739456X16675674
    • This article considers the interactions between autonomous vehicles and pedestrians in crosswalk yield scenarios. The author argues (as suggested by a model) that the risk-averse nature of autonomous vehicles will confer impunity to pedestrians, which may cause a transformation from automobile-oriented urban neighborhoods to pedestrian-oriented ones. The author notes that with the increased desirability of walking as a form of transportation in pedestrian-oriented cities, the advantages of autonomous driving systems could become questionable.
  • Nyholm, S., & Smids, J. (2018). Automated cars meet human drivers: Responsible human-robot coordination and the ethics of mixed traffic. Ethics and Information Technology. https://doi.org/10.1007/s10676-018-9445-9
    • This paper discusses issues of ethics and responsibility that arise from coordination problems in mixed traffic conditions between human and self-driven vehicles. The authors compare human and AI driving patterns to argue that there must be more focus on the ethics of mixed traffic and human-AI interaction. 
  • Papa, E., & Ferreira, A. (2018). Sustainable accessibility and the implementation of automated vehicles: Identifying critical decisions. Urban Science, 2(1), 5. https://doi.org/10.3390/urbansci2010005
    • This article argues that there are a variety of ways in which AVs can impose negative effects on everyday life, and that these must be heavily scrutinized. The authors argue that AVs have the potential to seriously aggravate accessibility issues, and they identify critical decisions that must be made in order to capitalize on the possible accessibility benefits (rather than costs) yielded by AI.
  • Rothstein, R. (2017).* The color of law: A forgotten history of how our government segregated America. Liveright Publishing.
    • This book provides an analysis of contemporary racial segregation throughout American neighborhoods and argues that this segregation is the result of deliberate government policy rather than the commonly cited factors of wealth and societal prejudice. Rothstein argues that these policies have systematically discriminated against Black communities, with direct effects on the current wealth and education gaps between Black and white Americans.
  • Rhim, J., et al. (2020). Human moral reasoning types in autonomous vehicle moral dilemma: A cross-cultural comparison of Korea and Canada. Computers in Human Behavior, 102, 39-56. https://doi.org/10.1016/J.CHB.2019.08.010
    • Rhim and colleagues provide a cultural comparison of how autonomous vehicle (AV) decision-making aligns with different moral values in Korea (a collectivist culture) and Canada (an individualist culture). Using content from in-depth interviews, the authors identified 32 moral codes and used a k-means cluster analysis to derive three moral reasoning types. They conclude that, given the differences in the proportion of these types among Korean and Canadian participants, the consideration of morality in AV regulation requires attentiveness to cultural sensitivity and pluralism.
  • Ryan, M. (2019). The future of transportation: Ethical, legal, social and economic impacts of self-driving vehicles in the year 2025. Science and Engineering Ethics. https://doi.org/10.1007/s11948-019-00130-2
    • This article provides a forward-looking analysis of the development of automated vehicles (AVs) between 2019 and 2025. The author extrapolates the current trajectory of AV technology and policy development to construct a vision of the likely future in 2025. The paper considers the legal, social, and economic implications of AV deployment, including privacy, liability, data governance, and safety. The author intends to show how policymakers’ current actions will affect the development of AVs in the future.
  • SAE International. (2016).* Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. https://www.sae.org/standards/content/j3016_201806/
    • This document explains autonomous driving systems that perform ‘dynamic driving tasks’ and provides a full taxonomy of relevant definitions and categories of automated driving ranging from no automation (level 0) to full automation (level 5). The terms provided are intended to be used across the autonomous driving industry to maintain coherence and consistency when referring to driving systems. 
  • Smith, B. W. (2017).* How governments can promote automated driving. New Mexico Law Review, 47(1), 99-138. http://ssrn.com/abstract=2749375
    • This article recognizes the common desire among governments to accelerate the development and deployment of automated driving technologies in their respective jurisdictions, and provides steps that can be taken by governments to encourage this process. The author argues that governments must do more than pass ‘autonomous driving laws’ and should instead take a nuanced approach that recognizes the various technologies, applications and applicable laws that apply to autonomous vehicles. 
  • Smith, B. W. (2015).* Regulation and the risk of inaction. In M. Maurer, J. Gerdes, B. Lenz & H. Winner (Eds.), Autonomes Fahren (pp. 593-609). Springer.
    • This chapter considers how risk is allocated under conditions of uncertainty, and who determines this allocation, in the context of autonomous driving. The author focuses on the role that legislatures, administrative agencies, and courts play in developing the relevant rules, regulations, or verdicts, and proposes eight strategies that can serve as a meta-regulation of these processes.
  • Sparrow, R., & Howard, M. (2017). When human beings are like drunk robots: Driverless vehicles, ethics, and the future of transport. Transportation Research Part C: Emerging Technologies, 80, 206-215. https://doi.org/10.1016/j.trc.2017.04.014
    • This article pushes back against the prevailing narrative that autonomous vehicles will save lives, observing that many automated systems depend on human supervision, which produces more dangerous outcomes than anticipated. Once vehicles become fully autonomous, however, the authors argue that manual driving will no longer be morally permissible.
  • Taeihagh, A., & Lim, H. S. M. (2019). Governing autonomous vehicles: Emerging responses for safety, liability, privacy, cybersecurity, and industry risks. Transport Reviews, 39(1), 103-128. https://doi.org/10.1080/01441647.2018.1494640
    • This article assesses the risks of automated vehicles and the solutions available to governments for addressing them. The authors conclude that governments have largely avoided stringent, legally binding measures in an effort to encourage future AI development. They provide data and analysis from the US, UK, and Germany to observe that while these countries have taken some steps toward legislation, most others have not implemented any specific strategy that acknowledges the issues presented by AI.
  • Uniform Law Commission. (2019).* Uniform automated operation of vehicles act. https://www.uniformlaws.org/committees/communityhome?CommunityKey=4e70cf8e-a3f4-4c55-9d27-fb3e2ab241d6
    • This is a proposed legislative document concerning the regulation and operation of autonomous vehicles. The act covers the deployment and licensing process of automated vehicles on public roads and attempts to adapt existing US vehicle codes to accommodate this deployment. The act also stresses the need for a legal entity to address issues of vehicle licensing, ownership, liability, and responsibility.
  • United Nations Global Forum for Road Traffic Safety. (2018).* Resolution on the deployment of highly and fully automated vehicles in road traffic. https://undocs.org/pdf?symbol=en/ECE/TRANS/WP.1/2018/4/REV.3
    • This is a UN resolution that is dedicated to road safety and the safe deployment of self-driving technologies on public roads. The resolution is not legally binding but intended to serve as a guide for nations dealing with the implementation of autonomous technologies. It offers recommendations to ensure safe interaction between autonomous and conventional driving technology. 
  • United States Department of Transportation. (2018).* Preparing for the future of transportation: Automated vehicles 3.0. https://www.transportation.gov/av/3
    • This is the third iteration of a report developed by the US Department of Transportation (DOT) which is intended to highlight the DOT’s interest in promoting safe, reliable and cost-effective deployment of automated technologies into various modes of surface transportation. The report includes six principles to guide policy and five strategies for implementation based on the principles. 
  • Wolkenstein, A. (2018). What has the trolley dilemma ever done for us (and what will it do in the future)? On some recent debates about the ethics of self-driving cars. Ethics and Information Technology, 20(3), 163-173. https://doi.org/10.1007/s10676-018-9456-6
    • This article considers how the trolley problem is often cited in literature and public debates related to autonomous vehicles as a source of practical guidance on AI ethics for self-driving cars. Through an analysis of relevant sources, the author argues that although the philosophical considerations raised by the trolley problem may be theoretically worthwhile, the trolley problem is ultimately unhelpful for programming and passing legislation on automated driving technologies.

Chapter 36. The Case for Ethical AI in the Military (Jai Galliott and Jason Scholz)⬆︎

  • Arkin, R. (2009).* Governing lethal behavior in autonomous robots. CRC Press.
    • This book argues in favor of, and presents a framework for, the implementation of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system, such that the system adheres to the Laws of War and Rules of Engagement.
  • Arkin, R. C. (2010). The case for ethical autonomy in unmanned systems. Journal of Military Ethics, 9(4), 332-341.
    • This article appeals to ongoing and foreseeable technological advances, and to assessments of human performance in warfare, to argue in favor of ethical autonomy for lethal autonomous unmanned systems. Beyond their capacity for autonomy, the article argues that these systems may ultimately be capable of performing more ethically on the battlefield than human soldiers.
  • Awad, E., et al. (2018).* The moral machine experiment. Nature, 563(7729), 59-64.
    • This article addresses concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide them. The authors use the Moral Machine, an online experimental platform, to gather large-scale data on public moral preferences, which they analyze to recommend how machine decision-making should be determined.
  • Enemark, C. (2013). Armed drones and the ethics of war: Military virtue in a post-heroic age. Routledge.
    • This book assesses the ethical implications of using armed unmanned aerial vehicles in contemporary conflicts, analyzing them in the context of ethical principles intended to guard against unjust increases in the incidence and lethality of armed conflict. The book weighs evidence indicating that the use of armed drones is to be welcomed as an ethically superior mode of warfare against the argument that continued and increased use may ultimately do more harm than good.
  • Enemark, C. (2019). Drones, risk, and moral injury. Critical Military Studies, 5(2), 150–167. https://doi.org/10.1080/23337486.2017.1384979
    • This article frames drone operators as moral agents and assesses the possibility, given recent evidence, that drone violence can cause “moral injury” to the operator. This moral injury is said to occur when a drone killing, deemed permissible by others, betrays the operator’s personal standard of right conduct. The article concludes by arguing that if the risk of moral injury is real, it could serve as an additional ethical basis for restraining drone violence.
  • Galliott, J. (2015).* Military robots: Mapping the moral landscape. Ashgate Publishing.
    • This book uses the lens of the rise of drone warfare to explore and analyze the moral, political and social questions that have arisen in the contemporary era of warfare. Some examples of these issues are concerns of who may be legitimately targeted in warfare, the collateral effects of military weaponry and the methods of determining and dealing with violations of the laws of war. 
  • Galliott, J. (2016).* Defending Australia in the digital age: toward full spectrum defence. Defence Studies, 16(2), 157-175.
    • This paper argues that Australia’s defense strategy is incomplete, or at least inefficient, as the consequence of a crippling, geographically focused strategic dichotomy: the armed forces have historically been structured either to venture afar as a small part of a large coalition force or to combat small regional threats across land, sea, and air.
  • Galliott, J. (2017).* The limits of robotic solutions to human challenges in the land domain. Defence Studies, 17(4), 327-345.
    • This article explores the limits of robotic solutions to military problems, encompassing technical limitations and redundancy issues that point to the need to introduce a framework compatible with the adoption of robotics while preserving existing levels of human staffing.
  • Garcia, D. (2018). Lethal artificial intelligence and change: The future of international peace and security. International Studies Review, 20(2), 334–341.
    • This paper argues that the use of artificial intelligence in warfare can destabilize the international system. To cope with such changes, the author argues, states should adopt preventive governance frameworks based upon the precautionary principle of international law. To bolster this suggestion, the author examines twenty-two existing treaties established to control weapons systems that were deemed destabilizing and finds that all of them either prevented further militarization or made weaponization unlawful.
  • Horowitz, M. C. (2016). Public opinion and the politics of the killer robots debate. Research & Politics, 3(1). https://doi.org/10.1177/2053168015627183
    • This article uses survey data to shed light on American public opinion concerning autonomous weapons systems (AWS). Based on the collected data, the article argues that public support for AWS is highly contextual, in contradiction with existing research that suggests widespread opposition. For instance, the data shows that fear of other countries (or non-state actors) developing AWS increases American public support for their own government’s use of the technology significantly.
  • Leben, D. (2018).* Ethics for robots: How to design a moral algorithm. Routledge.
    • In this book, Leben describes and defends a framework for designing and evaluating the ethical algorithms that will govern autonomous machines. The book argues that these algorithms should be evaluated by how effectively they solve the problem of cooperation among self-interested organisms, and that they must therefore be tailored to the artificial subjects at hand rather than designed to simulate evolved psychological systems.
  • Lewis, D. A., et al. (2016). War-algorithm accountability. Harvard Law School Program on International Law and Armed Conflict. https://pilac.law.harvard.edu/waa
    • In this briefing report, the authors introduce a new concept, “war algorithms,” defined as any algorithm expressed in computer code and capable of operating in the context of armed conflict. The authors then argue that, in contrast to the more specific concept of autonomous weapons systems (AWS), war algorithms may fit within the existing regulatory system established by international law.
  • Lin, P., et al. (Eds.). (2017).* Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford University Press.
    • This book presents a wide and updated range of contemporary ethical issues facing the field of robotics, drawing on new use cases for robots and the challenges they raise to build a global picture of the contemporary questions in the field.
  • Lin, P., et al. (2008).* Autonomous military robotics: Risk, ethics, and design. California Polytechnic State University San Luis Obispo.
    • This paper presents and explores the issues that must be considered when responsibly introducing advanced technologies onto the battlefield and, eventually, into society. It makes the presumptive case for the use of autonomous military robotics and then considers the issues that come with this decision, including the need to address risk and ethics in the field, near- and far-term ethical and social issues, and recommendations for future work.
  • Young, K. L., & Carpenter, C. (2018). Does science fiction affect political fact? Yes and no: A survey experiment on “Killer Robots.” International Studies Quarterly, 62(3), 562–576. https://doi.org/10.1093/isq/sqy028
    • This paper explores the effect of popular culture on American attitudes toward autonomous weapons systems (AWS). The authors find that consumption of films with frightening depictions of armed artificial intelligence (AI) is associated with greater opposition to autonomous weapons. Furthermore, this “sci-fi literacy” effect is increased if survey respondents are first “primed” about popular culture–an effect the authors call the “sci-fi geek effect.” 
  • Maas, M. M. (2019). How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons. Contemporary Security Policy, 40(3), 285–311. https://doi.org/10.1080/13523260.2019.1576464
    • In this article, the author draws on lessons learned from arms control regimes in nuclear weapons to suggest how similar techniques may work for military artificial intelligence (AI). The author uses these parallels to argue that an “AI arms race” is not inevitable and can be managed directly through engagement with domestic political coalitions or indirectly by shaping norms top-down (through international regimes) or bottom-up (through epistemic communities).
  • McMahan, J. (2013). Killing by remote control: The ethics of an unmanned military. Oxford University Press.
    • This text explores the ethical permissibility of the use of unmanned mediated mechanisms in warfare. It includes discussions of broader issues such as the just war tradition and the ethics of war, as well as more specific issues surrounding the use of drones, such as the practice of “targeted killing” by the United States.
  • Nilsson, N. J. (2009). The quest for artificial intelligence. Cambridge University Press.
    • This book traces the history and development of AI, as well as explaining its contemporary uses and issues surrounding its implementation. 
  • Scholz, J., & Galliott, J. (2018).* Artificial intelligence in weapons: The moral imperative for minimally-just autonomy. US Air Force Journal of Indo-Pacific Affairs, 1(2), 57-67.
    • This article argues that, for military power to be lawful and morally just, future autonomous artificial intelligence (AI) systems must not commit humanitarian errors. The authors therefore propose a preventative form of minimally-just autonomy using artificial intelligence (MinAI), which would avert attacks on protected symbols and sites and recognize signals of surrender.
  • Sparrow, R. (2009).* Building a better WarBot: Ethical issues in the design of unmanned systems for military applications. Science and Engineering Ethics, 15(2), 169-187.
    • This article explores how designers of unmanned military systems must consider ethical, as well as operational, requirements and limits when developing such systems. The author presents two groups of such ethical issues, Building Safe Systems and Designing for the Law of Armed Conflict.
  • Sullins, J. P. (2006). When is a robot a moral agent? Machine Ethics, 151-160.
    • This paper argues that robots can be seen as real moral agents under specific conditions: first, the robot must be significantly autonomous from any programmers or operators of the machine; second, the robot’s behavior must exhibit ‘intention’; and finally, the robot must behave in a way that shows an understanding of responsibility to some other moral agent.
  • Umbrello, S., et al. (2020). The future of war: Could lethal autonomous weapons make conflict more ethical? AI & Society, 35(1), 273–282. https://doi.org/10.1007/s00146-019-00879-x
    • This paper weighs the arguments for and against the use of Lethal Autonomous Weapons (LAWs) through the lens of achieving more ethical warfare. The authors contend that the relatively low cost, the potential for “moral programming,” and the ability to remove human combatants from the line of fire constitute strong reasons for pursuing LAWs. However, the authors note several caveats: LAWs must have targeting and judgment systems equal to or superior to those of humans, and must embody moral programs that all parties agree upon.

Chapter 37. The Ethics of AI in Biomedical Research, Patient Care, and Public Health (Alessandro Blasimme and Effy Vayena)⬆︎

Biomedical Research

  • Blasimme, A., & Vayena, E. (2016). “Tailored-to-You”: Public Engagement and the Political Legitimation of Precision Medicine. Perspectives in Biology and Medicine, 59(2), 172-188.
    • This article outlines a detailed history of personalized medicine in its sociotechnical and legislative context in the United States, with a particular focus on the 2015 federal Precision Medicine Initiative. The authors emphasize the interplay between scientific and social factors, especially the importance of a “participatory ethos” and public engagement in building political support for innovative biomedical paradigms.   
  • Buruk, B., et al. (2020). A critical perspective on guidelines for responsible and trustworthy artificial intelligence. Medicine, Health Care, and Philosophy, 23, 387–399. https://doi.org/10.1007/s11019-020-09948-1 
    • This paper analyzes three sets of ethical guidelines for artificial intelligence and deep learning: the Montréal Declaration for Responsible Development of Artificial Intelligence, the Ethics Guidelines for Trustworthy AI, and the Asilomar Artificial Intelligence Principles. The paper then addresses whether those guidelines are sufficient given the ethical intricacies stemming from the introduction of deep learning in medicine. The authors argue that the guidelines fail to offer guidance for the ethical dilemmas that arise in everyday practice.
  • Ferryman, K., & Pitcan, M. (2018). Fairness in precision medicine. Data & Society. https://kennisopenbaarbestuur.nl/media/257243/datasociety_fairness_in_precision_medicine_feb2018.pdf
    • This is a qualitative empirical study of the views of stakeholders engaged in precision medicine regarding its risks of bias and its promise for the future. The study concludes that these stakeholders are hopeful about the promise of precision medicine yet concerned about the potential for bias.
  • Geneviève, L. D., et al. (2020). Structural racism in precision medicine: Leaving no one behind. BMC Medical Ethics, 21(1), 1-13.
    • This paper examines precision medicine through the lenses of structural racism and equity. The authors examine how systemic racism can shape precision medicine through the initial data generation processes, the data analytical processes, and the final implementation of models. They warn of the possibility that machine learning technologies will exacerbate these structural problems and offer a range of potential solutions at each step in the precision medicine process.
  • Hollister, B., & Bonham, V. L. (2018). Should electronic health record-derived social and behavioral data be used in precision medicine research? AMA Journal of Ethics, 20(9), 873-880.
    • This article explores the ethical and practical issues surrounding the inclusion of social and behavioral information from electronic health records in precision medicine research. The authors argue that this data is often inconsistently collected and of low quality, and that its sensitive nature presents a significant risk of patient harm if it is misused.
  • Ienca, M., et al. (2018).* Considerations for ethics review of big data health research: A scoping review. PLoS ONE, 13(10). https://doi.org/10.1371/journal.pone.0204937
    • The methodological novelty and computational complexity of big data health research raise novel challenges for ethics review. This paper reviews the literature to identify and map the major challenges that health-related big data poses for Ethics Review Committees. The findings suggest that while big data trends in biomedicine hold the potential for advancing clinical research, improving prevention, and optimizing healthcare delivery, several epistemic, scientific, and normative challenges need careful consideration.
  • Landry, L. G., et al. (2018).* Lack of diversity in genomic databases is a barrier to translating precision medicine research into practice. Health Affairs, 37(5), 780-785.
    • Precision medicine often uses molecular biomarkers to assess patients’ prognosis and therapeutic response more precisely. This paper examines which populations were included in studies using two public genomic databases and finds significantly fewer studies of African, Latin American, and Asian ancestral populations compared to European populations. While the number of genomic research studies that include non-European populations is improving, the overall numbers are still low, representing a potential source of inequities in precision medicine applications.
  • Park, S. H., et al. (2019). Ethical challenges regarding artificial intelligence in medicine from the perspective of scientific editing and peer review. Science Editing. https://doi.org/10.6087/kcse.164
    • This review article highlights several aspects of research studies on artificial intelligence (AI) in medicine that require additional transparency and explain why additional transparency is needed. Transparency regarding training data, test data and results, interpretation of study results, and the sharing of algorithms and data are major areas for guaranteeing ethical standards in AI research.
  • Vayena, E., & Blasimme, A. (2017). Biomedical big data: new models of control over access, use and governance. Journal of Bioethical Inquiry, 14(4), 501-513.
    • This article challenges the notion that the collection of biomedical big data necessitates a loss of individual control. Rather, it proposes three approaches to empowering the individual: (1) data portability rights, (2) new mechanisms of informed consent, and (3) new schemes of participatory governance.
  • Vayena, E., & Blasimme, A. (2018).* Health research with big data: time for systemic oversight. The Journal of Law, Medicine & Ethics, 46(1), 119-129.
    • This article proposes a new paradigm for the ethical oversight of biomedical research in alignment with the ubiquity of big data as opposed to suggesting updates and fixes for existing models. This paradigm, systemic oversight, is based on six core features: (1) adaptivity, (2) flexibility, (3) monitoring, (4) responsiveness, (5) reflexivity, and (6) inclusiveness.
  • Vollmer, S., et al. (2020). Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness. BMJ, 368. https://doi.org/10.1136/bmj.l6927
    • Structured around a series of twenty “critical questions” to be asked during the development process, this article explores issues of transparency, replicability, ethics, and effectiveness in the implementation of AI in clinical medicine. The authors emphasize the complex sociotechnical context into which these algorithms are implemented and discuss necessary requirements for AI to be rigorously considered effective in clinical practice. 
  • Wiens, J., et al. (2019). Do no harm: A roadmap for responsible machine learning for health care. Nature Medicine, 25(9), 1337–1340. https://doi.org/10.1038/s41591-019-0548-6
    • This article engages with the issue of responsible machine learning in healthcare from the perspective of interdisciplinary model development and deployment teams. On the development side, the authors outline concerns related to selecting the right problems, developing clinically useful solutions, considering the proximal and distal ethical implications of such solutions, and evaluating the resulting models in rigorous and consistent ways. On the implementation side, they outline issues related to deployment, marketing, and results-reporting for these models. 

Clinical Medicine

  • Arnold, M. H. (2021). Teasing out artificial intelligence in medicine: An ethical critique of artificial intelligence and machine learning in medicine. Bioethical Inquiry, 18, 121–139. https://dx.doi.org/10.1007/s11673-020-10080-1 
    • This paper explores the ethical underpinnings of the introduction of artificial intelligence in medicine. It argues that the use of artificial intelligence in medicine will necessarily impact the role of physicians. Because of this, health practitioners should start engaging with the tensions between artificial intelligence and the medical ethical principles of beneficence, autonomy, and justice, in order to understand both the limits and the promises of artificial intelligence for their practice.
  • Bjerring, J. C., & Busch, J. (2020). Artificial intelligence and patient-centered decision-making. Philosophy & Technology. https://dx.doi.org/10.1007/s13347-019-00391-6 
    • This paper argues that the opacity of some artificial intelligence algorithms is incompatible with informed consent in medical decision-making. In particular, the authors claim that this type of “black-box medicine” cannot support informed decision-making based on shared information, shared deliberation, and shared mind between practitioner and patient.
  • Blasimme, A., & Vayena, E. (2016). Becoming partners, retaining autonomy: ethical considerations on the development of precision medicine. BMC Medical Ethics, 17(1), 67.
    • This article explores the challenge of engaging patients and their perspectives in the precision medicine clinical research process. The authors explore the normative construction of research participation and partnership, as well as tensions between individual and collective interests. They advocate for the concept of “respect for autonomous agents” (as opposed to autonomous action or choice) as a potential mechanism for resolving these ethical tensions. 
  • Blasimme, A., et al. (2019). Big data, precision medicine and private insurance: a delicate balancing act. Big Data & Society, 6(1). https://doi.org/10.1177/2053951719830111
    • Using national precision medicine initiatives as a case study, this article explores the tension between private insurers leveraging repositories of genetic and phenotypic data for economic gain and the utility of these databases as a public, scientific resource. Although the authors admit that information asymmetry between insurance companies and their policy-holders still carries risks of reduced research participation, adverse selection, and discrimination, they argue that a governance model underpinned by trustworthiness, openness, and evidence can balance these competing interests.
  • Canadian Association of Radiologists (CAR) Artificial Intelligence Working Group. (2019). Canadian Association of Radiologists white paper on ethical and legal issues related to artificial intelligence in radiology. Canadian Association of Radiologists’ Journal, 70(2), 107-118.
    • Radiology is positioned to lead the development and implementation of AI algorithms. This white paper from the Canadian Association of Radiologists provides a framework for studying the legal and ethical issues related to AI in medical imaging, including patient data (privacy, confidentiality, ownership, and sharing); algorithms (levels of autonomy, liability, and jurisprudence); practice (best practices and current legal frameworks); and opportunities in AI from the perspective of a universal health care system.
  • Challen, R., et al. (2019).* Artificial intelligence, bias and clinical safety. BMJ Quality & Safety, 28(3), 231-237.
    • This paper outlines a set of short-term and medium-term clinical safety issues raised by machine learning-enabled decision-making software. The framework is supported by a set of quality control questions designed to help clinical safety professionals and those developing ML systems identify areas of concern. The authors encourage rigorous testing of new ML systems through randomized controlled trials and comparison with existing practices.
  • Char, D. S., et al. (2018). Implementing machine learning in health care—Addressing ethical challenges. The New England Journal of Medicine, 378(11), 981-983.
    • This article discusses ethical challenges in the clinical implementation of machine learning systems. In addition to more “straightforward” ethical challenges such as bias and discrimination, the authors discuss “less obvious” risks, such as algorithms being incentivized toward high-profit care, providing excessive legitimacy to medically uncertain decisions, or undermining the clinical experience of physicians. They outline a call for reshaping both medical education and codes of medical ethics in light of these concerns. 
  • Chen, I. Y., et al. (2020). Treating health disparities with artificial intelligence. Nature Medicine, 26(1), 16-17.
    • This article argues that while substantial concerns exist about algorithms amplifying bias in medicine, algorithms may also play an important role in identifying and correcting disparities. The authors advocate for understandings of the ethics of AI in healthcare to extend beyond the question of algorithmic fairness, and toward better consideration of the systemic and socioeconomic context of health disparity. 
  • Chin-Yee, B., & Upshur, R. (2019). Three problems with big data and artificial intelligence in medicine. Perspectives in Biology and Medicine, 62(2), 237-256.
    • This paper engages with three important philosophical challenges facing “big data” and artificial intelligence in medicine. The authors outline an epistemological-ontological challenge related to the theory-ladenness of big data and measurement, an epistemological-logical challenge related to the inherent limits of algorithms, and a phenomenological challenge related to the irreducibility of human experience to quantitative data. They argue that the artificial intelligence in medicine movement must engage with its philosophical foundations.
  • de Miguel Beriain, I. (2020). Should we have a right to refuse diagnostics and treatment planning by artificial intelligence? Medicine, Health Care and Philosophy, 23, 247–252. https://dx.doi.org/10.1007/s11019-020-09939-2
    • This paper is a reply to Ploug & Holm (2020). It argues that patients should have the right to refuse AI-based diagnostics and treatment planning, but for reasons different from those offered by Ploug & Holm (2020). The paper argues, first, that the right to refuse such diagnostics and treatments is justified by social pluralism and individual autonomy; and second, that this right should be limited in three circumstances: (1) where granting it would lead a physician to harm the patient, (2) where honoring it would be too expensive, and (3) where exercising it would have harmful consequences for other patients.
  • Di Nucci, E. (2019). Should we be afraid of medical AI? Journal of Medical Ethics, 45(8), 556-558
    • This paper argues against the idea that AI represents a threat to patient autonomy. The paper states that such arguments often conflate machine learning with AI, miss machine learning’s potential for personalized medicine through big data, and fail to distinguish between evidence-based advice and decision-making within healthcare. Which tasks machine learning performs within healthcare is a crucial question, but care must be taken to distinguish between the different systems and the different tasks delegated to them.
  • Evans, E. L., & Whicher, D. (2018). What should oversight of clinical decision support systems look like? AMA Journal of Ethics, 20(9), 857-863.
    • This article engages with the use of clinical decision support systems in medicine, arguing that such systems should be subject to ethical and regulatory oversight above and beyond that of normal clinical practice. The authors outline a framework for the development and use of these systems with an emphasis on articulating proper conditions for use, including processes for monitoring data quality and algorithm performance, and protecting patient data. 
  • Ferretti, A., et al. (2018). Machine learning in medicine: Opening the new data protection black box. European Data Protection Law Review, 4(3), 320-332. https://doi.org/10.21552/edpl/2018/3/10
    • Certain approaches to artificial intelligence, notably deep learning, have drawn criticism due to their relative inscrutability to human understanding (the “black box” metaphor). This article examines how the black-box opacity of machine learning systems in medicine can be categorized in three forms: (1) lack of disclosure on whether automated decision-making is taking place, (2) epistemic opacity on how an AI system arrives at a specific outcome, and (3) explanatory opacity on why an AI system provides a specific outcome. Moreover, the article takes a solution-driven approach, discussing how each of the types of opacity identified can be addressed through the General Data Protection Regulation.
  • Ficuciello, F., et al. (2019). Autonomy in surgical robots and its meaningful human control. Paladyn, Journal of Behavioral Robotics, 10(1), 30-43.
    • Focusing on the lens of “Meaningful Human Control” (a term extended from autonomous weapons literature), this paper engages with ethical issues arising from increasing levels of autonomy in surgical robots. The authors review the potential for robotic assistance in minimally invasive surgery and microsurgery and discuss a theoretical framework for levels of surgical robot autonomy based around several levels of “Meaningful Human Control”, each with different burdens of human responsibility and oversight. 
  • Fiske, A., et al. (2019). Your robot therapist will see you now: Ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. Journal of Medical Internet Research, 21(5). https://doi.org/10.2196/13216
    • This paper assesses the ethical and social implications of translating AI applications into mental health care across the fields of psychiatry, psychology, and psychotherapy. Based on a literature search, the paper finds that AI is a promising approach across the field of mental health; however, further research is needed to address the broader ethical and societal concerns of these technologies and to negotiate best research and medical practices in innovative mental health care.
  • Gerke, S., et al. (2020). Ethical and legal aspects of ambient intelligence in hospitals. JAMA, 323(7), 601-602.
    • Ambient intelligence in hospitals is an emerging form of technology characterized by a constant awareness of activity in designated physical spaces (e.g. video capture to monitor for hand hygiene, patient movements, etc.), and of the use of that information to assist healthcare workers in delivering quality care. This commentary discusses potential issues these practices raise around patient privacy and reidentification risk, consent, and liability. 
  • Grote, T., & Berens, P. (2020). On the ethics of algorithmic decision-making in healthcare. Journal of Medical Ethics, 46, 205–211. http://dx.doi.org/10.1136/medethics-2019-105586 
    • This article argues that the use of machine learning algorithms in healthcare settings comes with trade-offs at the epistemic and normative level. Drawing on social epistemology and the literature on moral responsibility, the authors argue that the opacity of algorithms notably challenges the epistemic authority of health practitioners and could lead to vices such as paternalism and gullibility. 
  • He, J., et al. (2019). The practical implementation of artificial intelligence technologies in medicine. Nature Medicine, 25(1), 30-36.
    • This article explores practical issues that exist regarding the implementation of AI in clinical workflows, including data sharing difficulties, privacy issues, transparency problems, and concerns for patient safety. The authors argue that these practical issues are global in scope, and engage in a comprehensive comparative discussion of the medical AI regulatory environments in the United States, Europe, and China.
  • Ho, C., et al. (2019). Governance of automated image analysis and artificial intelligence analytics in healthcare. Clinical Radiology, 74(5), 329-337.
    • This paper discusses the nature of AI governance in biomedicine along with its limitations. It argues that radiologists must assume a more active role in propelling medicine into the digital age, including inquiring into the clinical and social value of AI, alleviating deficiencies in their technical knowledge to facilitate ethical evaluation, supporting the recognition and removal of biases, engaging the “black box” obstacle, and brokering a new social contract on informational use and security.
  • Karches, K. E. (2018). Against the iDoctor: why artificial intelligence should not replace physician judgment. Theoretical Medicine and Bioethics, 39, 91–110. https://dx.doi.org/10.1007/s11017-018-9442-3
    • This paper argues that artificial intelligence is not suited for clinical practice. Drawing on the works of Martin Heidegger and Hubert Dreyfus, the author argues that medical algorithms cannot be adapted to individual patients’ needs and thus cannot deliver effective clinical care.
  • Kiener, M. (2020). Artificial intelligence in medicine and the disclosure of risks. AI & Society. https://dx.doi.org/10.1007/s00146-020-01085-w 
    • This paper argues that the risks of employing opaque algorithms in medicine should be disclosed to patients by their health practitioners. The most notable risks are those created by cyberattacks, by systematic bias within the data used to build the algorithm, and by potential incongruence between the assumptions made by an algorithm and an individual patient’s background situation. The author argues that under certain conditions, these risks must be disclosed in order for the physician to obtain informed consent and meet their duty to warn patients about potentially harmful consequences.
  • Lamanna, C., & Byrne, L. (2018). Should artificial intelligence augment medical decision-making? The case for an autonomy algorithm. AMA Journal of Ethics, 20(9), 902-910.
    • The authors of this article put forward the concept of an “autonomy algorithm”, which might be used to integrate data from social media and electronic health records in order to estimate the likelihood that an incapable patient would have consented to a particular course of treatment. They explore ethical and practical issues in the construction and implementation of such an algorithm, and ultimately argue that it would likely be more reliable and less liable to bias than existing substitute decision-making methods. 
  • London, A. J. (2019). Artificial intelligence and black-box medical decisions: Accuracy versus explainability. Hastings Center Report, 49(1), 15-21. http://dx.doi.org/10.1002/hast.973
    • The article questions the position that opaque or unexplainable machine learning systems should be avoided in medicine. The author argues that imposing an explainability requirement on algorithms is inappropriate in the medical context: empirical findings are often relied upon in medicine without accepting the theories that aim to explain the underlying phenomena. For instance, we can evaluate the efficacy of a medical intervention without being able to understand or explain the mechanisms behind why such an intervention works. The focus in medicine should therefore be on the accuracy and reliability of machine learning systems rather than on their explainability.
  • Luxton, D. D. (2014). Recommendations for the ethical use and design of artificial intelligent care providers. Artificial Intelligence in Medicine, 62(1), 1-10.
    • This paper identifies and reviews ethical issues associated with artificial intelligent care providers in mental health care and other helping professions. It finds that existing ethics codes and practice guidelines do not presently consider the current or the future use of interactive artificial intelligent agents to assist and to potentially replace mental health care professionals. Specific recommendations are made for the development of ethical codes, guidelines, and the design of these systems.
  • Martinez-Martin, N., et al. (2018). Is It Ethical to Use Prognostic Estimates from Machine Learning to Treat Psychosis? AMA Journal of Ethics, 20(9), 804-811.
    • Building on the case study of a recent machine learning model for predicting prognosis for patients with psychosis, this article engages with the ethics of AI in psychiatry specifically, as well as the ethics of implementing innovation in clinical medicine more broadly. In particular, the authors examine the burdens that are placed upon physicians in understanding and engaging with novel technologies, and the challenges with communicating risks sufficiently to enable informed consent. 
  • McDougall, R. J. (2019). Computer knows best? The need for value-flexibility in medical AI. Journal of Medical Ethics, 45(3), 156–160. https://doi.org/10.1136/medethics-2018-105118
    • Focusing on the case study of IBM’s “Watson for Oncology”, this paper engages with issues related to shared decision-making in medical AI. The author argues that the use of fixed and covert value judgments underlying AI systems risks excluding patient perspectives and increasing medical paternalism. Conversely, she argues that AI systems can be “value-flexible” if developed to explicitly incorporate patient values and perspectives, and in doing so may remedy existing challenges in shared decision-making.  
  • Nebeker, C., et al. (2019). Building the case for actionable ethics in digital health research supported by artificial intelligence. BMC Medicine, 17(1), 137. https://doi.org/10.1186/s12916-019-1377-7
    • Placing a particular focus on direct-to-consumer digital therapeutics, this article examines the current ethical and regulatory environment for digital health. The authors describe the current situation as a “wild west” with little regulation and identify gaps and opportunities in terms of building interdisciplinary collaboration, improving digital literacy, and developing ethical standards. They conclude by summarizing several initiatives already underway to address these gaps.
  • Nundy, S., et al. (2019). Promoting trust between patients and physicians in the era of artificial intelligence. JAMA, 322(6), 497-498.
    • This paper discusses how AI will affect trust between physicians and patients. The authors define three components of trust (competency, motive, and transparency) and explore whether AI-enabled health applications may affect each of these domains. The paper concludes that by reaffirming the foundational importance of trust to health outcomes and engaging in deliberate system transformation, the benefits of AI can be realized while strengthening patient-physician relationships.
  • Obermeyer, Z., et al. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.
    • This paper engages in a quantitative analysis and discussion of racial bias in a commercial algorithm for stratifying the risk of patients with chronic disease. The authors quantitatively uncover that the algorithm unfairly classifies black patients as requiring less care than white patients of equivalent acuity, and explore further to determine that this disparity arises from using cost of care as a surrogate for health needs, and failing to consider structural disparity. They offer discussion of measures that can be taken to avoid similar problems. 
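    • The mechanism at work, sometimes called label-choice bias, is that the model predicts its label (cost) accurately while the label itself is a skewed proxy for the target of interest (health need). The toy simulation below illustrates the effect; all variable names and effect sizes are hypothetical, and this is a sketch of the mechanism rather than the authors’ code or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical population: true health need is identically distributed
# across the two groups.
group = rng.integers(0, 2, n)          # 0 = white, 1 = Black (toy coding)
need = rng.gamma(shape=2.0, scale=1.0, size=n)

# Structural barriers mean less care is consumed, and so less cost is
# incurred, at the same level of need (0.7 is an assumed effect size).
access = np.where(group == 1, 0.7, 1.0)
cost = need * access + rng.normal(0.0, 0.1, n)

# A risk score trained to predict cost ranks patients by expected cost;
# using cost itself as the score isolates the label-choice effect.
score = cost
cutoff = np.quantile(score, 0.97)      # e.g. top 3% referred to a care program

for g, name in [(0, "white"), (1, "Black")]:
    flagged = (score >= cutoff) & (group == g)
    print(f"mean true need among flagged {name} patients: {need[flagged].mean():.2f}")

# At the same score threshold, flagged Black patients are sicker on average:
# the cost-trained score systematically under-identifies their need.
```

    • In this sketch, the kind of remedy the authors discuss, relabeling with a more direct measure of health, corresponds to ranking on need rather than cost.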
  • Ostherr, K. (2020). Artificial Intelligence and Medical Humanities. Journal of Medical Humanities. https://dx.doi.org/10.1007/s10912-020-09636-4 
    • This paper gives an overview of the issues that have been raised regarding artificial intelligence in medicine in relation to the medical humanities. It focuses on a dozen issues, including the definition of “medical” and “health” apps, the social determinants of health, narrative medicine, the place of technology within medical care, data privacy and trust, flawed datasets and bias, racism, and the rhetoric of humanism.
  • O’Sullivan, S., et al. (2019). Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. The International Journal of Medical Robotics and Computer Assisted Surgery, 15(1). https://doi.org/10.1002/rcs.1968
    • This paper discusses autonomous robotic surgery with a particular focus on ethics, regulation, and legal aspects (such as civil law, international law, tort law, liability, medical malpractice, privacy, and product/device legislation, among other aspects). It explores responsibility for AI and autonomous surgical robots using the categories of accountability, liability, and culpability, finding culpability to be the category with the least legal clarity.
  • Ploug, T., & Holm, S. (2020). The right to refuse diagnostics and treatment planning by artificial intelligence. Medicine, Health Care, and Philosophy, 23, 107–114. https://dx.doi.org/10.1007/s11019-019-09912-8 
    • This paper argues that patients should have the right to refuse artificial intelligence medical treatments and diagnostics. Three arguments are presented to defend this thesis: (1) physicians should respect patients’ personal interests, (2) the opacity of algorithms and their potential for bias justify an option to opt out, (3) patients may have legitimate concerns about the social impact of using artificial intelligence in medicine. 
  • Price, W. (2015). Black-box medicine. Harvard Journal of Law & Technology, 28(2), 419-468.
    • Written from a primarily legal and regulatory perspective, this article engages with the issue of “black box” technologies in precision medicine that are unable to provide a satisfactory explanation of the decisions they output. The author discusses contemporary “Big Data” technology in medicine from practical and theoretical perspectives. He outlines several hurdles to the development of this technology and a range of policy challenges, including issues of incentives, privacy, regulation, and commercialization.
  • Price, W. N., et al. (2019). Potential liability for physicians using artificial intelligence. JAMA, 322(18), 1765-1766.
    • As AI applications enter clinical practice, physicians must grapple with issues of liability when determining how and when to follow (or not follow) the recommendations of these applications. In this article, legal scholars draw upon principles of tort law to discuss when a physician could be held liable for malpractice. The core argument of this paper, the need to analyze whether an AI recommendation is accurate and follows standard-of-care, has been synthesized by the authors in a tabular format.
  • Rampton, V., et al. (2020). Implications of artificial intelligence for medical education. The Lancet Digital Health, 2(3), 111-112. https://doi.org/10.1016/S2589-7500(20)30023-6
    • As AI applications advance in medicine, there is a need to educate health professionals about these applications and their ethical implications. However, the path forward to do so remains unclear. In this article, the authors demonstrate how a popular educational framework for physicians, the Canadian Medical Education Directives for Specialists, can be modified to reflect the impact AI is having and will continue to have on medical practice and on healthcare more broadly.
  • Reddy, S., et al. (2020). A governance model for the application of AI in health care. Journal of the American Medical Informatics Association, 27(3), 491-497.
    • Concern has been expressed about the ethical and regulatory aspects of the application of AI in health care. While there has been extensive discussion about the ethics of AI in health care, there has been little dialogue on how to practically address these concerns. This article proposes a governance model that addresses the ethical and regulatory issues arising from the application of AI in health care.
  • Sadegh-Zadeh, K. (2015). Medical artificial intelligence. In Handbook of Analytic Philosophy of Medicine (2nd ed., pp. 697-733). Springer. 
    • This is the sixth section of the handbook. It features three chapters and gives an introduction on the topic of artificial intelligence in medicine. Chapter 19, “Medical Decision-Making” is an introduction on the history and the philosophy of scientific medical decision-making, the predecessor of medical artificial intelligence. Chapter 20, “Clinical Decision Support Systems” covers the history of the first medical decision systems invented in the cradle of artificial intelligence, the Stanford Heuristic Programming Project. Chapter 21, “Artificial Intelligence in Medicine?” is a philosophical analysis of what “intelligence” and “artificial” mean in relation to medical decision systems. The author asks whether artificial intelligence is indeed possible in medicine. 
  • Schiff, D., & Borenstein, J. (2019). How should clinicians communicate with patients about the roles of artificially intelligent team members? AMA Journal of Ethics, 21(2), 138-145.
    • This article uses a hypothetical patient scenario to illustrate the difficulties faced when articulating the use of AI in patient care. The authors focus on: (1) informed consent, (2) patient perceptions of AI, and (3) liability when responsibility is distributed among “many hands”. For readers new to the area of medical decision-making, the case-based approach the authors have taken will be an engaging introduction to the most common pedagogy of medical education.
  • Smallman, M. (2019).* Policies designed for drugs won’t work for AI. Nature, 567(7746), 7. https://doi.org/10.1038/d41586-019-00737-2
    • This paper comments on the 2019 code of conduct for artificial-intelligence systems in health care issued by the UK government. The principles, laid out by the Department of Health and Social Care, aim to protect patient data and ensure safe data-driven technologies. The author argues, however, that the code fails to appreciate these technologies’ potential to introduce and worsen inequities, and stresses the importance of developing a framework that considers and anticipates the social consequences of AI.
  • Tene, O., & Polonetsky, J. (2011).* Privacy in the age of big data: A time for big decisions. Stanford Law Review Online, 64, 63-69.
    • Big Data creates enormous value for the global economy, driving innovation, productivity, efficiency, and growth. This paper discusses privacy concerns related to big data applications, and suggests that in order to balance beneficial uses of data and the protection of individual privacy, policymakers must address some of the most fundamental concepts of privacy law, including the definition of “personally identifiable information,” the role of consent, and the principles of purpose limitation and data minimization.
  • Topol, E. J. (2019).* High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), 44-56.
    • This review article provides an overview of the impact of AI in medicine at the levels of clinicians, health systems, and patients. It also reviews the current limitations, including bias, privacy and security, and lack of transparency, along with the future directions of these applications. The results reveal that over time, marked improvements in accuracy, productivity, and workflow will likely be actualized, but the potential impact on the patient–doctor relationship remains unknown.
  • Vayena, E., Blasimme, A., & Cohen, I. G. (2018).* Machine learning in medicine: addressing ethical challenges. PLoS Medicine, 15(11). https://doi.org/10.1371/journal.pmed.1002689
    • In this perspective, the authors outline a four-stage approach to promoting patient trust and provider adoption: (1) alignment with data protection requirements, (2) minimizing the effects of bias, (3) effective regulation, and (4) achieving transparency. Their approach is grounded by referencing the disparate views held on artificial intelligence in healthcare by the general adult population, medical students, and healthcare decision-makers ascertained through recently conducted surveys.
  • Vellido, A. (2019). Societal issues concerning the application of artificial intelligence in medicine. Kidney Diseases, 5(1), 11-17.
    • This paper reflects on a number of specific issues affecting the use of AI and ML in medicine, such as fairness, privacy and anonymity, and explainability and interpretability, but also some broader societal issues, such as ethics and legislation. It additionally argues that AI models must be designed from a human-centered perspective, incorporating human-relevant requirements and constraints.
  • Verghese, A., Shah, N. H., & Harrington, R. A. (2018). What this computer needs is a physician: humanism and artificial intelligence. JAMA, 319(1), 19-20.
    • This commentary highlights that while AI in medicine will lead to improved accuracy and efficiency, there is concern that the introduction of new tools may adversely impact physicians and lead to burnout, as electronic medical records did. The authors state that we must aim for partnerships in which machines predict and perform tasks such as documentation, while physicians explain to patients and decide on action, bringing in the societal, clinical, and personal context. AI can enable physicians to spend more time caring for patients, improving both the physician’s quality of work and the patient-physician relationship.
  • Wachter, R. M., & Cassel, C. K. (2020). Sharing health care data with digital giants: overcoming obstacles and reaping benefits while protecting patients. JAMA, 323(6), 507-508.
    • In response to the steady stream of news updates around the entry and involvement of the major technology companies (e.g. Google, Apple, Amazon) into healthcare, this commentary proposes ideals for a collaborative path forward. It emphasizes transparency (especially around financial disclosures and conflicts of interest), direct consultation with patients/patient advocacy groups, and data security.
  • Wachter, S., et al. (2017).* Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76-99.
    • The ‘right to explanation’ in the EU’s General Data Protection Regulation (GDPR) is seen as a mechanism to enhance the accountability and transparency of AI-enabled decision-making. However, this paper shows that ambiguity and imprecise language in these regulations do not create well-defined rights and safeguards against automated decision-making. The paper proposes a number of legislative and policy steps to improve the transparency and accountability of automated decision-making.
  • van Wynsberghe, A. (2013). Designing Robots for Care: Care Centered Value-Sensitive Design. Science and Engineering Ethics, 19(2), 407–433. https://doi.org/10.1007/s11948-011-9343-6
    • This article discusses a value-sensitive design approach as applied to the creation of care robots created to fill a role analogous to that of a human nurse. After outlining foundational theoretical understandings of values, care ethics, and care practices, the author synthesizes a context-specific framework for considering these issues in robot design. She grounds this framework in the case study of already-implemented autonomous robots for lifting patients in the care home environment. 
  • Yu, K. H., et al. (2018).* Artificial intelligence in healthcare. Nature Biomedical Engineering, 2(10), 719-731.
    • With recent progress in digitized data acquisition, machine learning and computing infrastructure, AI applications are expanding into areas that were previously thought to be only the domain of human experts. This review article outlines recent breakthroughs in AI technologies and their biomedical applications, identifies the challenges for further progress in medical AI, and summarizes the economic, legal and social implications of AI in healthcare.

Public Health and Global Health

  • Davies, S. E. (2019). Artificial Intelligence in Global Health. Ethics & International Affairs, 33(2), 181-192.
    • Focusing largely on the topic of infectious disease, this paper explores the potential and limitations of artificial intelligence in the context of global health. The author contends that while AI may be effective in guiding responses to outbreak events, it presents substantial ethical risks related to exacerbating healthcare quality disparities, diverting funding from otherwise-necessary structural improvements, and enabling human rights abuses under the guise of containment.  
  • Ienca, M., & Vayena, E. (2020). On the responsible use of digital data to tackle the COVID-19 pandemic. Nature Medicine. https://doi.org/10.1038/s41591-020-0832-5
    • This article argues that as vast amounts of digital data are being used to combat the COVID-19 pandemic, the uptake and maintenance of responsible data-collection and data-processing standards at a global scale is also vital. As data from mobile phones and internet-connected devices is being fed into pandemic prediction and surveillance efforts, the authors emphasize not only the duty to protect the public’s right to life, but also their rights to privacy and confidentiality. If governments and data trustees fail to do so, public mistrust could jeopardize the efficacy of even the most well-intentioned measures to reduce disease burden.
  • Kostkova, P. (2018). Disease surveillance data sharing for public health: the next ethical frontiers. Life Sciences, Society and Policy, 14(1). https://doi.org/10.1186/s40504-018-0078-x
    • This article identifies three core ethical challenges with the use of digital data in public health: (1) data sharing across risk assessment tools, (2) the use of population-level data without compromising privacy, and (3) regulating how technology companies manipulate user data. The article places special emphasis on legislation and regulatory frameworks from the European Union.
  • Luxton, D. D. (2020). Ethical implications of conversational agents in global public health. Bulletin of the World Health Organization, 98(4), 285-287.
    • Conversational agents, colloquially known as “chatbots”, could help address disparities in access to mental health services or health services more generally in times of emergency (e.g. a natural disaster, pandemic, etc.). This article outlines core ethical issues of conversational agents to be cognizant of: risk of bias, risk of harm, privacy, and inequitable access. It concludes by alluding to the World Health Organization’s potential role in this space through the creation of a “cooperative international working group” to make recommendations on the design and deployment of conversational agents and other artificially intelligent tools.
  • Mittelstadt, B., et al. (2018). Is there a duty to participate in digital epidemiology? Life Sciences, Society and Policy, 14, 9. https://doi.org/10.1186/s40504-018-0074-1
    • This article explores the notion of a duty to participate in digital epidemiology, acknowledging that the risks to participants differ from those in traditional biomedical research. The authors outline eight justificatory conditions for participation in digital epidemiology that should be reflected upon “on a case-by-case basis with due consideration of local interests and risks”. Notably, the authors demonstrate how these justificatory conditions can be used in practice in three case studies involving infectious disease surveillance, HIV screening, and detecting notifiable diseases in livestock.
  • Murphy, K., et al. (2021). Artificial intelligence for good health: A scoping review of the ethics literature. BMC Medical Ethics, 22. https://doi.org/10.1186/s12910-021-00577-8 
    • This paper is an empirical review of the literature on the ethics of artificial intelligence in medicine. Most of the 103 papers included in the review focused on the ethics of artificial intelligence in healthcare, including robots, diagnostics and precision medicine. The review points to a gap in the literature around the ethics of artificial intelligence in public health, as well as the ethics of global health. Common ethical concerns addressed by the literature were privacy, trust, accountability, responsibility, and bias. 
  • Naudé, W. (2020). Artificial intelligence vs COVID-19: Limitations, constraints and pitfalls. AI & Society, 35, 761–765. https://doi.org/10.1007/s00146-020-00978-0
    • This paper provides an early evaluation of the limits of the use of artificial intelligence in medicine during the COVID-19 pandemic. The paper is pessimistic about whether artificial intelligence will prove useful during the pandemic, owing to both a lack of COVID-19-specific data and an overwhelming amount of general healthcare data. The author argues that overcoming these constraints will require a careful balance between data privacy and public health, and rigorous human-AI interaction.
  • Paul, A. K., & Schaefer, M. (2020). Safeguards for the use of artificial intelligence and machine learning in global health. Bulletin of the World Health Organization, 98(4), 282-284.
    • This article outlines challenges that low- and middle-income countries (LMICs) must overcome to develop and deploy artificial intelligence and machine learning innovations. It emphasizes that investments in these innovations by LMICs must be grounded in the realities of their health systems to enable success. The challenges outlined in this piece include: (1) improving the quality and use of data collected, (2) ensuring representation in these processes by marginalized groups, (3) establishing safeguards against bias, and (4) only investing in areas where health systems can operationalize innovations and deliver results.
  • Salathé, M. (2018). Digital epidemiology: what is it, and where is it going? Life sciences, Society and Policy, 14(1). https://doi.org/10.1186/s40504-017-0065-7
    • This seminal article provides a definition for the field of “digital epidemiology” and an outlook of how the field is poised to evolve in the coming years. For those new to the area, this article can serve as a succinct introduction before a more focused exploration into digital epidemiology’s unique ethical considerations.
  • Samerski, S. (2018). Individuals on alert: digital epidemiology and the individualization of surveillance. Life Sciences, Society and Policy, 14(1). https://doi.org/10.1186/s40504-018-0076-z
    • This article provides a critical analysis of how digital epidemiology and the broader “eHealth” movement fundamentally change the notion of health into a constant state of surveillance. It argues that as predictive analytics dominates the discourse around population and individual-level health, we are at risk of entering a state of “modus irrealis” or helpless paralysis due to events that may or may not transpire. The views expressed in this article stand in sharp contrast to digital health proponents such as Dr. Eric Topol, who argue that these advances promote autonomy and self-efficacy.
  • Samuel, G., & Derrick, G. (2020). Defining ethical standards for the application of digital tools to population health research. Bulletin of the World Health Organization, 98(4), 239-244.
    • This article provides a process for ethics governance to be used at higher educational institutions during ex-post reviews of population health AI research. The governance model proposed consists of two levels: (1) the mandated entry of research products into an open-science repository and (2) a sector-specific validation of the research processes and algorithms. Through this ex-post review, the authors believe that the potential for AI-systems to cause harm will be reduced before they are disseminated.
  • Smith, M. J., et al. (2020). Four equity considerations for the use of artificial intelligence in public health. Bulletin of the World Health Organization, 98(4), 290-292.
    • Equity, the absence of avoidable or remediable differences among groups, is a foundational concept in global and public health. In this article, the authors outline four equity considerations for designing and deploying artificial intelligence in public health contexts: (1) the digital divide, (2) algorithmic bias and values, (3) plurality of values across systems, and (4) fair decision-making procedures.
  • Vayena, E., & Madoff, L. (2019). Navigating the ethics of big data in public health. In A. C. Mastroianni, J. P. Kahn, & N. P. Kass (Eds.), The Oxford Handbook of Public Health Ethics (pp. 354-367). Oxford University Press.
    • This article provides an overview of the key ethical challenges for the use of big data in public health. The authors discuss issues such as: (1) privacy, (2) data control and sharing, (3) nonstate actors, (4) harm mitigation, (5) fair distribution of benefits, (6) civic empowerment, and (7) accountability. This article would serve as a useful introduction for those new to the field of public health, as the authors ground their discussion in key areas of public health such as health promotion, surveillance, emergency preparedness and response, and comparative effectiveness research.
  • Wahl, B., et al. (2018). Artificial intelligence (AI) and global health: How can AI contribute to health in resource-poor settings? BMJ Global Health, 3(4). http://dx.doi.org/10.1136/bmjgh-2018-000798
    • Much of the discourse around AI in medicine has focused on high-resource settings, which risks further propagating the digital divide between high- and low/middle-income countries. This review is one of the first to shift this discourse and do so in a solution-focused manner. The authors draw attention to several important enablers to AI in low-resource settings such as mobile health, open-source electronic medical record systems, and cloud computing.

Chapter 38. Ethics of AI in Law: Basic Questions (Harry Surden)⬆︎

  • Agrawal, A., et al. (2019). Exploring the impact of artificial intelligence: Prediction versus judgment. Information Economics and Policy, 47, 1-6.
    • This article argues that because better prediction allows riskier decisions to be taken, prediction raises observed productivity, though it may also increase the variance of outcomes. The authors also demonstrate that better prediction may call for different judgments depending on the context, so not all human judgment will be a complement to AI. Nonetheless, the authors argue that humans will delegate some decisions to machines even when the decisions would be superior with human input.
  • Agrawal, A., et al. (2018).* Prediction machines: The simple economics of artificial intelligence. Harvard Business Review Press.
    • In this book, the authors show how the predictive power of AI can be used in the face of uncertainty, to increase productivity, and to develop strategies. The authors employ an economic framework to explain the impacts of this adoption of AI.
  • Alarie, B., et al. (2018). How artificial intelligence will affect the practice of law. University of Toronto Law Journal, 68(supplement 1), 106-124. https://doi.org/10.3138/utlj.2017-0052
    • This article outlines the current and anticipated impact of AI on the legal profession and legal services. The article tracks the history of legal information and discusses how AI can use legal data to answer legal questions and develop predictive tools for the legal domain. The article suggests that AI could transform how lawyers perform legal work and deliver legal services.
  • Angwin, J., & Larson, J. (2016).* Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
    • In this article, the authors cite anecdotes and sentencing patterns to argue that algorithms tasked with predicting a particular person’s potential for future criminal activity are biased along racial lines.
  • Barocas, S., & Selbst, A. D. (2016).* Big data’s disparate impact. California Law Review, 104(3), 671-732. https://doi.org/10.15779/Z38BG31
    • This article examines concerns that flawed or biased data can interfere with the supposed ability of algorithmic methods to eliminate human biases from the decision-making process, through the lens of American antidiscrimination law—more particularly, through Title VII’s prohibition of discrimination in employment. The authors argue that finding a solution to this issue will require more than mitigation of prejudice and bias; it will require a wholesale reexamination of the meanings of “discrimination” and “fairness”.
  • Bloch-Wehba, H. (2019). Access to algorithms. Fordham Law Review, 88(4), 1265-1314.
    • This article describes concerns regarding the use of opaque algorithms in the public sector, particularly in healthcare, education, and criminal law enforcement. To address these concerns and promote public accountability and transparency in automated decision-making, the article proposes drawing on freedom of information laws and the constitutional right to freedom of expression.
  • Calo, R. (2018).* Artificial intelligence policy: A primer and roadmap. University of Bologna Law Review, 3(2), 180-218.
    • This essay aims to help policymakers, investors, scholars, and students understand the contemporary policy environment around artificial intelligence and the key challenges it presents, providing a basic roadmap of the issues surrounding the implementation of AI in the current environment.
  • Casey, B., et al. (2019). Rethinking explainable machines: The GDPR’s “right to explanation” debate and the rise of algorithmic audits in enterprise. Berkeley Technology Law Journal, 34(1), 143-188.
    • This article discusses the interpretation of, and debate surrounding, the General Data Protection Regulation’s “right to explanation.” The article suggests that this right, coupled with the practices of algorithmic auditing and data protection by design, will have sweeping legal and practical implications for the design, testing, and deployment of machine learning systems.
  • Citron, D. K. (2008).* Technological due process. Washington University Law Review, 85, 1249-1313.
    • This article aims to demonstrate how a carefully structured inquisitorial model of quality control can partially replace aspects of adversarial justice that automation renders ineffectual. It also provides a framework for technological due process to ensure that it preserves transparency, accountability, and accuracy of rules in automated decision-making systems. 
  • Coglianese, C., & Lehr, D. (2019). Transparency and algorithmic governance. Administrative Law Review, 71(1), 1-56.
    • This article examines the use of machine learning in government decision-making by inquiring whether the opaqueness of machine learning can be reconciled with the legal principles of governmental transparency. By distinguishing between different types of transparency, the article suggests that the opaqueness of machine learning does not pose a legal barrier to the responsible use of machine learning by governmental authorities.
  • Corbett-Davies, S., et al. (2017). Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 797-806). Association for Computing Machinery. https://doi.org/10.1145/3097983.3098095
    • This paper reformulates algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities (see the schematic formulation below).
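    • In schematic form (the notation below is illustrative, not the paper’s own), the reframing can be written as a constrained program over decision rules:

```latex
% Fairness as constrained optimization (illustrative notation):
% d = decision rule (e.g., detain vs. release), u = public-safety utility,
% X = defendant features, G = group membership.
\max_{d}\; \mathbb{E}\big[\, u\big(d(X), X\big) \,\big]
\quad \text{subject to} \quad
\mathbb{E}\big[\, d(X) \mid G = g \,\big] \;=\; \mathbb{E}\big[\, d(X) \mid G = g' \,\big]
\quad \text{for all groups } g, g'.
```

    • The constraint shown here is statistical parity of detention rates; the paper analyzes several such constraints and quantifies the public-safety cost of satisfying each.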
  • Deeks, A. (2019). The judicial demand for explainable artificial intelligence. Columbia Law Review, 119(7), 1829-1850.
    • This article argues that, in confronting machine learning algorithms in criminal, administrative, and civil cases, judges should demand explanations for algorithmic decisions. The article suggests that if judges demand such explanations they will be able to make a unique contribution to shaping the expectations, rules, and norms in the emerging field of explainable AI.
  • Hacker, P., et al. (2020). Explainable AI under contract and tort law: Legal incentives and technical challenges. Artificial Intelligence and Law, 28(4), 1-25. https://doi.org/10.1007/s10506-020-09260-6
    • This article discusses the legal rules and discourse concerning the explainability requirements imposed on AI systems. Using case studies from medical diagnostics and corporate law, the article indicates that the notion of explainability extends into legal fields beyond data protection law. The article also presents a technical case study examining the tradeoff between accuracy and explainability.
  • Hervey, M., & Lavy, M. (2021). The law of artificial intelligence. Sweet & Maxwell.
    • This book examines how existing civil and criminal law will apply to AI and explores the role of emerging laws designed specifically for AI. Topics include liability arising in connection with the use of AI, the impact of AI on intellectual property, data protection, smart contracts, and the deployment of AI in legal services and the justice system.
  • Kaminski, M. E. (2019).* The right to explanation, explained. Berkeley Technology Law Journal, 34(1), 189-218. https://doi.org/10.15779/Z38TD9N83H
    • This article explores how the EU’s General Data Protection Regulation (GDPR) establishes algorithmic accountability: laws governing decision-making by complex algorithms or AI. It argues that the GDPR provisions on algorithmic accountability, in addition to including a right to explanation (a right to information about individual decisions made by algorithms), could be broader, stronger, and deeper than the preceding requirements of the Data Protection Directive.
  • Kleinberg, J. (2018).* Inherent trade-offs in algorithmic fairness. In Abstracts of the 2018 ACM International Conference on Measurement and Modelling of Computer Systems (p. 40). https://doi.org/10.1145/3219617.3219634
    • This article explores the way classifications made by algorithms create tension between competing notions of what it means for such a classification to be fair to different groups. The author then presents several of the key fairness conditions and the inherent trade-offs between these conditions.
  • Kleinberg, J., et al. (2016). Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807
    • The article explores how algorithmic classification involves tension between competing notions of what it means for a probabilistic classification to be fair to different groups. After formalizing three fairness conditions that lie at the heart of these debates, the authors show that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Thus, the article argues that key notions of fairness are incompatible with each other, and hence seeks to provide a framework for thinking about the trade-offs between them.
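    • Paraphrased in generic notation (ours, not the paper’s), with risk score s, outcome Y, and group G, the three conditions are:

```latex
% (A) Calibration within groups: among people assigned score p in any
%     group, a fraction p experience the outcome.
\Pr(Y = 1 \mid s = p,\ G = g) = p \quad \text{for all } p,\ g
% (B) Balance for the negative class: the average score of those who do
%     not experience the outcome is equal across groups.
\mathbb{E}[\, s \mid Y = 0,\ G = g \,] = \mathbb{E}[\, s \mid Y = 0,\ G = g' \,]
% (C) Balance for the positive class: the average score of those who do
%     experience the outcome is equal across groups.
\mathbb{E}[\, s \mid Y = 1,\ G = g \,] = \mathbb{E}[\, s \mid Y = 1,\ G = g' \,]
```

    • The paper’s central result is that, outside the special cases of perfect prediction or equal base rates across groups, no score can satisfy all three conditions at once.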
  • Kroll, J. A., et al. (2016).* Accountable algorithms. University of Pennsylvania Law Review, 165(3), 633-706.
    • This article argues that transparency will not solve the problems of automated decision systems, such as returning potentially incorrect, unjustified, or unfair results. Disclosure of source code is often neither necessary (because of alternative techniques from computer science) nor sufficient (because of the difficulty of analyzing code) to demonstrate the fairness of a process.
  • Martin, K. (2019). Ethical implications and accountability of algorithms. Journal of Business Ethics, 160(4), 835-850.
    • This article argues that developers retain responsibility for their algorithms later in use, and that firms should be responsible not only for the value-ladenness of an algorithm but also for designing who-does-what within the algorithmic decision. Thus, firms developing algorithms are accountable for designing how large a role individuals will be permitted to take in the subsequent algorithmic decision.
  • Miller, T. (2019).* Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38.
    • This paper argues that researchers and practitioners who seek to make their algorithms more understandable should utilize research done in the fields of philosophy, psychology, and cognitive science to understand how people define, generate, select, evaluate, and present explanations, and account for how people employ certain cognitive biases and social expectations towards the explanation process.
  • Mulligan, D., & Bamberger, K. (2018).* Saving governance-by-design. California Law Review, 106(3), 697-784.
    • This article argues that “governance-by-design”—the purposeful effort to use technology to embed values—is quickly becoming a significant influence on policymaking. Furthermore, the existing regulatory system is fundamentally ill-equipped to prevent technology-based governance from subverting public governance.
  • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
    • This book explores and analyzes the results generated by Google search algorithms and argues that search algorithms reflect racist biases because they embody the biases and values of the people who create them.
  • Pasquale F. (2015).* The black box society: The secret algorithms that control money and information. Harvard University Press.
    • In this book, Pasquale explores the power of ‘hidden algorithms’. He argues that such algorithms permit self-serving and reckless behavior, and shows how powerful interests abuse the secrecy of these algorithms for profit. Transparency must therefore be demanded of firms, such that they accept as much accountability as they impose on others.
  • Pasquale, F. (2020). New laws of robotics: Defending human expertise in the age of AI. Harvard University Press.
    • Recalling Isaac Asimov’s Three Laws of Robotics, this book proposes four new laws for governing AI. First, AI should complement professionals, not replace them. Second, AI should not counterfeit humanity. Third, AI should not intensify zero-sum arms races. Fourth, AI systems must always indicate the identity of their creator(s), controller(s), and owner(s). The book presents examples and case studies in healthcare, education, media, and other domains to support these new laws for the governance of AI.
  • Richards, N. M. (2012).* The dangers of surveillance. Harvard Law Review, 126(7), 1934-1965. 
    • This article aims to explain and highlight the harms of government surveillance. The author uses work from multiple disciplines such as law, history, literature, and the work of scholars in the emerging interdisciplinary field of “surveillance studies,” to define what those harms are and why they matter. 
  • Selbst, A. D., & Barocas, S. (2018).* The intuitive appeal of explainable machines. Fordham Law Review, 87(3), 1085-1139.
    • In this article, the authors aim to show what makes decisions made by algorithms seem inexplicable, by examining what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. 
  • Speicher, T., et al. (2018). A unified approach to quantifying algorithmic unfairness: Measuring individual & group unfairness via inequality indices. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 2239-2248). Association for Computing Machinery. https://doi.org/10.1145/3219819.3220046
    • This paper explores how to determine what makes one algorithm more unfair than another. The authors use existing inequality indices from economics to measure how unequally the outcomes of an algorithm benefit different individuals or groups in a population (see the sketch below).
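    • A minimal sketch of the approach appears below, assuming a simple per-individual benefit of the kind the paper discusses (here b = ŷ − y + 1 for binary labels, so a false positive receives the largest benefit and a false negative the smallest); the index itself is the standard generalized entropy index from the economics literature:

```python
import numpy as np

def generalized_entropy_index(benefits, alpha=2.0):
    """Generalized entropy index GE(alpha) over individual benefits.

    Returns 0 when every individual receives the same benefit; larger
    values indicate more inequality. With alpha = 2 this equals half
    the squared coefficient of variation.
    """
    b = np.asarray(benefits, dtype=float)
    mu = b.mean()
    return float(((b / mu) ** alpha - 1.0).sum() / (len(b) * alpha * (alpha - 1.0)))

# Toy example: binary ground truth y and predictions y_hat.
y     = np.array([0, 0, 1, 1, 1, 0])
y_hat = np.array([0, 1, 1, 0, 1, 0])
print(generalized_entropy_index(y_hat - y + 1))   # ~0.167
```

    • Because generalized entropy indices decompose into between-group and within-group terms, the same quantity can capture both group-level and individual-level unfairness, which is the unification referred to in the title.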
  • Surden, H. (2019).* Artificial intelligence and law: An overview. Georgia State University Law Review, 35(4), 1305-1337.
    • This paper aims to provide a concrete survey of the current applications and uses of AI within the context of the law, without straying into discussions about AI and law that are futurist in nature. It aims to highlight a realistic view that is rooted in the actual capabilities of AI technology as it currently stands. 
  • Susskind, R. E., & Susskind, D. (2015). The future of the professions: How technology will transform the work of human experts. Oxford University Press.
    • The authors argue that our current professions are antiquated, opaque, and no longer affordable, and that the expertise of the best practitioners is enjoyed only by a few. They present an exploration of the ethical issues that arise when machines can out-perform human beings at most tasks, examining how technological change will affect prospects for employment, who should own and control online expertise, and what tasks should be reserved exclusively for people.
  • Strandburg, K. J. (2019). Rulemaking and inscrutable automated decision tools. Columbia Law Review, 119(7), 1851-1886.
    • This article discusses the role of explanation in developing criteria for automated government decision making and rulemaking. The article analyzes whether, and how, the inscrutability of automated decision tools undermines the traditional functions of explanation in rulemaking. It contends that providing explanations about decision tool design, function, and use are helpful measures and can perform some of these traditional functions.
  • Turner, J. (2018). Robot rules: Regulating artificial intelligence. Springer.
    • This book addresses the legal and ethical frameworks for regulating activities, rights, and responsibilities in connection with AI actors. The book discusses who is, and should be, liable for the actions of AI and considers the possibility of granting rights to AI entities. The book suggests that new legal institutions and structures are needed to confront these challenges.

Chapter 39. Beyond Bias: “Ethical AI” in Criminal Law (Chelsea Barabas)⬆︎

  • Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671-732. https://doi.org/10.15779/Z38BG31
    • This article argues that algorithms should not be taken as sufficient tools for making impartial and fair decisions due to the impacts of pervasive biases in the data they use. It draws on the disparate impact doctrine developed in American anti-discrimination law to clarify the implicit bias present in algorithms. The authors highlight the significant difficulties in addressing algorithmic discrimination, including both the technical challenges within data mining practices and the legal challenges beyond. As a result, they conclude that “fairness” and “discrimination” may need to be entirely re-examined.
  • Benjamin, R. (2016).* Catching our breath: Critical race STS and the carceral imagination. Engaging Science, Technology, and Society, 2, 145-156.
    • This article uses science and technology studies along with critical race theory to examine the proliferation and intensification of carceral approaches to governing human life. The author argues in favour of an expanded understanding of “the carceral” that extends beyond the domain of policing to include forms of containment that make innovation possible in the contexts of health and medicine, education and employment, border policies, and virtual realities.
  • Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 81, 149-159. http://proceedings.mlr.press/v81/binns18a.html
    • This article discusses research on fair AI and algorithmic decision making by drawing parallels to contemporary political philosophy. By considering various philosophical accounts of discrimination and egalitarianism, the author delineates how political philosophy can shed light on AI fairness research where data-driven and algorithmic methods do not. The author argues that fairness, narrowly construed as a property of algorithms and data, does not adequately address the context-sensitive questions of justice surrounding these sociotechnical systems.
  • Bosworth, M. (2019). Affect and authority in immigration detention. Punishment & Society, 21(5), 542-559.
    • This article considers the relationship between authority and affect by drawing on a long-term research project across a number of British Immigration Removal Centers (IRCs). This article argues that staff authority rests on an abrogation of their self rather than engagement with the other. This is in contrast to much criminological literature on the prison, which advances a liberal political account in which power is constantly negotiated and based on mutual recognition.
  • Brown, M., & Schept, J. (2017).* New abolition, criminology and a critical carceral studies. Punishment & Society, 19(4), 440-462.
    • This article argues that criminology has been slow to open up a conversation about decarceration and abolition. In this article, the authors advocate for and discuss the contours of critical carceral studies, a growing interdisciplinary movement for engaged scholarly and activist production against the carceral state.
  • Corbett-Davies, S., et al. (2017).* Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 797-806). Association for Computing Machinery. https://doi.org/10.1145/3097983.3098095
    • The article aims to reformulate algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. The authors show that for several past definitions of fairness, the optimal algorithms that result require detaining defendants above race-specific risk thresholds.
  • Elliott, D. S. (1995).* Lies, damn lies, and arrest statistics. Center for the Study and Prevention of Violence.
    • This paper argues that most research on the parameters of a criminal career that utilizes arrest data to estimate the underlying behavioral dynamics of criminal activity is flawed. The author argues that generalizing findings from analyses of arrest records to the underlying patterns and dynamics of criminal behavior and the characteristics of offenders in the general population is likely to lead to incorrect conclusions and ineffective policies and practices, ultimately undermining efforts to understand, prevent, and control criminal behavior.
  • Ferguson, A. G. (2016).* Policing predictive policing. Washington University Law Review, 94(5), 1109-1189.
    • This article examines predictive policing’s evolution and aims to provide a practical and theoretical critique of this new policing strategy that promises to prevent crime before it happens. Building on insights from scholars who have addressed the rise of risk assessment throughout the criminal justice system, this article provides an analytical framework to police new predictive technologies.
  • Green, B., & Chen, Y. (2019). Disparate interactions: An algorithm-in-the-loop analysis of fairness in risk assessments. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 90-99). Association for Computing Machinery. https://doi.org/10.1145/3287560.3287563
    • Despite the wealth of research developing algorithmic tools for risk assessment, very little work has focused on evaluating how these tools are used by actual human decision-makers. Through an online laboratory experiment, this paper quantifies how human operators assess defendant crime risk when guided by an algorithmic risk score. Study participants were unable to match the algorithm’s performance even when shown its predictions. Furthermore, participants could not discern when they were making high-quality predictions and discriminated more against Black defendants when shown the algorithm’s scores.
  • Harcourt, B. E. (2008).* Against prediction: Profiling, policing, and punishing in an actuarial age. University of Chicago Press.
    • In this book, the author argues that prediction tools increase the overall amount of crime in society, depending on the relative responsiveness of the profiled populations to heightened security. Against prediction, the author proposes a turn to randomization in punishment and policing.
  • Hoffmann, A. L. (2019). Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7), 900-915. https://doi.org/10.1080/1369118X.2019.1573912
    • In this study, Hoffmann critically analyzes three limits of fairness and antidiscrimination discourse in capturing the social injustices latent in Big Data. Firstly, this discourse is narrowly centered on the law’s concern with individual perpetrators, instead of addressing broader systemic injustices. Secondly, antidiscrimination discourse is especially focused on the notion of disadvantage along singular axes, such as race, without considering intersectional injustices. Thirdly, fairness is concerned with distributions of resources and opportunities, without acknowledging how social infrastructure enables the utilization of these resources.
  • Huq, A. Z. (2018). Racial equity in algorithmic criminal justice. Duke Law Journal, 68(6), 1043-1134.
    • This article considers the interaction of algorithmic tools for predicting violence and criminality that are increasingly deployed in policing, bail, and sentencing, with the enduring racial dimensions of the criminal justice system. The author then argues that a criminal justice algorithm should be evaluated in terms of its long-term, dynamic effects on racial stratification.
  • Jefferson, B. J. (2017). Digitize and punish: Computerized crime mapping and racialized carceral power in Chicago. Environment and Planning D: Society and Space, 35(5), 775-796.
    • This article puts critical geographic information systems theory into discussion with critical ethnic studies to argue that CLEARmap, the Chicago police’s digital mapping application, does not passively “read” urban space, but provides ostensibly scientific ways of reading and policing negatively racialized fractions of surplus labor in ways that reproduce, and in some instances extend, the reach of carceral power.
  • Kleinberg, J., et al. (2018). Human decisions and machine predictions. The Quarterly Journal of Economics, 133(1), 237-293. https://doi.org/10.1093/qje/qjx032
    • This article investigates the extent to which predictions made by AI systems can outperform humans in judicial contexts. Specifically, the authors consider historical bail decisions made in New York and build algorithmic models of how judges balance the outcomes of incarceration and release. They find that their models can reduce failure-to-appear and crime rates by up to 24.7% with no change in jailing rates, or jailing rates by up to 41.9% with no change in crime rates. Additionally, they demonstrate that their method achieves these improvements while simultaneously improving racial parity. 
  • Kleinberg, J., et al. (2018).* Algorithmic fairness. AEA Papers and Proceedings, 108, 22-27.
    • This paper argues that concerns about algorithms discriminating against certain groups, which have led to numerous efforts to ‘blind’ the algorithm to race, are misleading and may do harm. The authors argue that equity preferences can change how the estimated prediction function is used (e.g., different thresholds for different groups), but that the function itself should not change.
  • Kleinberg, J., et al. (2018). Discrimination in the age of algorithms. Journal of Legal Analysis. https://doi.org/10.1093/jla/laz001
    • This paper argues that the use of algorithms will make it easier to examine and interrogate the entire legal process and therefore identify whether discrimination has occurred.
  • Kleinberg, J., et al. (2016). Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807
    • The article explores how algorithmic classification involves tension between competing notions of what it means for a probabilistic classification to be fair to different groups. After formalizing three fairness conditions that lie at the heart of these debates, the authors show that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Thus, the article argues that key notions of fairness are incompatible with each other, and hence seeks to provide a framework for thinking about the trade-offs between them.
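As a rough numerical illustration of these trade-offs, the sketch below computes the three conditions on synthetic data; it is an assumption-laden toy, not the paper's formalism or code. A perfectly calibrated score still violates the two balance conditions whenever groups' base rates differ, mirroring the impossibility result.

```python
# Sketch with synthetic data: a perfectly calibrated predictor still
# fails balance for the positive/negative classes when base rates differ.
import numpy as np

rng = np.random.default_rng(1)

def conditions(y, p):
    calib = float(p.mean() - y.mean())  # ~0 when scores are calibrated
    bal_pos = float(p[y == 1].mean())   # average score, positive class
    bal_neg = float(p[y == 0].mean())   # average score, negative class
    return calib, bal_pos, bal_neg

# Two groups with different score (hence base-rate) distributions.
for name, (a, b) in {"group A": (2, 5), "group B": (4, 3)}.items():
    p = rng.beta(a, b, 100_000)   # risk scores
    y = rng.binomial(1, p)        # outcomes drawn from the scores,
                                  # so calibration holds by construction
    print(name, [round(v, 3) for v in conditions(y, p)])

# Calibration is ~0 in both groups, but the class-conditional average
# scores differ across groups: the three conditions cannot all be
# equalized at once when base rates differ.
```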
  • Lyon, D. (2014). Surveillance, Snowden, and big data: Capacities, consequences, critique. Big Data & Society, 1(2). https://doi.org/10.1177%2F2053951714541861
    • This article explores the extent to which the Snowden disclosures indicate that Big Data practices are becoming increasingly important to surveillance and, if Big Data is gaining ground in this area, how this indicates changes in the politics and practices of surveillance. The author analyses the capacities of Big Data and their social-political consequences, and then comments on the kinds of critique that may be appropriate for assessing and responding to these developments.
  • Mayson, S. G. (2018). Bias in, bias out. Yale Law Journal, 128(8), 2218-2300.
    • This paper argues that strategies currently in place to mitigate algorithmic discrimination are at best superficial and at worst counterproductive, because the source of racial inequality in risk assessment lies neither in the input data, nor in a particular algorithm, nor in algorithmic methodology per se. The problem is the nature of prediction itself, since all prediction looks to the past to make guesses about future events. In a racially stratified world, any method of prediction will project the inequalities of the past into the future.
  • Mohamed, S., et al. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33(4), 659-684. https://doi.org/10.1007/s13347-020-00405-8
    • This paper explores the hidden power dynamics underlying AI development through the lens of coloniality in science, specifically critical decolonial theory. The authors illustrate how algorithms involve and lead to the oppression, exploitation, and dispossession of the vulnerable. To guide decolonialized AI design, they argue that historical lessons of resistance point to three key tactics: supporting critical technical practices, establishing reciprocal engagements between the powerful and powerless, and strengthening political communities in AI.
  • Muhammad, K. G. (2008).* The condemnation of blackness. Harvard University Press.
    • This book reveals the influence that ideas of black criminality, including deeply embedded notions of black people as a dangerous race of criminals defined in explicit contrast to working-class whites and European immigrants, as well as African Americans’ own ideas about race and crime, have had on urban development and social policies.
  • Ogbonnaya-Ogburu, I. F., et al. (2020). Critical race theory for HCI. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-16). Association for Computing Machinery. https://doi.org/10.1145/3313831.3376392
    • Seeking to guide the development of racially diverse sociotechnical systems, this paper calls for the introduction of critical race theory insights to the field of human-computer interaction. The authors provide an overview of the central tenets of critical race theory, such as the everyday universality of racism, and identify key areas for their incorporation. Additionally, the authors apply the storytelling methodology of critical race theory to describe their own experiences as racialized people performing computational research. From this discussion, the authors conclude with a call to anti-racist action for HCI practitioners.
  • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. https://nyupress.org/9781479837243/algorithms-of-oppression/
    • This book proposes that search engines are neither benign nor neutral, and instead operate to conceal and amplify social biases. The author unveils the distorted representation of women and racial minorities on platforms like Google’s image search, and discusses the harms inflicted on these communities during the creation and use of search engines. The author argues that the private interests of a few monopolistic sites enable this pattern of digital oppression.
  • Pleiss, G., et al. (2017). On fairness and calibration. Advances in Neural Information Processing Systems, 30, 5680-5689.
    • This article investigates the tension between minimizing error disparity across different population groups and maintaining calibrated probability estimates. The article argues that calibration is compatible only with a single error constraint (i.e., equal false-negative rates across groups), and shows that any algorithm satisfying this relaxation is no better than randomizing a percentage of predictions for an existing classifier.
  • Roberts, D. E. (2003). The social and moral cost of mass incarceration in African American communities. Stanford Law Review, 56(5), 1271-1306.
    • While many studies focus on the potential causes of racial discrepancies in the American prison system, this article instead examines the costs inflicted by the mass incarceration of African Americans. It considers three aspects of Black communities that are harmed through mass imprisonment: social networks, social norms, and social citizenship. The author contends that these community-level harms illustrate the disproportionate attention paid to criminality compared to other risks of incarceration, such as the political subordination of African Americans. The author proposes that the justifications for punishment need to be radically rethought in this context.
  • Selbst, A. D. (2017). Disparate impact in big data policing. Georgia Law Review, 52(1), 109-195.
    • This paper argues that the degree to which predictive policing systems incur discriminatory results is unclear to the public and to the police themselves, largely because there is no incentive in place for a department focused solely on “crime control” to spend resources asking the question. Thus, the author proposes a new regulatory approach centered on “algorithmic impact statements” to mitigate the issues created by predictive systems.
  • Stevenson, M. (2018).* Assessing risk assessment in action. Minnesota Law Review, 103(1), 303-384.
    • This article documents the impacts of risk assessment in practice and argues that risk assessment had no effect on racial disparities in pretrial detention once differing regional trends were accounted for. This is shown using data from more than one million criminal cases, highlighting that a 2011 law making risk assessment a mandatory part of the bail decision led to a significant change in bail setting practice, but only a small increase in pretrial release. 

Chapter 40. “Fair Notice” in the Age of AI (Kiel Brennan-Marquez)⬆︎

  • Atkinson, K., et al. (2020). Explanation in AI and law: Past, present and future. Artificial Intelligence, 289, 103387. https://doi.org/10.1016/j.artint.2020.103387
    • Atkinson and colleagues offer a review of the different techniques that are used to explain automated decisions made in legal contexts, describing how these tools have developed and flagging gaps that remain. They argue that law is an exemplary context in which to study the problem of AI explainability due to the high standards of transparency required in legal settings.
  • Brennan-Marquez, K. (2017).* Plausible cause: Explanatory standards in the age of powerful machines. Vanderbilt Law Review, 70(4), 1249-1302.
    • This article argues that statistical accuracy, though important, is not the crux of explanatory standards. The value of human judges lies in their practiced wisdom rather than their analytic power. The author replies to a common argument against replacing judges, which claims that intelligent machines are not (yet) intelligent enough to take up the mantle, by highlighting that powerful intelligent algorithms already exist and that judging, in any case, is not about intelligence but about prudence.
  • Brennan-Marquez, K. (2019).* Extremely broad laws. Arizona Law Review, 61(3), 641-666.
    • This article argues that extremely broad laws offend due process because they afford state officials practically boundless justification to interfere with private life. Thus, the article explores how courts might tackle the breadth problem in practice—and ultimately suggests that judges should be empowered to hold statutes “void-for-breadth.”
  • Bushway, S. D. (2020). “Nothing is more opaque than absolute transparency”: The use of prior history to guide sentencing. Harvard Data Science Review, 2(1). https://doi.org/10.1162/99608f92.468468af
    • In this commentary, Bushway responds to Rudin and colleagues (2020), arguing that their focus on transparency and advocacy for simplified risk algorithms ignores the fact that using criminal histories for these predictions is not only unfair, but also unreliable. Bushway makes this case by critiquing how past sentencing reforms have sought to standardize sentencing by removing judicial discretion, based upon the flawed assumption that a criminal history is a reliable indicator of human behaviour.
  • Citron, D. K. (2007).* Technological due process. Washington University Law Review, 85(6), 1249-1314.
    • This article aims to demonstrate how a carefully structured inquisitorial model of quality control can partially replace aspects of adversarial justice that automation renders ineffectual. It also provides a framework for technological due process to ensure that it preserves transparency, accountability, and accuracy of rules in automated decision-making systems.
  • Citron, D. K., & Pasquale, F. (2014).* The scored society: Due process for automated predictions. Washington Law Review, 89(1), 1-34.
    • This article argues that though automated scoring may be pervasive and consequential, it is also opaque and lacking oversight. Thus, automated scoring must be implemented alongside protections, such as testing scoring systems to ensure their fairness and accuracy, otherwise systems could launder biased and arbitrary data into powerfully stigmatizing scores.
  • Cohen, J. E. (2012). Configuring the networked self: Law, code, and the play of everyday practice. Yale University Press.
    • This book argues that legal and technical rules governing flows of information are out of balance, as flows of cultural and technical information are overly restricted, while flows of personal information often are not restricted at all.
  • Crawford, K., & Schultz, J. (2014).* Big data and due process: Toward a framework to redress predictive privacy harms. Boston College Law Review, 55(1), 93-128.
    • This article highlights how Big Data has vastly increased the scope of personally identifiable information and how poor execution of Big Data methodology may create additional harms by rendering inaccurate profiles that nonetheless impact an individual’s life and livelihood. Thus, the article argues for a mitigation of predictive privacy harms through a right to procedural data due process.
  • Delacroix, S. (2018). Computer systems fit for the legal profession? Legal Ethics, 21(2), 119-135.
    • This article argues against the view that wholesale automation is legitimate and desirable provided it improves the quality and accessibility of legal services, claiming that such automation comes at the cost of moral equality. In response, the author proposes designing profession-specific systems, in contrast to generalized automation, that better enable legal professionals to live up to their specific responsibilities.
  • Ferguson, A. G. (2019). The rise of big data policing: Surveillance, race, and the future of law enforcement. NYU Press.
    • This book discusses the consequences of big data and algorithm-driven policing and its impact on law enforcement. It then explores how technology will change law enforcement and its potential threat to the security, privacy, and constitutional rights of citizens.
  • Froomkin, A. M., et al. (2019). When AIs outperform doctors: Confronting the challenges of a tort-induced over-reliance on machine learning. Arizona Law Review, 61(1), 33-100.
    • This article argues that, at present, a combination of human and machine may be more effective than either alone in medical diagnosis, but that in time machines will improve and become more effective, creating overwhelming legal and ethical pressure to delegate the diagnostic process to the machine. The authors conclude that existing medical malpractice law will then require superior ML-generated medical diagnostics as the standard of care in clinical settings.
  • Grimmelmann, J., & Westreich, D. (2017).* Incomprehensible discrimination. California Law Review Online, 7, 164-177.
    • This article explores and replies to Solon Barocas and Andrew Selbst’s argument in Big Data’s Disparate Impact concerning the use of algorithmically derived models that are both predictive of a legitimate goal and have a disparate impact on some individuals. The authors agree that these models have a potential impact on antidiscrimination law, but argue for a more optimistic stance: that the law already has the doctrinal tools it needs to deal appropriately with cases of this sort.
  • Hacker, P., et al. (2020). Explainable AI under contract and tort law: Legal incentives and technical challenges. Artificial Intelligence and Law, 28(4), 415–439. https://doi.org/10.1007/s10506-020-09260-6
    • Hacker and colleagues argue that the law incentivizes the adoption of explainable AI in ways that are not always obvious. Through case studies in medicine and corporate acquisitions, they show that explainable AI is incentivized as a way of avoiding liability despite trade-offs in accuracy. The potential for legally mandated explainable AI in certain settings would shift how certain professionals are required to understand their legal obligations, and the extent to which they can recognize risks in advance of an adverse outcome.
  • Karsai, K. (2020). Algorithmic decision making and issues of criminal justice—A general approach. In C. Dumitru (Ed.), In honorem Valentin Mirisan. Ganduri, studii si institutii (pp. 146-161). Universul Juridic SRL. https://papers.ssrn.com/abstract=3612106
    • In this book chapter, Karsai outlines basic concepts relevant to the use of algorithmic decision-making systems in criminal justice in an effort to inform legal stakeholders. The author argues that both lawyers and lawmakers must engage with the socio-legal implications of automated decision making in criminal justice because the use of data-driven technologies in criminal justice continues to expand.
  • Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14-29. https://doi.org/10.1080/1369118X.2016.1154087
    • This paper explores and expands upon current thinking about algorithms and considers how best to research them in practice. Concepts such as the importance of algorithms in shaping social and economic life, how they are embedded in wider socio-technical assemblages, and challenges that arise when researching algorithms are explored. 
  • Manes, J. (2017).* Secret law. Georgetown Law Journal, 106(3), 803-870.
    • This article aims to unpack the underlying normative principles that both militate against secret law and motivate its widespread use. By investigating the tradeoff between democratic accountability, individual liberty, separation of powers, and pragmatic national security purposes created by secret law, this article proposes a systematic rubric for evaluating particular instances of secret law.
  • Manes, J. (2019). Secrecy & evasion in police surveillance technology. Berkeley Technology Law Journal, 34, 503.
    • This article examines the anti-circumvention argument for secrecy which claims that disclosure of police technologies would allow criminals to evade the law. This article then argues that this argument permits far more secrecy than it can justify, and finally proposes specific reforms to circumscribe laws that currently authorize excessive secrecy in the name of preventing evasion.
  • Markovic, M. (2019). Rise of the robot lawyers. Arizona Law Review, 61(2), 325-350.
    • This article argues against the claim that lawyers will be displaced by artificial intelligence on both empirical and normative grounds. The argument is developed as follows: first, artificial intelligence cannot handle the abstract nature of legal tasks; second, the legal profession has grown and benefited from technology rather than being challenged by it; and third, even if large-scale automation of legal work were possible, core societal values would counsel against it.
  • Metcalf, J., & Crawford, K. (2016). Where are human subjects in big data research? The emerging ethics divide. Big Data & Society, 3(1). https://doi.org/10.1177/2053951716650211
    • Against the background of the first proposed major revisions in decades to the Common Rule, the primary regulation governing human-subjects research in the USA, this article argues that data science should be understood as continuous with the social sciences in regard to the stringency of the ethical regulations that govern it, since the potential harms of data science research are unpredictable.
  • Pasquale F. (2015).* The black box society: The secret algorithms that control money and information. Harvard University Press.
    • In this book, Pasquale explores the power of ‘hidden algorithms’. He argues that such algorithms permit self-serving and reckless behavior, and shows how powerful interests abuse their secrecy for profit. Transparency must therefore be demanded of firms, such that they accept as much accountability as they impose on others.
  • Pasquale, F. (2019).* A rule of persons, not machines: The limits of legal automation. George Washington Law Review, 87(1), 1-55.
    • This article argues that legal automation cannot replace human legal practice, as it can elude or exclude important human values, necessary improvisations, and irreducibly deliberative governance; in particular, software cannot replicate narratively intelligible communication from persons and for persons. Thus, in order to preserve accountability and a humane legal order, persons, not machines, are required in the legal profession.
  • Re, R. M., & Solow-Niederman, A. (2019). Developing artificially intelligent justice. Stanford Technology Law Review, 22(2), 242-289.
    • This article argues that AI adjudication will profoundly affect the adjudicatory values held by legal actors as well as the public at large, particularly in areas where “equitable justice,” or discretionary moral judgment, is most significantly exercised. In contrast, AI adjudication would promote “codified justice,” which privileges standardization above discretion.
  • Ross, L. D. (2020). Legal proof and statistical conjunctions. Philosophical Studies. https://doi.org/10.1007/s11098-020-01521-z
    • In this article, Ross discusses the extent to which statistical evidence should form the basis of a legal outcome. Problematizing dominant theories which hold that statistics should not form the basis of legal verdicts, Ross suggests that multiple pieces of statistical evidence ought to be admissible as reliable evidence in legal proceedings. Ross concludes by suggesting that qualitative narrative evidence is more valuable than statistical evidence in courts, not because it is of a higher quality, but because it is inaccurately perceived as more reliable by the public.
  • Rudin, C., et al. (2020). The age of secrecy and unfairness in recidivism prediction. Harvard Data Science Review, 2(1). https://doi.org/10.1162/99608f92.6ed64b30
    • Rudin and colleagues suggest that debates about the use of algorithmic technologies to predict recidivism have been fruitless because of competing and contradictory definitions of fairness. Through an analysis of the COMPAS algorithm used to predict recidivism in the United States, the authors show that non-transparency has led to misinterpretations of the model and hampered informed conversations about its fairness. They argue that transparency is a requisite for procedural fairness that has been neglected in these conversations in the past and call for a simplified form of risk assessment based upon age and criminal past. 
  • Selbst, A. D., & Barocas, S. (2018).* The intuitive appeal of explainable machines. Fordham Law Review, 87(3), 1085-1140.
    • In this article, the authors aim to show what makes decisions made by algorithms seem inexplicable, by examining what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation.
  • Solove, D. J. (2011). Nothing to hide: The false tradeoff between privacy and security. Yale University Press.
    • In this book, Solove argues against the claim that society has a duty to sacrifice privacy for security by exposing the fallacies and flaws of these claims, then arguing that protecting privacy isn’t fatal to security measures; it merely involves adequate oversight and regulation.
  • Streel, A. D., et al. (2020). Explaining the black box: When law controls AI. Centre on Regulation in Europe. http://www.crid.be/pdf/public/8578.pdf 
    • This report discusses the issue of AI explainability relative to the recommendations of the European High-Level Expert Group on AI and a plan set out by the European Commission in its White Paper on AI. Streel and colleagues begin by outlining different legal and scientific definitions of explainability before relating them to the proposed European regulations and how they might be achieved in practice.
  • Tortora, L., et al. (2020). Neuroprediction and A.I. in forensic psychiatry and criminal justice: A neurolaw perspective. Frontiers in Psychology, 11, 220. https://doi.org/10.3389/fpsyg.2020.00220 
    • This article by Tortora and colleagues explores the potential of a host of AI-powered neuro-imaging techniques – referred to as “AI neuroprediction” – for risk assessment in the criminal justice system. They review the academic literature on these techniques to consider their potential application in predicting future violence or the likelihood of rearrest. AI neuroprediction in criminal justice would have many implications for procedural fairness, as justice outcomes may be influenced by the potential occurrence of an unspecified crime at some point in the future – a clear violation of the principle of fair notice.

Chapter 41. AI and Migration Management (Petra Molnar)⬆︎

  • Ahmad, N. (2020). Refugees and algorithmic humanitarianism: Applying artificial intelligence to RSD procedures and immigration decisions and making global human rights obligations relevant to AI governance. International Journal on Minority and Group Rights. https://doi.org/10.1163/15718115-BJA10007
    • Ahmad argues that the introduction of AI in humanitarian work has occurred “without ethics, justice, and morality.” From a human rights perspective, the author laments that humanitarian AI has been adopted without adequate regard for individual privacy, nor for the various strategic (ab)uses of data extracted from migrant populations. Ahmad calls for a “reprogramming” of humanitarian AI to better align with human rights norms and promote more sustainable uses of technology in the field.
  • Austin, L. (2018, July 9). We must not treat data like a natural resource. The Globe and Mail. https://www.theglobeandmail.com/opinion/article-we-must-not-treat-data-like-a-natural-resource/
    • In this opinion piece, Austin argues that framing data transformation as a balance between economic innovation and privacy provides a narrow framework for understanding what is at stake. Not only are these values not necessarily in tension, but the focus on privacy and ownership language fails to capture implications for the public sphere, human rights, and social interests. Austin proposes a better framing – one that goes beyond data as an extractable resource and recognizes data as a new informational dimension to individual and community life.
  • Azizi, S., & Yektansani, K. (2020). Artificial intelligence and predicting illegal immigration to the USA. International Migration, 58(5), 183–193. https://doi.org/10.1111/imig.12695
    • Noting the prevalence of irregular migration into the United States, Azizi and Yektansani argue that it is “essential to predict whether visa applicants overstay their visas.” The authors apply machine learning techniques to a set of pre-immigration variables and claim to predict the legal status of 80 percent of Mexicans coming to the United States. This paper offers an example of how ethically and legally dubious artificial intelligence techniques can be used to discriminate against vulnerable immigrants.
  • Barocas, S., & Selbst, A. D. (2016).* Big data’s disparate impact. California Law Review, 104(3), 671-732.
    • This essay examines data bias concerns through the lens of American discrimination law. In light of algorithms frequently inheriting the prejudices of prior decision makers, and the difficulties of identifying the source of bias or explaining it to a court, the authors look to disparate impact doctrine in workplace discrimination law to identify potential remedies for the victims of data mining. The authors underscore that finding a solution to Big Data’s disparate impact requires re-examining the meanings of “discrimination” and “fairness” in addition to efforts to eliminate prejudice and bias.
  • Beduschi, A. (2020). International migration management in the age of artificial intelligence. Migration Studies. https://doi.org/10.1093/migration/mnaa003
    • Pointing to the early-stage use of AI in immigration and asylum determinations in Canada and Germany, Beduschi predicts that AI will affect migration management along three primary axes: expanding power gaps between states on the world stage; modernising the migration management practices of states and international organizations; and bolstering discourses of evidence-based immigration and border management. The author concludes by warning policymakers against adopting AI technologies without understanding their legal and ethical implications.
  • Benvenisti, E. (2018). Upholding democracy amid the challenges of new technology: What role for the law of global governance? European Journal of International Law, 29(1), 9-82.
    • This article describes how law has evolved with the growing need for accountability of global governance bodies and analyzes why legal tools are ill-equipped to address new modalities of governance based on new information and communication technologies and automated decision making using raw data. Benvenisti argues that the law of global governance extends beyond ensuring accountability of global governance bodies and serves to protect human dignity and the viability of the democratic state.
  • Carens, J. (2013). The ethics of immigration. Oxford University Press.
    • This book explores how contemporary immigration issues present practical problems for Western democracies while challenging how the concepts of citizenship and belonging, rights and responsibilities, as well as freedom and equality are understood. The author uses the moral framework of liberal democracy to propose that a commitment to open borders is necessary to uphold values of freedom and equality. 
  • Chambers, P., & Mann, M. (2019). Crimmigration in border security? Sorting crossing through biometric identification at Australia’s international airports. In P. Billings (Ed.), Crimmigration in Australia: Law, politics, and society (pp. 381–404). Springer. https://doi.org/10.1007/978-981-13-9093-7_16
    • Chambers and Mann examine the use of biometric identification in Australia’s international airports. They suggest the criminological lens of ‘crimmigration’ is not an apt way to understand the function creep of biometric technologies like fingerprint scanning and facial recognition in airports. Rather, the authors argue that the concept of surveillance capitalism reframes these practices as the displacement of liberal democratic values in favour of “surveillance and security aligned with global capitalism.”
  • Côté-Boucher, K. (2020). Border frictions: Gender, generation and technology on the frontline. Routledge.
    • Côté-Boucher describes how surveillance technologies, including artificial intelligence, have become central to managing the flow of goods and people at the Canadian border. Using ethnographic methods and policy analysis, the author explores the proliferation of surveillance technology, “the fraught circulation of data,” the role of labour unions, and the gendered and generationally inflected professional identities of border agents. In this way, she traces a shift at the border from an economically oriented customs agency to a security-oriented police force.
  • Crisp, J. (2018). Beware the notion that better data lead to better outcomes for refugees and migrants. Chatham House.
    • This article explores the implications of data collection, analysis, and dissemination among states and international organizations in migration governance. Crisp challenges the notion that more data lead to better migration policies. The author stresses that while data collection and analysis may produce insights into migrant needs, movement patterns, and socio-economic conditions, there are important challenges related to confidentiality, information security, and the potential for abuse. The author warns against the adoption of technocratic and apolitical approaches to humanitarian aid in which data collection supersedes the imperative of ensuring the humane treatment of migrants and refugees.
  • Csernatoni, R. (2018). Constructing the EU’s high-tech borders: FRONTEX and dual-use drones for border management. European Security, 27(2), 175-200.
    • This article examines the EU’s strategy to develop technologies such as aerial surveillance drones for border management and security. The author contends that the normalization of drone use at the border-zone embodies a host of ethical and legal implications and falls within a broader European securitized approach to migration. The article explores how this “dronisation” is presented as a technical panacea for the consequences of failed irregular migration management policies and creates further opportunities for exploitation of vulnerable migrants. 
  • Farraj, A. (2010). Refugees and the biometric future: The impact of biometrics on refugees and asylum seekers. Columbia Human Rights Law Review, 42(3), 891-941.
    • This paper explores the impacts of biometric technologies on refugees and asylum seekers. It surveys the various ways in which biometrics are used and explores privacy implications, comparing standards and protections laid out by U.S. and EU law. The author underscores the importance of utilizing biometrics to protect refugees and asylum seekers, arguing that their well-being is furthered by the collection, storage, and utilization of their biometric information.
  • Hall, A. (2017). Decisions at the data border: Discretion, discernment and security. Security Dialogue, 48(6), 488–504. https://doi.org/10.1177/0967010617733668
    • This article focuses on how interactions between algorithms and analysts shape decisions about border security. Hall draws on interviews with European data processors to argue that discretion remains “an uncertain visual practice oriented to seeing and authorizing what is there.” However, Hall also shows that automation in border security upends how security institutions manage the relationship between general rules and individual judgement by prioritizing inflexible policies over the particular context of a given traveller.
  • Helbing, D., et al. (2019).* Will democracy survive big data and artificial intelligence? In D. Helbing (Eds.), Towards digital enlightenment (pp. 73-98). Springer.
    • This article examines how the “data revolution” and widespread automation of data analysis threaten to undermine core democratic values if basic rights of citizens are not protected. The authors argue that Big Data, automation, and nudging should not be used to incapacitate citizens or control behaviors, and propose various fundamental principles derived from democratic societies that should guide the use of Big Data and AI. 
  • Johns, F. (2017). Data, detection, and the redistribution of the sensible in international law. American Journal of International Law, 111(1), 57-103.
    • This article explores how technology changes and mediates the jurisdiction of international law and international institutions such as the UNHCR. The author surveys changes in international legal and institutional work to highlight the distributive implications of automation in shaping allocations of power, competence, and capital. The author claims that technologically advanced modes of data gathering and analysis, and the introduction of machine learning, result in new configurations of inequality and international institutional work that fall outside the scope of existing international legal thought, doctrine, and practice.
  • Jupe, L. M., & Keatley, D. A. (2020). Airport artificial intelligence can detect deception: Or am I lying? Security Journal, 33(4), 622–635. https://doi.org/10.1057/s41284-019-00204-7 
    • In this article, Jupe and Keatley argue that the use of AI lie-detectors as part of the European Union-funded iBorderCtrl initiative, which among other AI techniques relies on facial micro-expression detection for airport security, “is naïve and misinformed.” They claim the adoption of such techniques is unwarranted given the lack of empirical research demonstrating that micro-expressions are a reliable and valid method of detecting deception.
  • Lepri, B., et al. (2018). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31(4), 611-627.
    • This article provides an overview of available technical solutions to enhance fairness, accountability, and transparency in algorithmic decision-making. The authors underscore the crucial and urgent need to engage multi-disciplinary teams of researchers, policymakers, practitioners, and citizens to co-develop and evaluate algorithmic decision-making processes designed to maximize fairness and transparency in support of democracy and development.
  • Liu, H. Y., & Zawieska, K. (2017).* A new human rights regime to address robotics and artificial intelligence. In 2017 Proceedings of the 20th International Legal Informatics Symposium (pp. 179-184). Oesterreichische Computer Gesellschaft.
    • This paper examines how a declining human ability to control technology suggests a declining power differential and possibility of inverse power relations between humans and AI. The authors explore how this potential inversion of power impacts the protection of fundamental human rights, and propose that the opacity of potentially harmful AI systems risks eroding rights-based responsibility and accountability mechanisms. 
  • Maas, M. M. (2019).* International law does not compute: Artificial intelligence and the development, displacement or destruction of the global legal order. Melbourne Journal of International Law, 20, 29-57.
    • This paper draws upon techno-historical scholarship to assess the relationship between new technologies and international law. The paper aims to demonstrate how new technologies change legal situations both directly, by creating new entities and enabling new behavior, and indirectly, by shifting incentives or values. The author proposes that the technically and politically disruptive features of AI threaten to destroy key areas of international law, suggesting a risk of obsolescence for distinct international legal regimes.
  • Magnet, S. (2011). When biometrics fail: Gender, race, and the technology of identity. Duke University Press.
    • This book analyzes the state use of biometrics to control and classify vulnerable marginalized populations and track individuals beyond national territorial boundaries. The author explores cases of failed biometrics to demonstrate how these technologies work differently, and fail more often, on women, racialized populations, and people with disabilities, and stresses that these failures result from biometric technologies falsely assuming that human bodies are universal and unchanging over time. 
  • McCarroll, E. (2019). Weapons of mass deportation: Big data and automated decision-making systems in immigration law. Georgetown Immigration Law Journal, 34(3), 705–732.
    • This article argues that the present use of automated decision-making (ADM) systems in immigration enforcement is highly problematic under American and international law. Detailing the ongoing use of risk classifications and automated surveillance by Immigration and Customs Enforcement in the United States, the author raises concerns about discrimination, non-transparency, and political manipulation, and puts forth four key policy recommendations for the legal use of ADM in immigration. McCarroll warns that, while these practices disproportionately impact marginalized communities, they erode civil liberties on the whole.
  • McGregor, L., et al. (2019). International human rights as a framework for algorithmic accountability. International and Comparative Law Quarterly, 68(2), 309-343.
    • This article explores the potential human rights harms caused by the use of algorithms in decision-making. The authors analyze how international human rights law provides a framework for shared understanding and a means of assessing harm, one that deals with multiple actors and forms of responsibility and applies across the full algorithmic life cycle, from conception to deployment.
  • Molnar, P., & Gill, L. (2018). Bots at the gate: A human rights analysis of automated decision-making in Canada’s immigration and refugee system. University of Toronto’s International Human Rights Program (IHRP) at the Faculty of Law and the Citizen Lab at the Munk School of Global Affairs and Public Policy, with support from the IT3 Lab at the University of Toronto. https://it3.utoronto.ca/wp-content/uploads/2018/10/20180926-IHRP-Automated-Systems-Report-Web.pdf
    • This report highlights the human rights implications of using algorithmic and automated technologies for administrative decision-making in Canada’s immigration and refugee system. Molnar and Gill survey current and proposed uses of automated decision-making, illustrate how decisions may be affected by new technologies, and develop a human rights analysis from domestic and international perspectives. The report outlines several policy challenges related to the adoption of these technologies and presents a series of policy recommendations for the federal government.  
  • Mukherjee, S., et al. (2020). Immigration document classification and automated response generation. In 2020 International Conference on Data Mining Workshops (ICDMW) (pp. 782-789). Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/ICDMW51313.2020.00114 
    • In this paper, Mukherjee and colleagues work to address the problem of repetitive manual information processing in American immigration applications. They apply several image and text classifier algorithms to automatically categorize application supporting documents and evidence, while ensuring a robust human review process. They argue that their method can significantly reduce application processing time without major sacrifices in accuracy.
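For readers unfamiliar with this kind of pipeline, the sketch below shows a generic supporting-document text classifier (TF-IDF features with logistic regression). It is purely illustrative: the document snippets and category labels are hypothetical, and it does not reproduce the authors' actual image and text models.

```python
# Illustrative sketch only (not the authors' pipeline): routing
# immigration supporting documents by category from their text.
# All snippets and labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "employment verification letter for the applicant",
    "pay stub showing biweekly wages for the applicant",
    "marriage certificate issued by the county clerk",
    "birth certificate registered with the state",
    "lease agreement listing both spouses as tenants",
    "utility bill addressed to the shared residence",
]
train_labels = [
    "employment", "employment",
    "civil_record", "civil_record",
    "residence", "residence",
]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)

# A human reviewer would still check low-confidence predictions,
# in line with the robust review process the paper emphasizes.
print(clf.predict(["notarized letter confirming current employment"]))
```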
  • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
    • Noble’s book challenges the notion that search engines like Google are value-neutral. Noble reveals how the combination of private interests in promoting certain sites and the monopoly status of a small number of Internet search engines produces biased search algorithms embedded with “data discrimination” in ways that privilege whiteness while marginalizing people of color.
  • Raymond, et al. (2016). Building data responsibility into humanitarian action. OCHA Policy and Studies Series. https://ssrn.com/abstract=3141479
    • This paper explores the risks and challenges of collecting, analyzing, aggregating, sharing, and using data for humanitarian projects, including the handling of sensitive data and issues of bias and discrimination. Drawing on case studies of data-driven initiatives across the globe, the authors identify the critical issues humanitarians face as they use data in operations, and propose an initial framework for data responsibility.
  • Staton, B. (2016). Eye spy: Biometric aid system trials in Jordan. The New Humanitarian. https://www.thenewhumanitarian.org/analysis/2016/05/18/eye-spy-biometric-aid-system-trials-jordan
    • Staton’s article explores the use of biometric iris scanners in Syrian refugee camps in Azraq, Jordan. Through interviews with the technology’s developers, users, and advocacy groups, Staton outlines the proposed practical and security benefits of the technology as well as refugees’ concerns surrounding privacy, possibility of abuses and data error, and effects on health and wellbeing. Staton’s article acknowledges the rapidly growing adoption of technology in humanitarian aid and places biometric iris scanning technology in broader debates surrounding responsible data use and protecting vulnerable populations from potential harm. 
  • Tingzon, I., et al. (2020). Mapping new informal settlements using machine learning and time series satellite images: An application in the Venezuelan migration crisis. In 2020 IEEE / ITU International Conference on Artificial Intelligence for Good (AI4G), (pp. 198–203). Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/AI4G50087.2020.9311041
    • This conference paper presents a machine learning method for monitoring migration patterns to assist state and non-governmental humanitarian efforts. Using the case of out-migration from Venezuela into Colombia, the authors demonstrate that they can partially automate detection of informal settlements with a random forest classifier and time-series satellite imagery and verify predictions with Google Earth and a mobile crowd-sourcing app. They argue that this method can help efficiently deploy resources to populations in need.
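The general shape of such a pipeline, a random forest over per-pixel time-series features, can be sketched as below. The feature layout, labels, and all data here are hypothetical placeholders, not the authors' dataset or code.

```python
# Minimal sketch under stated assumptions (not the authors' code):
# a random forest over flattened per-pixel satellite time series.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Hypothetical features: monthly values of a few spectral bands per pixel.
n_pixels, n_months, n_bands = 1_000, 12, 4
X = rng.normal(size=(n_pixels, n_months * n_bands))
y = rng.integers(0, 2, n_pixels)  # placeholder labels: 1 = informal settlement

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# Probabilities for new pixels; in the paper's workflow, high-scoring
# areas are then verified with Google Earth and crowd-sourced checks.
print(model.predict_proba(X[:5])[:, 1])
```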
  • Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.
    • Zuboff’s book explains and details how the phenomenon of surveillance capitalism threatens to modify human behavior for profit by producing new forms of economic oppression where wealth and power are accumulated in behavioral futures markets and behavioral predictions are bought and sold. Zuboff stresses that the ubiquity of digital architecture creates a controlled hive of total connection that promises certainty and economic gain at the expense of democracy and freedom. 

Chapter 42. Robot Teaching, Pedagogy, And Policy (Elana Zeide)⬆︎

  • Bradbury, A., & Roberts-Holmes, G. (2017). The datafication of primary and early years education: Playing with numbers. Routledge.
    • This book analyzes the trend of increased data use in schools, particularly within early childhood education, exploring the impact of its use in ‘data-obsessed’ schools. Using case studies and both sociological and post-foundational frameworks, the authors argue that new teacher and student subjectivities are created while the complexity of children’s learning is reduced.
  • Bradbury, A. (2019). Datafied at four: The role of data in the ‘schoolification’ of early childhood education in England. Learning, Media and Technology, 44(1), 7-21. https://doi.org/10.1080/17439884.2018.1511577
    • This article examines the impact of datafication on children from birth to age five in England, arguing that nurseries and schools are subjected to demands from data, creating new subjectivities and leading to the prioritization of measurement over learning.
  • Edwards, R. (2015).* Software and the hidden curriculum in digital education. Pedagogy, Culture & Society, 23(2), 265–79. https://doi.org/10.1080/14681366.2014.977809
    • This article challenges the positioning of emerging technologies as mere tools to enhance teaching and learning, by highlighting the ways in which these technologies shape curriculum and limit modes of interaction between teachers and students.
  • Fenwick, T., & Edwards, R. (2016). Exploring the impact of digital technologies on professional responsibilities and education. European Educational Research Journal, 15(1), 117-131. https://doi.org/10.1177%2F1474904115608387
    • This article examines how new digital technologies are impacting the relationship between professionals and their clients, users, and students. As a result, new forms of accountability and responsibility have emerged.
  • Gourlay, L. (2021). There is no ‘virtual learning’: The materiality of digital education. Journal of New Approaches in Educational Research, 9(2), 57-66. https://doi.org/10.7821/naer.2021.1.649 
    • Adopting a sociomaterial perspective, Gourlay argues that the notion of ‘virtual learning’ is inherently flawed, despite many educators combining it with face-to-face instruction to create what is known as ‘blended learning.’ The author contends that virtual learning is more complicated than many acknowledge, and that it is in fact grounded in materiality.
  • Gulson, K. N., & Sellar, S. (2019). Emerging data infrastructures and the new topologies of education policy. Environment and Planning D: Society and Space, 37(2), 350-366. https://doi.org/10.1177%2F0263775818813144
    • This article argues that datafication in educational policy is creating new topologies. The authors outline a case study of an emergent data infrastructure in Australian schooling called the National Schools Interoperability Program. The study is used to provide empirical evidence of the movement, connection, and enactment of digital data across policy spaces, including the ways that data infrastructure is: (i) enabling new private and public connections across policy topologies; (ii) creating a new role for technical standards in education policy; and (iii) changing the topological spaces of education governance.
  • Hartong, S., & Förschler, A. (2019). Opening the black box of data-based school monitoring: Data infrastructures, flows and practices in state education agencies. Big Data & Society, 6(1). https://doi.org/10.1177%2F2053951719853311
    • This article examines digital data infrastructures in state education agencies, considering the role of school monitoring. The authors argue that the rise of digital technologies creates new capabilities and powers, and suggest that teachers should be given more information about these tools.
  • Herold, B. & Molnar, M. (2018, November 6).* Are companies overselling personalized learning? Education Week. https://www.edweek.org/technology/are-companies-overselling-personalized-learning/2018/11
    • This article critiques the use of the term “personalized learning” as it has no set definition and can refer to a variety of pedagogical strategies. Rather, the term has been used as a marketing tool for companies looking to sell their products to educators.
  • Herold, B. (2018, November 7).* What does personalized learning mean? Whatever people want it to. Education Week. https://www.edweek.org/ew/articles/2018/11/07/what-does-personalized-learning-mean-whatever-people.html
    • This article critiques the variety of definitions applied to the term personalized learning, arguing that loose definitions can result in incoherent policy and ineffective educational outcomes.
  • Hood, N. (2018). Re-imagining the nature of (student-focused) learning through digital technology. Policy Futures in Education, 16(3), 321-326.
    • This paper explores some of the questions about the role of AI in education and learning. In particular, the article examines issues of equity and social justice, what it means to design educational and learning experiences that are truly student-focused, and the potential for technology to dehumanize the learning process.
  • Hossain, S. F., et al. (2021). Exploring the role of AI in K12: Are robot teachers taking over? In I. Jaafar & J. M. Pedersen (Eds.), Emerging realities and the future of technology in the classroom (pp. 120-135). IGI Global. https://www.irma-international.org/chapter/exploring-the-role-of-ai-in-k12/275651/
    • This chapter summarizes a focus group interview conducted to study the role of artificial intelligence (AI) in K-12 education systems. The findings reveal how traditional learning methods have been transformed by factors like the COVID-19 pandemic, and how the role of AI in these systems is receiving more scholarly attention than ever before. The authors also draw attention to the use of AI-enhanced teaching and how it can support sustainable educational development.
  • Jones, K., et al. (2021). Do they even care? Measuring instructor value of student privacy in the context of learning analytics. 54th Hawaii International Conference on System Sciences. https://doi.org/10.24251/hicss.2021.185
    • This presentation examines the increasingly large role that learning analytics tools play in educational systems. The authors argue that despite faculty, staff, and students being concerned about privacy in their personal lives, it is unclear whether or not these groups prioritize their privacy in the educational setting. 
  • Landri, P. (2018). Digital governance of education: Technology, standards and Europeanization of education. Bloomsbury Publishing.
    • Adopting a sociomaterial approach to education policy, this book explores how datafication impacts the experience of education. Landri argues that this datafication has drastic effects on how education systems are organized and managed, including the standardization of education and transparency in educational practices.
  • Lindh, M., & Nolin, J. (2016). Information we collect: Surveillance and privacy in the implementation of Google apps for education. European Educational Research Journal, 15(6), 644-663. https://doi.org/10.1177%2F1474904116654917
    • This study, conducted in a Swedish school organization, argues that Google’s business model for online marketing is embedded in its educational tools, Google Apps for Education (GAFE). By making a distinction between (your) ‘data’ and (collected) ‘information,’ Google can disguise the presence of its business model.
  • McStay, A. (2019). Emotional AI and EdTech: Serving the public good? Learning, Media and Technology, 45(3), 270–283. https://doi.org/10.1080/17439884.2020.1686016 
    • This article examines the role of education technology companies employing AI to quantify emotional learning in classrooms. The author argues that these technologies raise important concerns about their methodology and material effects on students, as well as the ethical and legal risks of their deployment in education.
  • Murphy, R. F. (2019).* Artificial intelligence applications to support K-12 teachers and teaching: A review of promising applications, challenges, and risks. RAND Corporation. https://www.rand.org/pubs/perspectives/PE315.html
    • This article explores how AI can be used to support K-12 teachers by assisting them with tasks rather than outright replacing them. Examined systems include intelligent tutoring, automated essay grading, and early warning protocols. Technical challenges of these systems are also discussed.
  • Office of Educational Technology, U.S. Department of Education. (2017, January 18).* What is personalized learning? Personalizing the learning experience: Insights from future ready schools. Medium. https://medium.com/personalizing-the-learning-experience-insights/what-is-personalized-learning-bc874799b6f
    • This article argues that the lack of a detailed definition for the term “personalized learning” has created problems both for understanding the concept and for implementing personalized learning curricula, where personalized learning is defined as adjusting the pace of learning to meet the needs of individual students.
  • Pearson & EdSurge. (2016).* Decoding adaptive. http://d3btwko586hcvj.cloudfront.net/static_assets/PearsonDecodingAdaptiveWeb.pdf
    • This report investigates three questions. First, what is adaptive learning? Second, what is inside the “black box” of adaptive learning? Third, how do adaptive learning tools on the market differ? The report contends that answering these questions is vital if these technologies are to improve teaching and learning in significant ways.
  • Pedro, F., et al. (2019). Artificial intelligence in education: Challenges and opportunities for sustainable development. UNESCO. https://www.gcedclearinghouse.org/sites/default/files/resources/190175eng.pdf
    • This report draws attention to the use of AI technology in developing countries and offers suggestions for how AI can be utilized to improve education policy. The authors offer six recommendations for policymakers implementing AI within educational systems: developing fair and equal policy, ensuring equity, training teachers, developing inclusive data systems, studying the impacts of AI in education, and increasing transparency in data collection.
  • Popenici, S. A., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12(1). https://doi.org/10.1186/s41039-017-0062-8
    • This paper explores the emerging use of artificial intelligence in teaching and learning within the higher education system. The authors examine the implications of these technologies as they continue to evolve, while also pointing out the challenges institutions may face in adopting them for teaching, learning, student support, and administration.
  • Regan, P. M., & Jesse, J. (2018). Ethical challenges of EdTech, big data and personalized learning: Twenty-first century student sorting and tracking. Ethics and Information Technology, 21(3), 167–179. https://doi.org/10.1007/s10676-018-9492-2
    • This paper analyzes ethical concerns surrounding the use of education technology, in particular, AI designed to create personalized learning profiles. The authors argue that characterizing these concerns under the general rubric of ‘privacy’ oversimplifies the issue and makes it too easy for advocates to dismiss or minimize them. Instead, the authors identify six distinct ethical concerns: information privacy, anonymity, surveillance, autonomy, non-discrimination, and ownership of information.
  • Selwyn, N. (2016).* Is technology good for education? John Wiley & Sons.
    • This book challenges the notion that rapid digitalization of education is net positive, arguing that we should question who stands to gain from this digitalization and what is lost when educators convert to these methods.
  • Wang, F. L., et al. (2010). Handbook of research on hybrid learning models: Advanced tools, technologies, and applications. Information Science Reference. 
    • This book, through the lens of numerous contributors, examines various hybrid learning models that are used in educational systems today. The central argument of this book is that face-to-face instruction is the most efficient way of teaching, and that technology should never be the sole factor driving educational systems. 
  • Watters, A. (2017, June 9).* The histories of personalized e-learning. Hackeducation. http://hackeducation.com/2017/06/09/personalization
    • This article asserts that emerging technology in education is not an entirely new phenomenon by tracing the decades-long history of personalized learning.
  • Williamson, B. (2018).* The hidden architecture of higher education: Building a big data infrastructure for the ‘smarter university.’ International Journal of Educational Technology in Higher Education, 15(1). https://doi.org/10.1186/s41239-018-0094-1
    • This article examines a major data infrastructure project in UK higher education, observing how the program promotes the ideal of the ‘smarter university’ while also advancing reforms through the process of marketization.
  • Williamson, B. (2016). Digital education governance: Data visualization, predictive analytics, and ‘real-time’ policy instruments. Journal of Education Policy, 31(2), 123-141. https://doi.org/10.1080/02680939.2015.1035758
    • This article maps new kinds of digital policy instruments in education. It provides two case studies of new digital data systems: The Learning Curve from Pearson Education, and learning analytics platforms that track student performance using their digital data to predict outcomes. The author finds that third-party companies exert a domineering influence that has led to a data-driven style of governing within education.
  • Williamson, B. (2016). Digital education governance: An introduction. European Educational Research Journal, 15(1), 3-13. https://doi.org/10.1177%2F1474904115616630
    • This article seeks to explain how digital technology has changed numerous trends within educational policy. This includes the phenomena of governing through data, the globalization of educational policy, accountability, global comparison, and benchmarking within the framework of emerging local, national, and international goals.
  • Wilson, A., et al. (2017). Learning analytics: Challenges and limitations. Teaching in Higher Education, 22(8), 991-1007. https://doi.org/10.1080/13562517.2017.1332026
    • This article raises concerns about the increased use of learning analytics in adult higher education, laying out potential problems. The authors propose their own analytic framework, grounded in a sociomaterial approach to pedagogy.
  • Zawacki-Richter, O., et al. (2019). Systematic review of research on artificial intelligence applications in higher education – Where are the educators? International Journal of Educational Technology in Higher Education, 16(1). https://doi.org/10.1186/s41239-019-0171-0
    • This article highlights the lack of critical examination from scholars on the impact of AI on higher education. The authors argue that most papers on AIEd come mainly from Computer Science and STEM fields, leaving a gap in the exploration of this issue from ethical and educational perspectives. The article presents four areas of AIEd applications in academic support, institutional, and administrative services: (1) profiling and prediction, (2) assessment and evaluation, (3) adaptive systems and personalization, and (4) intelligent tutoring systems.
  • Zeide, E. (2017).* The structural consequences of big data-driven education. Big Data, 5(2), 164–172. https://doi.org/10.1089/big.2016.0061
    • This article examines how data-driven tools alter the way schools make pedagogical decisions, fundamentally changing aspects of the education enterprise in the United States.

Chapter 43. Algorithms and the Social Organization of Work (Ifeoma Ajunwa and Rachel Schlund)⬆︎

  • AI Now Institute. (2018). Algorithmic Accountability Policy Toolkit. https://ainowinstitute.org/aap-toolkit.pdf
    • This policy toolkit was created by the AI Now Institute to disseminate information on the use of algorithms by governments. It presents general information about what algorithms are, how they are created, and how they work. It also includes resources for advocates, literature reviews on relevant topics, and examples of areas where AI systems have been implemented.
  • Ajunwa, I., et al. (2016). Health and big data: An ethical framework for health information collection by corporate wellness programs. The Journal of Law, Medicine & Ethics, 44(3), 474-480. https://doi.org/10.1177%2F1073110516667943
    • This essay discusses the manner in which data collection is “being utilized in wellness programs and the potential negative impact on the worker in regards to privacy and employment discrimination.” It is argued that ethical issues can be addressed “by committing to the well-settled ethical principles of informed consent, accountability, and fair use of personal health information data.” Furthermore, innovative approaches to wellness are offered that might allow for healthcare cost reduction.
  • Ajunwa, I. (2018).* Algorithms at work: Productivity monitoring applications and wearable technology as the new data-centric research agenda for employment and labor law. Saint Louis University Law Journal, 63(1), 21-54.
    • This article argues that the emergence of productivity monitoring applications and wearable technologies will lead to new legal issues for employment and labor law. These issues include concerns over privacy, unlawful employment discrimination, worker safety, and workers’ compensation. It is argued that the emergence of productivity monitoring applications will result in a conflict between the employer’s pecuniary interests and the privacy interests of the employees. The article ends by discussing future research for privacy law scholars in dealing with employee privacy and the collection and use of employee data.
  • Ajunwa, I. (2019).* Age discrimination by platforms. Berkeley Journal of Employment and Labor Law, 40(1), 1-28.
    • This article examines the manner in which platforms in the workplace might enable, facilitate, or contribute to age discrimination in employment. It discusses the legal difficulties in dealing with such practices, namely, meeting the burden of proof and assigning liability in cases where the platform acts as an intermediary. The article proceeds by offering a three-part proposal to combat the age discrimination that accompanies platform authoritarianism. 
  • Ajunwa, I. (2020).* The paradox of automation as anti-bias intervention. Cardozo Law Review, 41(5), 1671-1742.
    • This article rejects the mistaken understanding of algorithmic bias as a purely technical issue. Instead, it is argued that the introduction of bias into the hiring process derives in large part from an American legal tradition of deference to employers. The article discusses novel approaches that might be used to make employers and designers of algorithmic hiring systems liable for employment discrimination. In particular, it offers the doctrine of discrimination per se, which interprets an employer’s failure to audit and correct automated hiring platforms for disparate impact as prima facie evidence of discriminatory intent.
  • Ajunwa, I., & Greene, D. (2019).* Platforms at work: Automated hiring platforms and other new intermediaries in the organization of work. Research in the Sociology of Work, 33(1), 61-91.
    • This chapter discusses the manner in which tools provided by the sociology of work might be used to study work platforms, such as automated hiring platforms. The authors highlight five core affordances that work platforms offer employers and discuss how they combine to create a managerial frame in which workers are viewed as fungible human capital. Focus is given to the coercive nature of work platforms and the asymmetrical flow of information that favors the interests of employers.
  • Boulding, W., et al. (2005).* A customer relationship management roadmap: What is known, potential pitfalls, and where to go. Journal of Marketing, 69(4), 155-166. https://doi.org/10.1509%2Fjmkg.2005.69.4.155
    • This article asserts that customer relationship management (CRM) is the result of the “continuing evolution and integration of marketing ideas and newly available data, technologies, and organizational forms…” It is predicted that CRM will continue to evolve as new ideas and technologies are incorporated into CRM activities. The article discusses what is known about CRM, the potential pitfalls and unknowns faced by its implementation, and offers recommendations for further research.
  • Brown, E. A. (2016). The Fitbit fault line: Two proposals to protect health and fitness data at work. Yale Journal of Health Policy, Law and Ethics, 16(1), 1-50.
    • This article argues that federal law does not adequately protect employees’ health and fitness data from potential misuse; moreover, employers are incentivized to use such data when making significant decisions, such as hiring and promotions. The article offers two remedies for the improper use of health and fitness data. First, the enactment and enforcement by the Federal Trade Commission of a mandatory privacy labelling law for health-related devices and apps would improve employee control over their health data. Second, the Health Insurance Portability and Accountability Act of 1996 can extend its protections to the health-related data that employers may acquire about their employees. 
  • Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715622512
    • In this research article, the author considers the opacity of machine learning algorithms as a problem for consequential mechanisms of classification and ranking, e.g., spam filters and search engines. The author identifies three types of opacity: opacity resulting from intentional corporate or state secrecy, technical illiteracy, or the characteristics of machine learning algorithms. The paper concludes by arguing that identifying these types of opacity is necessary for effective technical and non-technical solutions to be introduced. 
  • Chen, L., et al. (2018). Investigating the impact of gender on rank in resume search engines. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-14). Association for Computing Machinery. https://doi.org/10.1145/3173574.3174225
    • This work examines gender-based inequalities in the context of resume search engines, understood as “tools that allow recruiters to proactively search for candidates based on keywords and filters.” It focuses on the ranking algorithms used by three major hiring websites, namely, Indeed, Monster, and CareerBuilder. The examination concludes that “the ranking algorithms used by all three hiring sites do not use candidates’ inferred gender as a feature,” but there was “significant and consistent group unfairness against feminine candidates in roughly 1/3 of the job titles” examined.
  • Chung, C. F., et al. (2017). Finding the right fit: Understanding health tracking in workplace wellness programs. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 4875-4886). Association for Computing Machinery. https://doi.org/10.1145/3025453.3025510
    • This paper uses empirical data to gain an understanding of “employee experiences and attitudes towards health tracking in workplace health and wellness programs.” It is found that employees are concerned predominantly with program fit rather than privacy. The paper also highlights a gap between a holistic understanding of health and the easily measurable features with which workplace programs are concerned. 
  • Citron, D., & Pasquale, F. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89(1), 1-34.
    • Predictive algorithms use data to rank and rate individuals. This article argues that overseeing such systems should be a critical aim of the legal system. Certain protections need to be implemented, such as allowing regulators to test scoring systems to ensure fairness and accuracy and providing individuals an opportunity to challenge decisions based on scores that mischaracterize them. It is argued that absent such protections, the adoption of predictive algorithms risks producing stigmatizing scores on the basis of biased data.
  • Danna, A., & Gandy, O. H. (2002). All that glitters is not gold: Digging beneath the surface of data mining. Journal of Business Ethics, 40(4), 373-386. https://doi.org/10.1023/A:1020845814009
    • This article examines the manner in which data mining technologies are applied in the market and the social concerns that arise in response to the application of such technologies in the public and private sectors. It is argued that, “at the very least, consumers should be informed of the ways in which information about them will be used to determine the opportunities, prices, and levels of service they can expect to enjoy in their future relations with a firm.” The Kantian principle of “universal acceptability” and the Rawlsian principles of special regard for those who are least advantaged are offered to guide the development of data mining and consumer profiles.
  • De Stefano, V. (2020). Algorithmic bosses and how to tame them. C4eJournal: Perspectives on Ethics, The Future of Work in the Age of Automation and AI Symposium. [2020 C4eJ 52] [20 eAIj 12]
    • In this article, De Stefano traces the history of workplace management, from Taylorism to the arrival of algorithmic management, and surveys recent developments in its regulation. The author argues that the arrival of algorithmic management reveals that the current development of ethical principles for AI has not been appropriately focused on issues related to work and employment, and suggests turning to existing human rights frameworks, which already address the rights of workers, to inform the development of ethical principles and AI technologies.
  • Fort, T. L., et al. (2016). The angel on your shoulder: Prompting employees to do the right thing through the use of wearables. Northwestern Journal of Technology and Intellectual Property, 14(2), 139-170.
    • This article examines the use of wearables as personal information gathering devices that feed into larger data sets. It is argued that cybersecurity and privacy guidelines, such as those offered by the European Data Protection Supervisor and the 2014 National Institute of Standards and Technology Cybersecurity Framework, should be implemented from the bottom-up in order to regulate the use of personal data.
  • Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society. MIT Press.
    • In this book chapter, the author outlines six dimensions of the public relevance of algorithms. The author describes how algorithms can include and exclude data, predict users’ behaviour, determine the relevance of information, (falsely) assume neutrality, and shape both the behaviour of users and the public’s sense of itself. The author concludes that because algorithms are socially constructed and institutionally managed, they are far from purely mathematical and objective.
  • Greenbaum, J. M. (2004).* Windows on the workplace: Technology, jobs and the organization of office work (2nd ed.). Monthly Review Press.
    • This book discusses the changes that occurred from the 1950s to the present in management policies, work organization, and the design of office information systems. Focusing on the experiences of office workers, the book highlights the manner in which technologies have been used by employers to increase profits and gain control over workers.
  • Greenbaum, D. (2016). Ethical, legal and social concerns relating to exoskeletons. ACM SIGCAS Computers and Society, 45(3), 234-239.
    • This paper provides an overview of the issues surrounding the emergence of exoskeletons. The paper aims to “provide anticipatory expert opinion that can provide regulatory and legal support for this technology, and perhaps even course-correction if necessary, before the technology becomes ingrained in society.” 
  • Hull, G., & Pasquale, F. (2018). Toward a critical theory of corporate wellness. BioSocieties, 13(1), 190-212. https://doi.org/10.1057/s41292-017-0064-1
    • Employee wellness programs aim to incentivize and supervise healthy employee behaviors; however, there is little evidence that such programs increase productivity or profit. This article analyzes employee wellness programs as “providing an opportunity for employers to exercise increasing control over their employees.” The article concludes by arguing that a renewed commitment to public health programs occluded by the private sector’s focus on wellness programs would constitute a better investment of resources.
  • Jahanbakhsh, F., et al. (2020). An experimental study of bias in platform worker ratings: The role of performance quality and gender. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-13). Association for Computing Machinery. https://doi.org/10.1145/3313831.3376860
    • This paper presents the results of a study on the use of performance ratings in online labour platforms. The authors use variables such as gender (for both the worker and the rater) to compare how workers are rated by different users, as well as how workers are rated in comparison to each other. The authors found that low-performing female workers were rated lower than their male counterparts, and that high-performing workers of all genders received significantly higher ratings than low-performing ones.
  • Kellogg, K. C., et al. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366–410. https://doi.org/10.5465/annals.2018.0174
    • The authors of this research article propose a “6 Rs” framework for studying mechanisms of control in the workplace. These Rs are restricting and recommending to direct workers, recording and rating to evaluate them, and replacing and rewarding to discipline them. The paper also provides a literature review of labour process theory, algorithmic capabilities, the impacts of algorithms in the workplace, and examples of worker resistance.
  • Kim, P., & Scott, S. (2019). Discrimination in online employment recruiting. St. Louis University Law Journal, 63(1), 93-118.
    • This article examines the question of when employers should be liable for discrimination based on their online recruiting strategies. It discusses the extent to which existing law can address concerns over discriminatory advertising, and it notes the often-overlooked provisions forbidding discriminatory advertising practices found in Title VII of the Civil Rights Act of 1964 and the Age Discrimination in Employment Act. The article concludes that existing doctrine is suited to address highly problematic advertising practices; however, the extent to which current law can address all practices with discriminatory effects remains uncertain.
  • Nissenbaum, H., & Patterson, H. (2016).* Biosensing in context: Health privacy in a connected world. In D. Nafus (Ed.), Quantified: Biosensing technologies in everyday life (pp. 79-100). MIT Press.
    • The emergence of novel information flows that accompany new health self-tracking practices create vulnerabilities for individual users and society. This chapter argues that such vulnerabilities implicate privacy. Consequently, the authors contend that information flows that accompany new health self-tracking practices “are best evaluated according to the ends, purposes, and values of the contexts in which they are embedded.”
  • Pasquale, F. (2015).* The black box society: The secret algorithms that control money and information. Harvard University Press.
    • This book discusses the manner in which corporations use large swaths of data to pursue profits. The use of such data is surrounded by secrecy, making it difficult to discern whether or not the interests of individuals are being protected. It is argued that the decisions made by firms using data should be fair, non-discriminatory, and open to criticism. This requires eliminating the secrecy surrounding current practices and increasing the accountability of those using such data to make important decisions.  
  • Raghavan, M., et al. (2020). Mitigating bias in algorithmic hiring: Evaluating claims and practices. In M. Hildebrandt & C. Castillo (Eds.), Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 469-481). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372828
    • This work conducts an in-depth analysis of the bias-related practices of vendors of algorithmic pre-employment assessments by examining the vendors’ publicly available statements. The authors find that technical systems must be considered together with the context surrounding their use and deployment. The work concludes by offering several policy recommendations intended to reduce the risk of bias in the systems under consideration.
  • Rosenblat, A., & Stark, L. (2016). Algorithmic labor and information asymmetries: A case study of Uber’s drivers. International Journal of Communication, 10(27), 3758–3784. https://doi.org/10.2139/ssrn.2686227
    • This paper presents findings from an eight-month ethnographic study on Uber drivers. The paper argues that the Uber service configuration places the company in a position of power, and the app and its algorithms are structured to control workers. The authors argue that these power differentials are made greater through the misclassification of workers as independent contractors.
  • Srnicek, N. (2017).* Platform capitalism. John Wiley & Sons.
    • This book critically examines the emergence of platform capitalism, which is understood as the emergence of platform-based businesses. The book offers an analysis of the growth of platform capitalism in the broader history of capitalism’s development. It highlights the manner in which a small number of platform-based businesses are transforming the contemporary economy and how such businesses will need to adapt in the future in order to ensure sustainability.
  • Zuboff, S. (1988).* In the age of the smart machine: The future of work and power. Basic Books.
    • This book discusses the computerization of the workplace and the manner in which it affects the work experience of labor and management. One of the concepts the book introduces is “informating,” understood as a process unique to information technology through which digitalization translates activities, objects, and events into information.
  • Williams, J. D., et al. (2019). Technological workforce and its impact on algorithmic justice in politics. Customer Needs and Solutions, 6(3), 84-91. https://doi.org/10.1007/s40547-019-00103-3
    • This paper argues that diversifying the workforce in the tech industry and incorporating inter-disciplinary education, such as principles of ethical coding, can help remedy the negative consequences of algorithmic bias. Allowing the diverse perspectives of tech employees to influence the development of algorithms will result in systems that incorporate a broad range of world views, and such systems are less likely to overlook the experiences of those belonging to groups that have been historically underrepresented.
  • Wieringa, M. (2020). What to account for when accounting for algorithms? In M. Hildebrandt & C. Castillo (Eds.), Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 1–18). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372833
    • This article presents a quantitative analysis of 242 articles published between 2008 and 2018 on algorithmic accountability. The paper analyzes the literature according to Mark Bovens’s five elements of accountability: (1) the actor, (2) the forum, (3) the relationship between the two, (4) the content and criteria of the account, and (5) the consequences which may result from the account. The author argues, based on the findings of the analysis, that accountability goes beyond the use, design, implementation, and consequences of algorithmic systems, and should be considered across the entire socio-technical process.
  • Wood, A. J., et al. (2018). Good gig, bad gig: Autonomy and algorithmic control in the global gig economy. Work, Employment and Society, 33(1), 56–75. https://doi.org/10.1177/0950017018785616
    • The authors of this paper present the results of a study of online freelancing platforms. They argue that workers’ agency is shaped by the platforms’ use of algorithmic control in remote work. While the use of these algorithms appears to offer workers more autonomy and flexibility, the authors point to other issues created by platform-work, including low pay and long hours of work.

Chapter 44. Smart City Ethics: How “Smart” Challenges Democratic Governance (Ellen P. Goodman)⬆︎

  • Ahvenniemi, H., et al. (2017). What are the differences between sustainable and smart cities? Cities, 60, 234–245. https://doi.org/10.1016/j.cities.2016.09.009
    • In this paper, the authors analyze 16 sets of city assessment frameworks. They find that smart city frameworks lack important environmental indicators and focus mainly on social and economic sustainability. Based on this observation, the authors argue for developing smart city frameworks to include environmental sustainability. The authors suggest replacing the term “smart cities” with “smart sustainable cities” to further highlight the importance of environmental sustainability.
  • Brauneis, R., & Goodman, E. P. (2018).* Algorithmic transparency for the smart city. Yale Journal of Law & Technology, 20, 103-176.
    • This article examines the limits of transparency around governmental deployment of big data analytics. The authors critique the opacity of governmental predictive algorithms, and analyze predictive algorithm programs in local and state governments to test how impenetrable resulting black boxes are and assess whether open records processes would enable citizens to discover the policy judgements embodied by algorithms. The authors propose a framework for sufficient algorithm transparency for governments and public agencies.
  • Brooks, B. A., & Schrubbe, A. (2016). The need for a digitally inclusive smart city governance framework. University of Missouri-Kansas City Law Review, 85(4), 943-952.
    • This article examines how smart cities in urban and rural areas effectively create and deploy open data platforms for citizens, and analyzes the considerations and differing governance mechanisms for rural cities compared to urban cities. The authors examine several cases of municipal smart technology adoption to explore policy options to distribute resources that address citizen needs in those areas. 
  • Cardullo, P., et al. (2018). Living labs and vacancy in the neoliberal city. Cities, 73, 44-50.
    • This paper evaluates the role of living labs (LLs) – technologies that foster local digital innovation to “solve” local issues – in the context of smart cities. The authors outline various approaches to LLs and argue that they are actively used to bolster smart city discourse.
  • Eckhoff, D., & Wagner, I. (2017). Privacy in the smart city—Applications, technologies, challenges, and solutions. IEEE Communications Surveys & Tutorials, 20(1), 489–516. https://doi.org/10.1109/COMST.2017.2748998
    • This paper attempts to systemize application areas, technologies, privacy types, and data sources to bring structure to the fuzzy concept of a “smart city.” The authors also review existing privacy-enhancing technologies and discuss promising directions for future research. The paper is meant to serve as a reference guide for the development of privacy-friendly smart cities.
  • Edwards, L. (2016). Privacy, security and data protection in smart cities: A critical EU law perspective. European Data Protection Law Review, 2(1), 28-58.
    • This paper argues that smart cities combine the three greatest threats to personal privacy: the Internet of Things, Big Data, and the Cloud. Edwards notes that current regulatory frameworks fail to effectively address these threats, and discusses how and if EU data protection laws control these possible threats to personal privacy. 
  • Evans, J., et al. (2019). Smart and sustainable cities? Pipedreams, practicalities and possibilities. Local Environment, 24, 557–564.
    • This paper is concerned with the potential of smart cities to enhance social well-being and reduce environmental impact. The authors argue that social equity and environmental sustainability are neither a priori absent nor de facto present in current smart city initiatives but must be deliberately included and maintained as smart cities materialize.
  • Goodspeed, R. (2015). Smart cities: Moving beyond urban cybernetics to tackle wicked problems. Cambridge Journal of Regions, Economy and Society, 8(1), 79-92.
    • This paper aims to describe institutions for municipal innovation and IT-enabled collaborative planning to address “wicked”, or inherently political, problems. The author proposes that smart cities, which use IT to pursue efficient systems through real-time monitoring and control, are equivalent to the idea of urban cybernetics debated in the 1970s. Drawing on Rio de Janeiro’s Operations Center, the author argues that wicked urban problems require solutions that involve local innovation and stakeholder participation.
  • Halpern, O., et al. (2013). Test-bed urbanism. Public Culture, 25(2), 272-306.
    • This essay by Halpern et al. interrogates how ubiquitous computing infrastructures produce new forms of experimentation with urban territory. These protocols of “test-bed urbanism” are new methods for spatial development that are changing the form, function, economy, and administration of urban life. 
  • Joss, S., et al. (2019). The smart city as global discourse: Storylines and critical junctures across 27 cities. Journal of Urban Technology, 26(1), 3–34.
    • This paper employs a systematic, webometric analysis of key texts associated with 5,553 cities worldwide to clarify and highlight the practical importance of smart cities. The authors find that the discourse about smart cities is centred around 27 predominately capital cities, and they argue that city “smartness” is closely linked to cities’ global presence and positioning. The authors conclude with a discussion of the resulting implications for research, policy, and practice.
  • Karvonen, A., et al. (Eds.). (2018).* Inside smart cities: Place, politics and urban innovation. Routledge.
    • This chapter explores the tensions within second-generation smart city experiments such as Barcelona. It maps the shift from first-generation to second-generation policies developed by Barcelona’s liberal government and explores how concepts of technological sovereignty emerged. The authors reflect on the central tenets, potentialities, and limits of Barcelona’s Digital Plan and examine how the city’s new digital paradigm can address pressing urban challenges.
  • Kitchin, R. (2014). The real-time city? Big data and smart urbanism. GeoJournal, 79(1), 1-14.
    • Kitchin’s article draws on various examples of pervasive and ubiquitous computing in smart cities to detail how urban spaces are being instrumented with Big Data-producing digital devices and infrastructure. While smart city advocates argue that Big Data can provide material for envisioning and enacting more efficient, sustainable, productive, and transparent cities, Kitchin aims to critically reflect on the implications of big data and smart urbanism by analyzing five emerging concerns: the politics of big urban data, technocratic governance and city development, corporatization of city governance, hackable cities, and the panoptic city. 
  • Kitchin, R., et al. (2018).* Citizenship, justice and the right to the smart city. In P. Cardullo, C. Di Feliciantonio and R. Kitchin (Eds.), The right to the smart city (pp. 1-24). Emerald Publishing Limited.
    • This article engages the smart city in various practical, political, and normative questions relating to citizenship, social justice, and the public good. The authors detail some troubling ethical issues associated with smart city technologies and examine how citizens have been conceived and operationalized in the smart city, proposing that the “right to the smart city” should be a fundamental principle of smart city endeavors. 
  • Kitchin, R., & Dodge, M. (2019).* The (in) security of smart cities: Vulnerabilities, risks, mitigation, and prevention. Journal of Urban Technology, 26(2), 47-65.
    • This article examines how smart city technologies that are designed to produce urban resilience and reduce risk paradoxically create new vulnerabilities in city infrastructure and threaten to open up extended forms of criminal activity. By identifying forms of smart city vulnerabilities and detailing several examples of urban cyberattacks, the authors analyze existing smart city risk mitigation strategies and propose a set of systemic interventions that extends beyond technical solutions.
  • Marvin, S., et al. (Eds.). (2015). Smart urbanism: Utopian vision or false dawn? Routledge.
    • This book critically assesses “smart urbanism” – the rebuilding of cities through the integration of digital technologies with neighborhoods, infrastructures, and people – as a purported panacea for contemporary urban challenges. The authors explore what new capabilities are created by smart urbanism, by whom, and with what exclusions, as well as the material and social consequences of technological development and application. The book aims to identify and convene researchers, commentators, software developers, and users within and outside mainstream smart urbanism discourses to assess which urban problems can be addressed by smart technology.
  • McFarlane, C., & Söderström, O. (2017).* On alternative smart cities: From a technology-intensive to a knowledge-intensive smart urbanism. City, 21(3-4), 312-328.
    • This article explores the influence of corporate-led urban development on the smart urbanism agenda. Drawing on critical urban scholarship and initiatives across the Global North and South, the authors examine steps towards an alternative smart urbanism in which urban priorities and justice drive the use, or non-use, of technology.
  • Morozov, E., & Bria, F. (2018). Rethinking the smart city. Rosa Luxemburg Stiftung.
    • This article provides a political-economic analysis of smart city development to critique the promises of cheap and effective smart city solutions to social and political problems. The authors propose that the smart city can only be understood within the context of neoliberalism as public city infrastructure and services are managed by private companies, thereby de-centralizing and de-personalizing the political sphere. In response, the authors offer alternative smart city models that rely on democratic data ownership regimes, grassroots innovation, and cooperative service provision models. 
  • O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
    • This book aims to reveal how mathematical models used today are opaque, unregulated, and uncontestable, and how they reinforce discrimination. The author shows how black box models shape individual and collective futures and undermine democracy by exacerbating existing inequalities, and calls on engineers and policymakers to more responsibly develop and regulate the use of algorithms.
  • Shelton, T., et al. (2015).* The ‘actually existing smart city’. Cambridge Journal of Regions, Economy and Society, 8(1), 13-25.
    • This paper aims to ground critiques of the smart city in a historical and geographic context. The authors focus closely on smart city policies in Louisville and Philadelphia (examples of “actually existing” smart cities rather than exceptional, paradigmatic centers such as Songdo or Masdar) to analyze how these policies arose and their unequal impact on the urban landscape. The authors argue that an uncritical, ahistorical, and aspatial understanding of data presents a problematic approach to data-driven governance and the smart city imaginary.
  • Söderström, O., et al. (2014).* Smart cities as corporate storytelling. City, 18(3), 307-320.
    • This article examines corporate visibility and legitimacy in the smart city market. Drawing on actor-network theory and critical planning theory, this paper analyzes how IBM’s smarter city campaign tells a story aimed at making the company an obligatory passage point in the implementation of urban technologies and calls for the creation of alternative smart city stories.
  • Townsend, A. M. (2013). Smart cities: Big data, civic hackers, and the quest for a new utopia. WW Norton & Company.
    • This book explores the history of urban information technologies to trace how cities have used and continue to use evolving technology to address increasingly complex policy challenges. The author analyzes the mass interconnected networks of contemporary metropolitan centers, drawing on examples of smart technology applications in cities around the world to document and examine emerging techno-urban landscapes. The author illuminates the motivations, aspirations, and shortcomings of various smart city stakeholders, including entrepreneurs, municipal government officials, and software developers, and investigates how these actors shape urban futures.
  • Trencher, G. (2019). Towards the smart city 2.0: Empirical evidence of using smartness as a tool for tackling social challenges. Technological Forecasting and Social Change, 142, 117–128. https://doi.org/10.1016/j.techfore.2018.07.033
    • This paper compares the dominating, techno-economic and centralized approach of the “smart city 1.0” with the emergence of the so-called “smart city 2.0.” The smart city 2.0 is framed as a decentralized and people-centric approach, where smart technologies are employed as tools to tackle social problems. The paper examines Aizuwakamatsu Smart City in Fukushima, Japan, as a case study of the smart city 2.0.
  • Trencher, G., & Karvonen, A. (2019). Stretching “smart”: Advancing health and well-being through the smart city agenda. Local Environment, 24(7), 610–627.
    • This paper argues that contemporary smart cities focus primarily on stimulating economic activity and encouraging environmental protection, with less attention paid to social equity. The authors present a case study of Kashiwanoha Smart City in Japan, which, they argue, has stretched smart city activities beyond technological innovation to include the pursuit of greater health and well-being. Based on this case study, the authors contend that smart cities can tackle social problems, creating more equitable and liveable cities.
  • Stübinger, J., & Schneider, L. (2020). Understanding smart city—A data-driven literature review. Sustainability, 12(20), 8460. https://doi.org/10.3390/su12208460
    • This paper systematically reviews the top 200 publications, according to Google Scholar, in the area of smart cities. Using methods from natural language processing (NLP) and time series forecasting, the authors identify the most relevant streams as smart infrastructure, smart economy & policy, smart technology, smart sustainability, and smart health. The authors provide a review of the literature in each stream, highlighting perceived strengths and weaknesses.
  • Vanolo, A. (2014).* Smartmentality: The smart city as disciplinary strategy. Urban Studies, 51(5), 883-898.
    • This article analyzes the power and knowledge implications of smart city policies that support new ways of imagining, organizing, and managing the city while imposing a new moral order that distinguishes between the “good” and “bad” city. The author uses smart city politics in Italy as a case study to examine how smart city discourse has produced new visions of the “good city” and of the role of private actors and citizens in urban management and development.
  • Wiig, A. (2018).* Secure the city, revitalize the zone: Smart urbanization in Camden, New Jersey. Environment and Planning C: Politics and Space, 36(3), 403-422.
    • This paper analyzes the impacts of smart city agendas aligning with neoliberal urban revitalization efforts by examining redevelopment efforts in Camden, New Jersey. The author analyzes how Camden’s citywide multi-instrument surveillance network contributed to policing strategies that controlled the circulation of residents and prioritized the flow of capital into spatially bounded zones. The author underscores the crucial role of this surveillance-driven policing strategy in shifting the narrative of Camden from disenfranchised to economically and politically viable. 
  • Yigitcanlar, T., & Kamruzzaman, M. (2018). Does smart city policy lead to sustainability of cities? Land Use Policy, 73, 49–58. https://doi.org/10.1016/j.landusepol.2018.01.034
    • This paper explores the connection between smart city policy and sustainability. Using data from 15 UK cities with differing “smartness” levels from 2005-2013, the authors find that the link between city smartness and carbon dioxide emissions is not linear. The authors call for increased scrutiny of existing smart cities and for smart city policy to better align itself with the goal of increased sustainability.

User’s Note⬆︎

  • An asterisk (*) after a reference indicates that it was included among the Further Readings listed at the end of the Handbook chapter by its author.
  • This annotated bibliography is the result of an ongoing collaboration among faculty and students affiliated with the Ethics of AI Lab, Centre for Ethics, University of Toronto. Contributors & editors include:
    • 2020-21: Contributors: George-Alexandru Adam, Jesse Bettencourt, Mary Danesh, Juliette Ferry-Danini, Laura Gallo, Kabba Gizaw, James Kenet Tjosvold Duncan, Anne-Marie Fowler, John Giorgi, Noam Kolt, Brenna Li, Lillio Mok, Julian Posada, Vinith Suriyakumar; Editors: Shannon Bardin, Amelia Eaton (lead editor), Holly Johnstone, David Kim, Sara Rasikh (special thanks to Jinx & Julius Dubber)
    • 2019-20: Tyler Biswurm, Stacy Chen, Amelia Eaton (lead editor), Stephanie Fielding, Suzanne van Geuns, Vinyas Harish, Chris Hill, Tobias Hobbins, Chris Longley, Liam McCoy, Nishila Mehta, Unnati Patel, Faye Shamshuddin, and Chelsea Tao (special thanks to Julius Dubber)
