The Oxford Handbook of Ethics of AI: An Annotated Bibliography

I. Introduction & Overview

Chapter 1. The Artificial Intelligence of the Ethics of Artificial Intelligence: An Introductory Overview for Law and Regulation (Joanna J. Bryson)

  • Barocas, S., & Selbst, A. D. (2016). Big Data’s disparate impact. California Law Review, 104(3), 671–732. https://heinonline.org/HOL/P?h=hein.journals/calr104&i=695
    • This article argues that algorithms can discriminate on the basis of inherited human biases, and that American antidiscrimination law fails to recognize and protect against this kind of discrimination. The article discusses how technically, legally, and politically difficult this gap is to close and proposes that closing it will require reconsidering the fundamental legal definitions of discrimination and fairness.
  • Boden, M., Bryson, J., Caldwell, D., Dautenhahn, K., Edwards, L., Kember, S., Newman, P., Pegman, G., Rodden, T., Sorell, T., Wallis, M., Whitby, B., Winfield, A., & Parry, V. (2010). Principles of robotics. Engineering and Physical Sciences Research Council (EPSRC). https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/*
    • This document proposes a set of five ethical rules to guide the designers, builders, and users of robots. The rules were formulated with the purpose of introducing robots in a manner that inspires public trust and confidence, maximizes potential benefits, and avoids unintended consequences. The document asserts that human designers and users—and not robots themselves—are the appropriate subjects of robotics regulation because robots are tools which are not ultimately responsible for their actions.
  • Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B., Anderson, H., Roff, H., Allen, G. C., Steinhardt, J., Flynn, C., Ó hÉigeartaigh, S., Beard, S., Belfield, H., Farquhar, S., & Amodei, D. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation (p. 99). University of Oxford, Future of Humanity Institute, University of Cambridge, Centre for the Study of Existential Risk, Center for a New American Security, Electronic Frontier Foundation, OpenAI. https://arxiv.org/abs/1802.07228*
    • This report surveys the landscape of potential digital, physical, and political security threats from malicious uses of AI and proposes ways to better forecast, prevent, and mitigate these threats. The report focuses on identifying the sorts of attacks that are likely to emerge if adequate defenses are not developed and recommends a broad spectrum of effective approaches to face them.
  • Bryson, J. J. (2019). The past decade and future of AI’s impact on society. In Towards a new enlightenment?: A transcendent decade (pp. 127–159). Openmind BBVA/Turner. https://www.bbvaopenmind.com/en/articles/the-past-decade-and-future-of-ais-impact-on-society/*
    • This article reflects on the AI-induced social and economic changes that happened in the decade after smartphones were introduced in 2007, projects from this analysis to predict imminent political, economic, and personal challenges, and submits corresponding policy recommendations. The article argues that AI is less novel than is often assumed and that its familiar challenges can be managed with appropriate regulation.
  • Bryson, J. J., Diamantis, M. E., & Grant, T. D. (2017). Of, for, and by the people: The legal lacuna of synthetic persons. Artificial Intelligence and Law, 25(3), 273–291. https://doi.org/10.1007/s10506-017-9214-9*
    • This article argues that conferring legal personhood on synthetic entities is morally unnecessary and legally problematic. It highlights the adverse consequences of certain noteworthy precedents and concludes that while giving AI legal personhood may have emotional or economic appeal, the difficulties of holding rights-violating synthetic entities to account outweigh these dubious considerations.
  • Cadwalladr, C. (2018, March 18). ‘I made Steve Bannon’s psychological warfare tool’: Meet the data war whistleblower. The Guardian. https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump*
    • This news article presents a profile of Christopher Wylie, the former research director of Cambridge Analytica who blew the whistle on the company’s illicit data-collection practices and influence campaign in the 2016 US presidential election.
  • Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2017). Artificial intelligence and the ‘good society’: The US, EU, and UK approach. Science and Engineering Ethics, 24(2), 505–528. https://doi.org/10.1007/s11948-017-9901-7
    • This article provides a comparative assessment of reports issued by the White House, the European Parliament, and the UK House of Commons to outline their respective visions on how to prepare society for the widespread use of artificial intelligence. To follow up its critiques, the article proposes two supplementary measures: first, the creation of an international advisory council, and second, a commitment to ground visions of a “good AI society” on human dignity.
  • Claxton, G. (2015). Intelligence in the flesh: Why your mind needs your body much more than it thinks. Yale University Press.*
    • This book argues—based on work in neuroscience, psychology, and philosophy—that human intelligence emanates from the body instead of the mind. With reference to examples like the endocrine system, the book asserts that the body performs intelligent computations that people either overlook or falsely attribute to the brain. The book contends that the mind’s undeserved esteem has led to perverse social outcomes like the preference for white-collar over blue-collar labor.
  • Cohen, J. E. (2013). What privacy is for. Harvard Law Review, 126(7), 1904–1933. https://www.jstor.org/stable/23415061*
    • This article argues that privacy—contrary to its image as an outdated, anti-progressive, and hence inessential ideal—is an essential precondition for people to be self-determining. The article asserts that competing imperatives like national security, efficiency, and entrepreneurship have been permitted to override privacy because it is perceived as an optional benefit to the inherent self-determining capacity of liberal agents. By contrast, the article asserts that the self is socially constructed, and that privacy is therefore an essential personal shield against the perverse influence tactics of commercial and government actors.
  • Dennett, D. C. (1978). Why you can’t make a computer that feels pain. In Brainstorms: Philosophical essays on mind and psychology (1st ed., pp. 190–229). Bradford Books.*
    • This essay argues that the ordinary concept of pain is incoherent and thus that any candidate for a pain-capable robot would be rejected by human judges because its experience would contradict at least one of our various intuitions about pain. The essay accepts that it is possible in principle for a robot to experience pain, and for humans to accept that it does, if a better physiological theory of pain is developed.
  • Guihot, M., Matthew, A. F., & Suzor, N. P. (2017). Nudging robots: Innovative solutions to regulate artificial intelligence. Vanderbilt Journal of Entertainment & Technology Law, 20(2), 385–456. https://heinonline.org/HOL/P?h=hein.journals/vanep20&i=409
    • This article argues that public regulators can overcome the obstacles to their control of artificial intelligence (e.g. scarce public resources and the power of technology companies) and remedy the technology’s dangerous under-regulation by adopting a predictive two-step process: first, they would signal expectations to influence or “nudge” AI designers; and second, they would participate in and interact with relevant industries. These steps would permit regulators to gain expertise, competently assess risks, and develop appropriate regulatory priorities.
  • Gunkel, D. J. (2018). Robot rights. MIT Press.*
    • This book explores if, and to what extent, robots can and should have rights. The book evaluates, analyzes, and ultimately rejects four key positions on this question before offering an alternative way of conceptualizing the social situation of robots and the implications they have for existing moral and legal systems.
  • Hüttermann, M. (2012). DevOps for developers. Apress.*
    • This book presents a practical introduction to “DevOps”, which is a set of practices that aim to streamline the software delivery process by fostering collaboration between software development and IT operations.
  • Katyal, S. K. (2019). Private accountability in the age of artificial intelligence. UCLA Law Review, 66, 54–141. https://heinonline.org/HOL/P?h=hein.journals/uclalr66&i=64
    • This article argues that artificial intelligence raises novel civil rights concerns whose resolution requires augmenting public regulation with private industry standards. The article contends that private industry standards, including codes of conduct, impact statements, and whistleblower protection, represent a new generation of accountability measures which have the potential to outperform ordinary regulation in civil rights enforcement.
  • Kroll, J. A., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2017). Accountable algorithms. University of Pennsylvania Law Review, 165(3), 633–705. https://scholarship.law.upenn.edu/penn_law_review/vol165/iss3/3*
    • This article argues that foundational computer science techniques provide an optimal way of holding automated decision systems accountable, including in scenarios where outdated accountability mechanisms and legal standards fail to do so. According to the article, these techniques avoid the limitations of more popular proposals like source code and input transparency. The article suggests that using them may even improve the governance of decision-making in general.
  • List, C., & Pettit, P. (2011). Group agency: The possibility, design, and status of corporate agents. Oxford University Press.*
    • This book argues that group agents like companies, churches, and states are irreducible to the individual agents that constitute them, and that any legitimate approach to the social sciences, law, morality, and politics must take account of this fact. The book is grounded in ideas from social choice theory, economics, and philosophy.
  • Nemitz, P. (2018). Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180089. https://doi.org/10.1098/rsta.2018.0089*
    • This article argues that the principles of democracy, rule of law, and human rights must be incorporated into AI by design and proposes a practical framework to guide this practice. According to the article, this practice is necessary to maintain the strength of constitutional democracy because (a) AI will eventually govern core functions of society and (b) the decoupling of technology from constitutional principles has already precipitated illegal and undemocratic behavior. The article considers which of AI’s challenges can be regulated by ethical norms and which demand the force of law.
  • OECD. (2019). Recommendation of the council on artificial intelligence, OECD/LEGAL/0449 (No. 0449; OECD Legal Instruments). Organisation for Economic Co-operation and Development. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449*
    • This document presents the first intergovernmental standard on artificial intelligence. It aims to foster innovation and trust in AI by promoting responsible stewardship as well as human rights and democratic values. The document presents policy recommendations which include the “OECD Principles on AI”: (1) inclusive growth, sustainable development, and well-being; (2) human-centered values and fairness; (3) transparency and explainability; (4) robustness, security, and safety; and (5) accountability.
  • O’Neill, O. (2002). A question of trust: The BBC Reith lectures 2002. Cambridge University Press.*
    • This series of lectures explores whether modern democratic society’s debilitating “crisis of trust” can be solved by making people and institutions more accountable. Among other subjects, the lectures investigate whether the complex systems behind customary approaches to accountability improve or actually damage trust.
  • O’Reilly, T. (2017). WTF?: What’s the future and why it’s up to us (1st ed.). Harper Business.*
    • This book argues that humans, not machines, control the ultimate outcomes of technological progress. According to the book, current concerns about AI are misplaced because they focus on futuristic hypotheticals instead of the currently pressing—and crucially, familiar—problems that perverse market incentives drive tech companies to instigate. For instance, the book contemplates how markets incentivize corporations to use technology for cost-cutting efficiency instead of meaningful innovation.
  • Santoni de Sio, F., & van den Hoven, J. (2018). Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI, 5, 15. https://doi.org/10.3389/frobt.2018.00015*
    • This article argues that there are two necessary conditions to implement “meaningful human control” over an autonomous system: (1) a “tracking” condition, under which the system must be responsive to the moral reasoning of its human designers and deployers and to morally relevant facts in its environment; and (2) a “tracing” condition, under which the system’s actions must always be attributable to at least one of its human designers or operators. The article notes that the principle of meaningful human control (and human moral responsibility) has gained traction as a solution to the “responsibility gap” created by autonomous systems.
  • Scherer, M. U. (2015). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology, 29(2), 353–400. https://heinonline.org/HOL/P?h=hein.journals/hjlt29&i=365
    • This article argues that although artificial intelligence presents both conceptual and practical challenges to the legal system, it can be effectively regulated using a legal framework which imposes limited or strict tort liability on manufacturers and operators based on their passage (or not) of an AI certification process. The article explores the public risks that AI poses, the regulatory challenges it raises, the competencies of government institutions in managing those risks, and the possibility of regulating AI using differential tort liability.
  • Shanahan, M. (2015). The technological singularity. MIT Press.*
    • This book explores the idea and implications of the “singularity”: the hypothetical event in which humans will be overtaken by artificial intelligence or enhanced biological intelligence. The book imagines and interrogates a range of possible scenarios for the event, including the possibility of superintelligent machines which challenge the ordinary concepts of personhood, responsibility, rights, and identity.
  • Sipser, M. (2006). Introduction to the theory of computation (2nd ed.). Thomson Course Technology.*
    • This textbook provides a comprehensive and approachable introduction to the theory of computation. It conveys the fundamental mathematical properties of computer hardware, software, and applications with a blend of practical, philosophical, and mathematical discussion.
  • Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005
    • This article argues that the European General Data Protection Regulation (GDPR) does not—contrary to popular interpretation—afford a “right to explanation” of automated decision-making, and that its regulatory force is therefore diminished. According to the article, the defect is attributable to the legislation’s imprecise language and lack of well-defined rights and safeguards. The article recommends a series of specific legislative steps to improve the GDPR’s adequacy in this area.
  • Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31(2), 841–888. https://heinonline.org/HOL/P?h=hein.journals/hjlt31&i=860
    • This article argues that many of the significant limitations of algorithmic interpretability and accountability can be overcome by pursuing explanations which help data subjects act on, instead of understand, automated decisions. The article proposes three aims for explanations which serve this purpose: (1) to convey the rationale of the decision, (2) to provide grounds to contest the decision, and (3) to suggest viable steps to achieving a more favorable future decision. The article asserts that counterfactuals are an ideal means of explaining automated decisions because they satisfy these aims.
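
The counterfactual approach that Wachter, Mittelstadt, and Russell propose lends itself to a brief illustration. The Python sketch below is a minimal, hypothetical rendering of the core idea only: search for the smallest change to an input that would have produced a more favorable automated decision. The loan model, feature names, and thresholds are invented for illustration and are not drawn from the article.

```python
# Illustrative sketch only: a brute-force search for a counterfactual
# explanation ("had your income been X, the loan would have been approved").
# The decision rule below is an invented stand-in for an opaque model.

def loan_model(income: float, debt: float) -> bool:
    """Toy decision rule: approve when income minus half the debt exceeds 40,000."""
    return income - 0.5 * debt > 40_000

def counterfactual_income(income: float, debt: float, step: float = 500.0):
    """Find the smallest income increase (in `step` increments) that flips a denial."""
    if loan_model(income, debt):
        return None  # already approved; no counterfactual needed
    delta = step
    while not loan_model(income + delta, debt):
        delta += step
    return income + delta

# A counterfactual explanation for a denied applicant:
print(counterfactual_income(income=30_000, debt=10_000))  # -> 45500.0
```

Real implementations optimize a distance-penalized objective over many features at once; the one-feature search above merely conveys the three aims the article identifies: rationale, contestability, and actionable recourse.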

Chapter 2. The Ethics of the Ethics of AI (Thomas M. Powers and Jean-Gabriel Ganascia)

  • Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine ethics. Cambridge University Press. https://doi.org/10.1017/CBO9780511978036
    • This edited volume presents essays which consider, among other subjects: why it is necessary to implement ethical capacities in autonomous machines, what is required to implement them, potential approaches to implementing them, as well as philosophical and practical challenges to the study of machine ethics.
  • Arkin, R. C. (2009). Governing lethal behavior in autonomous robots. Chapman & Hall/CRC Press.*
    • This book considers how to develop autonomous robots which use lethal force ethically. It examines the philosophical basis, motivation, and theory of ethical control systems in robots and presents design recommendations to implement them. The author provides examples of autonomous systems using lethal force ethically and contemplates the possibility of robots being more humane than humans on the battlefield.
  • Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2018). The Moral Machine experiment. Nature, 563(7729), 59–64. https://doi.org/10.1038/s41586-018-0637-6*
    • This article describes the results of deploying an online experimental platform, the Moral Machine, to generate a large global dataset aggregating real human responses to the moral dilemmas faced by autonomous vehicles. It presents findings on global and regional moral preferences, as well as findings on demographic and culture-dependent variations in moral preferences. The authors discuss how these findings can contribute to developing global, socially acceptable principles for machine ethics.
  • Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
    • This book argues that if machines surpass humans in general intelligence, then superintelligent machines could replace humans as the dominant lifeform on Earth. The book imagines various paths along which this event might transpire and considers how humans could anticipate and manage the existential threat it poses.
  • Brey, P. A. E. (2012). Anticipatory ethics for emerging technologies. NanoEthics, 6(1), 1–13. https://doi.org/10.1007/s11569-012-0141-7*
    • This article presents an original approach, “anticipatory technology ethics” (ATE), to the ethics of emerging technology. The article evaluates alternative approaches and formulates ATE in their context. The article argues that uncertainty is a central obstacle to the ethical analysis of emerging technology, and therefore that forecasting- and prediction-oriented approaches are necessary to reach useful ethical conclusions about emerging technology.
  • Brundage, M. (2014). Limitations and risks of machine ethics. Journal of Experimental & Theoretical Artificial Intelligence, 26(3), 355–372. https://doi.org/10.1080/0952813X.2014.895108
    • This article argues that “machine ethics” has several inherent limitations in its capacity to guarantee ethical behavior from AI machines and therefore to promote positive social outcomes from their development and use. According to the article, machine ethics fails to guarantee ethical behavior because of the inherent nature of ethics, the computational limits of AI machines, and the complexity of the world. Moreover, the article contends that even if the technical challenges of machine ethics were solved, the concept would remain inadequate to obtain its intended social outcomes.
  • Cave, S., Nyrup, R., Vold, K., & Weller, A. (2019). Motivations and risks of machine ethics. Proceedings of the IEEE, 107(3), 562–574. https://doi.org/10.1109/JPROC.2018.2865996
    • This article surveys reasons for and against pursuing machine ethics. It clarifies some of the philosophical issues surrounding the field and its goals, explores why the field is worth pursuing, and examines the risks involved in doing so.
  • Dehaene, S., Lau, H., & Kouider, S. (2017). What is consciousness, and could machines have it? Science, 358(6362), 486–492. https://doi.org/10.1126/science.aan8871*
    • This article argues that despite recent advances in artificial intelligence, current machines predominantly perform computations that reflect basic unconscious processing (“C0”) in the human brain. The article contends that the standard for synthetic consciousness must be the human brain, and that since machines do not perform computations comparable to conscious human processing (“C1” and “C2”), they cannot be called conscious.
  • Dennett, D. C. (1987). The intentional stance. MIT Press.*
    • This book argues that entities understand and anticipate one another’s behavior by adopting a predictive strategy of interpretation—the “intentional stance”—that treats the entity under examination as if it were a rational agent which makes choices based on its beliefs and desires. According to this argument, entities which adopt the intentional stance reason deductively from hypotheses about their subject’s beliefs and desires to conclude what they ought to decide in a given situation and therefrom predict what they will actually do in that situation.
  • Etzioni, A., & Etzioni, O. (2017). Incorporating ethics into artificial intelligence. The Journal of Ethics, 21(4), 403–418. https://doi.org/10.1007/s10892-017-9252-2*
    • This article argues that it is unnecessary to confer moral autonomy on artificially intelligent machines because we can readily guarantee ethical behavior from them by programming them with the existing instructions of law and their owners’ individual moral preferences. According to the article, many of the moral decisions facing AI machines are not discretionary and therefore easily automated because they are dictated by law. In cases where the decisions are discretionary, the article proposes that AI machines “read” and adhere to their owner’s moral preferences.
  • Greene, D., Hoffmann, A. L., & Stark, L. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. Hawaii International Conference on System Sciences. https://doi.org/10.24251/HICSS.2019.258
    • This conference paper argues that vision statements for ethical AI co-opt the language of critiques to defuse them and promote a limited, technologically deterministic, and expert-driven view of what ethical AI means and how it should work. The paper identifies the grounding assumptions and terms of debate that open the door for some approaches to ethical design while suppressing others.
  • Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics. MIT Press.
    • This book investigates whether and to what extent autonomous and artificially intelligent machines could be recognized as moral agents or moral patients, and so have moral responsibility or deserve moral consideration. The book also discusses new ideas in moral philosophy and critical theory that challenge the agent-patient framework.
  • Horty, J. F. (2001). Agency and deontic logic. Oxford University Press.*
    • This book develops deontic logic—that is, the logic of ethical concepts like obligation and permission—against the background of a formal theory of agency. It rejects the common assumption that what an agent ought to do is the same as what it ought to be that the agent does. By drawing on elements of decision theory, the book presents an alternative and novel account of what agents and groups of agents ought to do under various conditions and over extended periods of time.
  • Kurzweil, R. (2006). The singularity is near: When humans transcend biology. Penguin Books.*
    • This book envisions an event, the “singularity”, in which humans merge with machines and portrays what life might be like afterwards. It speculates that, by overcoming biological limitations, the combination of human and machine abilities will solve exigent problems like the inevitability of death, environmental degradation, and world hunger. The book goes further to consider the broader social and philosophical consequences of this paradigm shift.
  • Lin, P., Abney, K., & Jenkins, R. (Eds.). (2017). Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford University Press.*
    • This edited volume presents a global and interdisciplinary collection of essays that focuses on emerging issues in the field of “robot ethics”, which is an interdisciplinary research effort that studies the effects of robotics on ethics, law, and policy. Besides academic audiences, the work is also aimed at policymakers and the broader public.
  • Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507. https://doi.org/10.1038/s42256-019-0114-4
    • This article argues that the principled approach upon which AI ethics has converged is unlikely to succeed like its close analogue in medical ethics because, compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. The article cautions against validating any newly emerging consensus around principles of AI ethics.
  • Powers, T. M. (2006). Prospects for a Kantian machine. IEEE Intelligent Systems, 21(4), 46–51. https://doi.org/10.1109/MIS.2006.77*
    • This article discusses the potential of creating ethical machines based on rule-based ethical theories like Kantian ethics with a focus on the challenges that this approach poses. According to the article, many view rule-based ethical theories as promising for machine ethics because their judgements exhibit a computational structure that might permit their computerization. The article explores and evaluates different approaches via which a rule-based ethical theory could be used as the basis for an ethical machine.
  • Tonkens, R. (2009). A challenge for machine ethics. Minds and Machines, 19(3), 421–438. https://doi.org/10.1007/s11023-009-9159-1
    • This article argues that in order for machine ethics to succeed in its mission of creating ethical robots, it must identify and adopt an ethical framework that is both implementable into machines and that permits the creation of ethical robots in the first place. According to the article, robots equipped with a framework that was merely implementable would not be genuine ethical robots. The article asserts that Kantian ethics, despite its growing currency as a machine-implementable framework, does not permit the development of ethical robots and so cannot serve as the principal framework of machine ethics.
  • Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford University Press.*
    • This book argues that as robots are given more and more responsibility, there is a corresponding imperative to make them capable of morally aware decision-making. It goes further to assert that while achieving full moral agency for machines is a distant goal, the imperative is already urgent enough to require measures which introduce basic moral considerations into robotic decision-making.
  • Wallach, W., Allen, C., & Smit, I. (2008). Machine morality: Bottom-up and top-down approaches for modelling human moral faculties. AI & Society, 22(4), 565–582. https://doi.org/10.1007/s00146-007-0099-0*
    • This article outlines the values and limitations of bottom-up and top-down approaches to constructing morally intelligent artificial agents. According to the article, bottom-up approaches are characterized by the combination of subsystems into a complex assemblage which models behavior that is consistent with ethical principles. By contrast, the article explains that top-down approaches involve the direct computerization of ethical principles as prescriptive rules (a toy sketch of this approach follows this chapter’s list).
  • Weinberger, D. (2011). Too big to know: Rethinking knowledge now that the facts aren’t the facts, experts are everywhere, and the smartest person in the room is the room. Basic Books.*
    • This book argues that Internet-era shifts in the production, exchange, and storage of knowledge—far from signaling a systemic collapse—present a fundamental epistemic breakthrough. The book contends that although the authority of ordinary facts, books, and experts has depreciated in the transition, “networked knowledge” permits knowledge-seekers to attain better understanding and make more informed decisions.
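
Because Wallach, Allen, and Smit’s distinction turns on how ethical principles enter an agent’s decision procedure, a toy sketch of the top-down case (flagged in the annotation above) may help. The rule set and candidate actions below are invented for illustration; the article itself prescribes no particular encoding.

```python
# Illustrative sketch of a "top-down" approach to machine morality:
# ethical principles are encoded directly as prescriptive rules that
# filter an agent's candidate plans. All names here are hypothetical.

PROHIBITIONS = {"deceive_user", "harm_human"}  # actions no plan may contain
OBLIGATIONS = {"log_decision"}                 # actions every plan must include

def permissible(plan: set[str]) -> bool:
    """A plan passes if it violates no prohibition and satisfies every obligation."""
    return PROHIBITIONS.isdisjoint(plan) and OBLIGATIONS.issubset(plan)

print(permissible({"fetch_medicine", "log_decision"}))  # True
print(permissible({"deceive_user", "log_decision"}))    # False
```

A bottom-up system, by contrast, would acquire such constraints from experience or feedback rather than having them written in advance.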

Chapter 3. Ethical Issues in Our Relationship with Artificial Entities (Judith Donath)

  • Bisconti Lucidi, P., & Nardi, D. (2018). Companion robots: The hallucinatory danger of human-robot interactions. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 17–22).
    • Focusing mainly on robots caring for the elderly, this paper analyzes ethical concerns raised by the rise of companion robots in order to distinguish which concerns are directly ascribable to robotics and which are pre-existing. The paper argues that one concern, the “deception objection,” namely the ethical unacceptability of deceiving the user about the simulated nature of the robot’s behavior, is inconsistently formulated. The paper’s central argument is that the main concern about companion robots is the simulation of a human-like interaction in the absence of an autonomous robotic horizon of meaning.
  • Bourne, C. (2019). AI cheerleaders: Public relations, neoliberalism and artificial intelligence. Public Relations Inquiry, 8(2), 109–125.
    • This article combines public relations (PR) theory, communications theory, and political economy to consider the changing shape of neoliberal capitalism as AI becomes naturalized as “common sense” and as a “public good.” The article explores how PR supports AI discourses, including promoting AI in national competitiveness and promoting “friendly” AI to consumers, while also promoting Internet inequalities.
  • Broom, D. M. (2014). Sentience and animal welfare. Centre for Agriculture and Biosciences International.*
    • This book focuses on sentience—the ability to feel, perceive, and experience—in order to answer questions raised by the animal welfare debate, such as whether animals experience suffering in life and death. The book defines aspects of sentience such as consciousness, memory, and emotions, and discusses brain complexity in detail. Looking at sentience from a developmental perspective, it analyses when during an individual’s growth sentience can be said to appear and uses evidence from a range of studies of embryos, fetuses, and young animals to form an overview of the subject.
  • Calo, R. (2015). Robotics and the lessons of cyberlaw. California Law Review, 103(3), 513–563.*
    • This article examines implications of the introduction of robotics for cyberlaw and policy. The article argues that robotics will prove exceptional in the sense of occasioning systemic changes to law, institutions and the legal academy. However, the article also argues that many core insights and methods of cyberlaw will prove crucial in integrating robotics.
  • Coeckelbergh, M. (2010). Artificial companions: Empathy and vulnerability mirroring in human-robot relations. Studies in Ethics, Law, and Technology, 4(3), 1–17.
    • This article argues that the possibility and future of robots as companions depends, among other things, on the robots’ capacity to be recipients of human empathy, and that one necessary condition for this is that the robots mirror human vulnerabilities. The article considers the objection that vulnerability mirroring raises the ethical issue of deception. It rebuts this objection by demonstrating that the objection’s underlying assumptions cannot be easily justified, given the importance of appearance in social relations, problems with the concept of deception, and contemporary technologies that question the artificial-natural distinction.
  • Coghlan, S., Vetere, F., Waycott, J., & Neves, B. B. (2019). Could social robots make us kinder or crueller to humans and animals? International Journal of Social Robotics, 11(5), 741–751.
    • Concentrating on robot animals, this paper examines strengths and weaknesses of the idea of a causal link between cruelty and kindness to artificial and living beings, human or animal. The article finds that there is some basis for thinking that social robots may causally affect virtue, especially in terms of the moral development of children and responses to nonhuman animals.
  • Damiano, L., & Dumouchel, P. (2018). Anthropomorphism in human–robot co-evolution. Frontiers in Psychology, 9, 468. https://doi.org/10.3389/fpsyg.2018.00468
    • This article proposes a theoretical perspective that characterizes anthropomorphism as a basic mechanism of interaction and rebuts arguments that condemn “anthropomorphism-based” social robots a priori. To address the relevant ethical issues, this article promotes an experimentally based ethical approach to social robotics, titled “synthetic ethics,” which aims at allowing humans to use social robots for two main goals: self-knowledge and moral growth.
  • DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., & Epstein, J. A. (1996). Lying in everyday life. Journal of Personality and Social Psychology, 70(5), 979–995.*
    • This article compares two diary studies of lying, where 77 college students reported telling two lies a day, and 70 community members told one. Consistent with the view of lying as an everyday social interaction process, participants said that they did not regard their lies as serious and did not plan them much or worry about being caught. Still, social interactions in which lies were told were less pleasant and less intimate than those in which no lies were told.
  • Donath, J. (2019). The robot dog fetches for whom? In Z. Papacharissi (Ed.), A networked self and human augmentics, artificial intelligence, sentience (pp. 10–24). Routledge.*
    • This article examines the landscape of social robots, including robot dogs, and their effect on human empathy and relationships. Particularly, this article questions whom robot companions will truly serve in a future where they are ubiquitous.
  • Godfrey-Smith, P. (2016). Other minds: The octopus and the evolution of intelligent life. William Collins.*
    • This book explores the evolution and nature of consciousness, explaining that complex active bodies that enable and require a measure of intelligence have evolved three times, in arthropods, cephalopods, and vertebrates. The book reflects on the nature of cephalopod intelligence in particular, constrained by their short lifespan, and embodied in large part in their partly autonomous arms which contain more nerve cells than their brains.
  • Johnson, D. G., & Verdicchio, M. (2018). Why robots should not be treated like animals. Ethics and Information Technology, 20(4), 291–301.
    • The animal-robot analogy is one of the most commonly used in attempting to frame interactions between humans and robots, and it tends to push in the direction of blurring the distinction between humans and machines. This article argues that, despite some shared characteristics, analogies with animals are misleading when it comes to thinking about the moral status of humanoid robots, legal liability, and the impact of the treatment of humanoid robots on how humans treat one another.
  • Kaplan, F. (2004). Who is afraid of the humanoid? Investigating cultural differences in the acceptance of robots. International Journal of Humanoid Robotics, 1(3), 465–480.*
    • This article presents a preliminary exploration of several aspects of the Japanese culture and a survey of the most important myths and novels involving artificial beings in Western literature. The article examines particular cultural features that may account for contemporary differences in our behavior towards humanoids.
  • Kappas, A., Stower, R., & Vanman, E. J. (2020). Communicating with robots: What we do wrong and what we do right in artificial social intelligence, and what we need to do better. In R. J. Sternberg & A. Kostić (Eds.), Social intelligence and nonverbal communication (pp. 233–254). Palgrave Macmillan.
    • This chapter discusses the challenges and pitfalls regarding the interaction of humans and machines with a view to (artificial) social intelligence at a time of challenging interdisciplinary research. The chapter presents concrete examples of such research and points out lacunae in empirical data.
  • Nyholm, S., & Smids, J. (2019). Can a robot be a good colleague? Science and Engineering Ethics. https://doi.org/10.1007/s11948-019-00172-6
    • This article compares the question of whether robots can be good colleagues to the more widely discussed questions of whether robots can be our friends or romantic partners. The paper argues that on a behavioral level, robots can fulfil many of the criteria typically associated with being a good colleague. This paper further asserts that in comparison with the more demanding ideals of being a good friend or a good romantic partner, it is comparatively easier for a robot to live up to the ideal of being a good colleague.
  • Remmers, P. (2019). The ethical significance of human likeness in robotics and AI. Ethics in Progress, 10(2), 52–67.
    • This article argues that there are no serious ethical issues involved in the theoretical aspects of technological human likeness. Although human likeness may not be ethically significant on the philosophical and conceptual levels, the article suggests, strategies that use anthropomorphism in the technological design of human-machine collaborations are ethically significant, because artificial agents are specifically designed to be treated in ways we usually treat humans.
  • Singer, P. (2011). Practical ethics. Cambridge University Press.*
    • This book is a classic introduction to the study of practical ethics. The focus of the book is the application of ethics to difficult and controversial social questions: equality and discrimination by race, sex, ability, or species; abortion, euthanasia, and embryo experimentation; the moral status of animals; political violence and civil disobedience; overseas aid and the obligation to assist others; responsibility for the environment; and the treatment of refugees. The book is structured to show how contemporary controversies often have deep philosophical roots, and it presents a unique ethical theory that can be applied consistently to all the practical cases.
  • Sheridan, T. B. (2020). A review of recent research in social robotics. Current Opinion in Psychology. https://doi.org/10.1016/j.copsyc.2020.01.003
    • This review finds that both because of its newness and because of its narrower psychological rather than technological emphasis, research in social robotics tends currently to be concentrated in a single journal and a single annual conference. This review categorizes such research into three areas: (1) Affect, Personality and Adaptation; (2) Sensing and Control for Action; and (3) Assistance to the Elderly and Handicapped.
  • Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.*
    • This is a seminal paper on the topic of artificial intelligence, the first to introduce Alan Turing’s concept of what is now known as the Turing Test to the general public. The paper considers the question “Can machines think?” and proposes replacing it with a practical test, the “imitation game,” in which a machine attempts to pass as human in written conversation.
  • Turkle, S. (2007). Authenticity in the age of digital companions. Interaction Studies, 8(3), 501–517.*
    • This paper examines watershed moments in the history of human–machine interaction, focusing on the pertinence of relational artifacts to our collective perception of aliveness, life’s purposes, and the implications of relational artifacts for relationships. The paper argues that the exploration of human–robot encounters leads to questions about the morality of creating believable digital companions that are evocative but not authentic.
  • Vanman, E. J., & Kappas, A. (2019). “Danger, Will Robinson!” The challenges of social robots for intergroup relations. Social and Personality Psychology Compass, 13(8). https://doi.org/10.1111/spc3.12489
    • This article explores the paradox created by human-like robots, as they simultaneously generate greater empathy than traditional robots while also eliciting greater suspicion, particularly about their ability to deceive. Discussing these findings from an intergroup relations perspective, this article proposes three research questions that the authors believe social psychologists are ideally suited to address.
  • Weizenbaum, J. (1976). Computer power and human reason. W. H. Freeman and Company.*
    • This book examines the sources of the computer’s powers and offers evaluative explorations of what computers can do, cannot do, and should not be employed to do. The author argues that while artificial intelligence may be possible, we should never allow computers to make important decisions because computers will always lack human qualities such as compassion and wisdom that are necessary for genuine choice to take place.
  • Weizenbaum, J. (1967). Contextual understanding by computers. Communications of the ACM, 10(8), 474–480.*
    • This paper discusses a further development of a computer program, ELIZA, capable of conversing in natural language, stressing the importance of context to both human and machine understanding. The paper argues that the adequacy of the level of understanding achieved in a particular conversation depends on the purpose of that conversation, and that absolute understanding on the part of either humans or machines is impossible.

II. Frameworks & Modes

Chapter 4. AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing (Karen Yeung, Andrew Howes and Ganna Pogrebna)

  • Algorithm Watch. (2019). AI Ethics Guidelines Global Inventory. https://algorithmwatch.org/en/project/ai-ethics-guidelines-global-inventory/
    • This is a global inventory of ethical guidelines for Artificial Intelligence (AI). The authors find that the absence of internal enforcement or governance mechanisms shows that many companies are merely “virtue signaling” with their guidelines. However, others can still try to hold the companies to account, be it the companies’ own employees, outside institutions like advocacy organizations, or academics.
  • Casanovas, P., Pagallo, U., & Madelin, R. (2019). The middle-out approach: Assessing models of legal governance in data protection, artificial intelligence and the Web of Data. The Theory and Practice of Legislation, 7(1), 1–25.
    • This paper focuses on what lies between top-down and bottom-up approaches to governance and regulation, namely the middle-out interface that is typically associated with forms of co-regulation. From a methodological viewpoint, this paper examines the middle-out approach in order to shed light on three different kinds of issues: (i) how to strike a balance between multiple regulatory systems; (ii) how to align primary and secondary rules of the law; and (iii) how to properly coordinate bottom-up and top-down policy choices. The paper argues that increasing complexity of technological regulation recommends new models of governance that revolve around this middle-out analytical ground.
  • Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0080
    • This paper is the introduction to the special issue entitled: ‘Governing artificial intelligence: ethical, legal and technical opportunities and challenges’. The issue addresses how AI can be designed and governed to be accountable, fair and transparent. Eight authors present in-depth analyses of the ethical, legal-regulatory and technical challenges posed by developing governance regimes for AI systems.
  • Council of Europe Consultative Committee of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data. (2019). Guidelines on Artificial Intelligence and Data Protection. https://rm.coe.int/guidelines-on-artificial-intelligence-and-data-protection/168091f9d8*
    • These guidelines, created by the Council of Europe, provide a set of baseline measures that governments, AI developers, manufacturers, and service providers should follow to ensure that AI applications do not undermine the human dignity and the human rights and fundamental freedoms of every individual, in particular with regard to the right to data protection.
  • Donahoe, E., & Metzger, M. (2019). Artificial intelligence and human rights. Journal of Democracy, 30(2), 115–126.
    • This article argues for a global governance framework to address the wide range of societal challenges associated with AI, including threats to privacy, information access, and the right to equal protection and nondiscrimination. Rather than working to develop new frameworks from scratch, the authors argue that the challenges associated with AI can best be confronted by drawing on the existing international human-rights framework.
  • Ghallab, M. (2019). Responsible AI: Requirements and challenges. AI Perspectives, 1(1), 1–7.
    • This paper discusses the requirements and challenges for responsible AI with respect to two interdependent objectives: (1) how to foster research and development efforts toward socially beneficial applications, and (2) how to take into account and mitigate the human and social risks of AI systems.
  • Hildebrandt, M. (2015) Smart technologies and the end(s) of law. Edward Elgar.*
    • This book highlights how the pervasive employment of machine-learning technologies that inform so-called ‘data-driven agency’ threaten privacy, identity, autonomy, non-discrimination, due process and the presumption of innocence. The author argues that smart technologies undermine, reconfigure and overrule the ends of the law in a constitutional democracy, jeopardizing law as an instrument of justice, legal certainty and the public good. However, the author calls on lawyers, computer scientists and civil society not to reject smart technologies, arguing that further engaging with these technologies may help to reinvent the effective protection of the rule of law.
  • Hoffmann-Riem, W. (2020). Artificial intelligence as a challenge for law and regulation. In Regulating artificial intelligence (pp. 1–29). Springer.
    • This chapter of Regulating Artificial Intelligence explores the types of rules and regulations that are currently available to regulate AI, while emphasizing that it is not enough to trust that companies that use AI will adhere to ethical principles. Rather, supplementary legal rules are needed, as company self-regulation is insufficient to promote ethical use of AI. The chapter concludes by stressing the need for transnational agreements and institutions in this area.
  • Hopkins, A. (2012). Explaining “Safety Case” (Regulatory Institutions Network Working Paper 87). https://www.csb.gov/assets/1/7/workingpaper_87.pdf*
    • This paper emphasizes features of safety case regimes that are sometimes taken for granted in the jurisdictions where they operate, drawing particularly on UK and Australian offshore safety case regimes, and sets out a model of what might be described as a mature safety case regime. Five basic features of safety case regimes are highlighted in this paper: a risk- or hazard-management framework, a requirement to make the case to the regulator, a competent and independent regulator, workforce involvement, and a general duty of care imposed on the operator.
  • Kloza, D., van Dijk, N., Gellert, R., Böröcz, I., Tanas, A., Mantovani, E., & Quinn, P. (2017). Data protection impact assessments in the European Union: Complementing the new legal framework towards a more robust protection of individuals. Brussels Laboratory for Data Protection & Privacy Impact Assessments.*
    • This policy brief provides recommendations for the European Union (EU) to complement the requirement for data protection impact assessment (DPIA), as set forth in the General Data Protection Regulation (GDPR), with a view of achieving a more robust protection of personal data. The policy brief attempts to draft a best practice for a generic type of impact assessment to remedy weak points in the DPIA requirement. The brief also provides background information on impact assessments as such: definition, historical overview, and their merits and drawbacks, and concludes by offering recommendations for complementing the DPIA requirement in the GDPR.
  • Mantelero, A. (2018). Artificial intelligence and data protection: Challenges and possible remedies. Council of Europe. https://rm.coe.int/artificial-intelligence-and-data-protection-challenges-and-possible-re/168091f8a6*
    • This report examines the current landscape of AI regulation and data protection and argues that it is important to extend European regulatory leadership in the field of data protection to a value-oriented regulation of AI based on three precepts: a values-based approach (encompassing social and ethical values), risk assessment and management, and participation.
  • McGregor, L. (2018). Accountability for governance choices in artificial intelligence: Afterword to Eyal Benvenisti’s foreword. European Journal of International Law, 29(4), 1079–1085.
    • This paper argues that if the ‘culture of accountability’ is to adapt to the challenges posed by new and emerging technologies, the focus cannot only be technology-led. It further argues that a culture of accountability must also be interrogative of the governance choices that are made within organizations, particularly those vested with public functions at the international and national level. 
  • Nemitz, P. (2018). Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180089.*
    • This paper describes the four core elements of today’s digital power concentration, which need to be seen in cumulation and which, taken together, threaten both democracy and functioning markets. It then recalls the experience of the lawless Internet, the relationship between technology and law as it developed in the Internet economy, and the experience with the GDPR, before turning to the key question for AI in democracy: which challenges of AI can safely and in good conscience be left to ethics, and which need to be addressed by enforceable rules that carry the legitimacy of the democratic process, that is, by law.
  • Raso, F. A., Hilligoss, H., Krishnamurthy, V., Bavitz, C., & Kim, L. (2018). Artificial intelligence & human rights: Opportunities & risks. Berkman Klein Center for Internet & Society Research Publication. http://nrs.harvard.edu/urn-3:HUL.InstRepos:38021439*
    • This report advances the emerging conversation on AI and human rights by evaluating the human rights impacts of six current uses of AI. The report’s framework recognizes that AI systems are not being deployed against a blank slate, but rather against the backdrop of social conditions that have complex pre-existing human rights impacts of their own.
  • Rieke, A., Bogen, M., & Robinson, D. G. (2018). Public scrutiny of automated decisions: Early lessons and emerging methods. Upturn and Omidyar Network. https://www.omidyar.com/insights/public-scrutiny-automated-decisions-early-lessons-and-emerging-methods*
    • This report maps out the landscape of public scrutiny of automated decision-making, both in terms of what civil society was or was not doing in this nascent sector and what laws and regulations were or were not in place to help regulate it. The report is based on extensive review of computer and social science literature, a broad array of real-world attempts to study automated systems, and dozens of conversations with global digital rights advocates, regulators, technologists, and industry representatives.
  • Risse, M. (2019). Human rights and artificial intelligence: An urgently needed agenda. Human Rights Quarterly, 41(1), 1–16.
    • This article reviews short-, medium-, and long-term challenges for human rights posed by AI. It argues that the short-term challenges include ways in which technology engages just about all rights on the UDHR, as exemplified by the use of effectively discriminatory algorithms. It further asserts that medium-term challenges include changes in the nature of work that could call into question many people’s status as participants in society, and that in the long term humans may have to live with machines that are intellectually, and possibly morally, superior, though this remains highly speculative.
  • Smuha, N. A. (2020). Beyond a human rights-based approach to AI governance: Promise, pitfalls, plea. http://dx.doi.org/10.2139/ssrn.3543112
    • This paper argues that, without elucidating the applicability and enforceability of human rights in the context of AI; adopting legal rules that concretize those rights where appropriate; enhancing existing enforcement mechanisms; and securing an underlying societal infrastructure that enables human rights in the first place, any human rights-based governance framework for AI risks falling short of its purpose.
  • Truby, J. (2020). Governing artificial intelligence to benefit the UN sustainable development goals. Sustainable Development. https://doi.org/10.1002/sd.2048
    • This article proposes effective preemptive regulatory options to minimize scenarios of Artificial Intelligence (AI) damaging the U.N.’s Sustainable Development Goals. It explores internationally accepted principles of AI governance, and argues for their implementation as regulatory requirements governing AI developers and coders, with compliance verified through algorithmic auditing. The article argues that proactively predicting such problems can enable continued AI innovation through well‐designed regulations adhering to international principles.
  • Vestby, A., & Vestby, J. (2019). Machine learning and the police: Asking the right questions. Policing: A Journal of Policy and Practice. https://doi.org/10.1093/police/paz035
    • This article argues that important issues concerning machine learning (ML) decision models can be unveiled without detailed knowledge of the learning algorithm, empowering non-ML experts and stakeholders in debates over whether, and how, to include such models, for example in the form of predictive policing. The article maintains that non-ML experts can, and should, review ML models, and it provides a “toolbox” of questions about three elements of a decision model that can be fruitfully scrutinized by non-ML experts: the learning data, the learning goal, and constructivism.
  • Yeung, K. (2018). A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework. Council of Europe. https://ssrn.com/abstract=3286027*
    • This report examines the implications of digital technologies for the concept of responsibility, investigating where responsibility should lie for their adverse consequences. The study explores (a) how human rights and fundamental freedoms protected under the European Convention on Human Rights may be adversely affected by the development of AI technologies and (b) how responsibility for those risks and consequences should be allocated.

Chapter 5. The Incompatible Incentives of Private Sector AI (Tom Slee)⬆︎ 

  • Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671–732. https://dx.doi.org/10.2139/ssrn.2477899
    • This seminal article uses American anti-discrimination law to argue for the importance of disparate impact doctrine when considering the effects of big data algorithms. It advocates a paradigm shift in anti-discrimination law, as the nature of these algorithms calls into question what “fairness” and “discrimination” mean in the digital age. The ideas conveyed in this article reflect the growing movement around fairness, accountability, and transparency in the machine learning community. (A minimal sketch of the disparate-impact ratio appears after this chapter’s list.)
  • Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. John Wiley & Sons.
    • This book examines the modern-day relevance of the “Jim Crow” laws that enforced racial segregation in the Southern United States. It argues that emerging technologies such as artificial intelligence can deepen inequities by “explicitly amplifying racial hierarchies,” even when they may seem neutral or benevolent at first glance.
  • Blasimme, A., Vayena, E., & Van Hoyweghen, I. (2019). Big data, precision medicine and private insurance: A delicate balancing act. Big Data & Society, 6(1). https://doi.org/10.1177%2F2053951719830111
    • Using national precision medicine initiatives as a case study, this article explores the tension between private insurers leveraging repositories of genetic and phenotypic data for economic gain and the utility of these databases as a public, scientific resource. Although the authors admit that information asymmetry between insurance companies and their policyholders still creates risks of reduced research participation, adverse selection, and discrimination, they argue that a governance model underpinned by trustworthiness, openness, and evidence can balance these competing interests.
  • Bowker, G. C., & Star, S. L. (2000). Sorting things out: Classification and its consequences. MIT Press.*
    • Classification, the process of grouping things according to shared qualities or characteristics, is a foundational class of machine learning problems. This book examines how classification, as an information infrastructure, has shaped human society from social, moral, and political standpoints. The authors draw numerous examples from health and medicine (e.g., the International Classification of Diseases and the classification of viruses) but also dedicate a chapter to racial classification during Apartheid.
  • Bucher, T. (2018). If… then: Algorithmic power and politics. Oxford University Press.
    • This book outlines how algorithms enter our social fabric and then act as political agents to “shape social and cultural life.” The author articulates her key contributions as: (1) offering a new ontology for algorithms, (2) identifying various forms of algorithm power and politics, and (3) providing a theoretical framework for the actions of algorithms.
  • Calo, R., & Rosenblat, A. (2017). The taking economy: Uber, information, and power. Columbia Law Review, 117(6), 1623–1690.
    • Technology companies such as Uber and Airbnb have popularized the “sharing economy,” in which goods and services are exchanged between private individuals over the internet. This article argues that asymmetries of information and power are fundamental to understanding and critiquing the sharing economy. For an effective legal response that prevents these companies from abusing their users, the authors claim that regulators must gain insight into how digital data is manipulated and remove the incentives for exploiting these asymmetries.
  • Espeland, W. N., & Sauder, M. (2016). Engines of anxiety: Academic rankings, reputation, and accountability. Russell Sage Foundation.*
    • Goodhart’s Law states: “When a measure becomes a target, it ceases to be a good measure.” This book explores how the rankings of United States law schools have profoundly shaped legal education through the creation of an all-defining hierarchy. Through the analysis of observational data and interviews with members of the legal profession, the authors reveal that in the pursuit of maximizing their rankings, law schools have negatively impacted their students, educators, and administrators.
  • Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
    • This book investigates how big data algorithms are systematically used to oppress the poor in the United States. The author’s approach is that of a storyteller, taking readers into the lives of individuals as they are “profiled, policed, and punished.” Social justice is central to the book’s argument, which advocates not for the feckless application of technology but for a deep, humane commitment to the eradication of poverty.
  • Harcourt, B. E. (2008). Against prediction: Profiling, policing, and punishing in an actuarial age. University of Chicago Press.*
    • Actuarial science applies mathematics and statistics to assess and manage risk. This book challenges the success attributed to actuarial methods in criminal justice and argues that they have instead warped the notion of “just punishment” and made life more difficult for the poor and marginalized.
  • Jacobs, J. (1961). The death and life of great American cities. Random House.*
    • This book is a critique of urban planning in the 1950s, arguing that problematic policy is to blame for the decline of neighborhoods across the United States. In the author’s view, a city takes on a life akin to a biological organism: a healthy city is characterized by diversity, a sense of community, and thriving streets that draw inhabitants into cafes, restaurants, and other places of gathering. The author contrasts the healthy city with government housing projects to demonstrate the separation of the haves and have-nots, a trend that is now being automated with big data and machine learning algorithms.
  • Khan, L. M. (2016). Amazon’s antitrust paradox. The Yale Law Journal, 126(3), 710–805.
    • Antitrust laws exist to protect consumers from predatory or monopolistic business practices. The author argues current antitrust laws fail to capture the reality of Amazon’s position as a digital platform because Amazon: (1) is incentivized to pursue growth over profit and (2) controls the infrastructure that enables its rivals to function.
  • MacKenzie, D. (2007). An engine, not a camera: How financial models shape markets. MIT Press.*
    • This book combines concepts from finance, sociology, and economics to argue that economic models do not merely describe markets but actively shape them. The author contextualizes this argument through the financial crises of 1987 and 1998, although parallels can also be drawn to the 2007 subprime mortgage crisis. These insights about economic models extend naturally to algorithms.
  • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
    • This book critiques the notion that search engines equally promote “all ideas, identities, and activities” and argues that they instead serve as a platform for racism and sexism. It stresses that results provided by Google, Bing, and other engines are not neutral but rather “reflect the political, social, and cultural values of the society [they] operate within.” In later chapters, the author extends her argument to the broader work conducted by professionals in library and information science.
  • O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
    • This book surveys how big data algorithms affect society, drawing examples from education, advertising, criminal justice, employment, and finance. The author places special emphasis on areas of society where it is not immediately clear that algorithms are making decisions. The three characteristics of a “Weapon of Math Destruction” are: (1) scale, (2) secrecy, and (3) destructiveness.
  • Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
    • This book draws attention to the secrecy and complexity of algorithms being used on Wall Street and in Silicon Valley. The author also argues that demanding transparency is only part of the solution, and that the decisions of these algorithms must be held to standards of fairness, non-discrimination, and openness to criticism.
  • Rosenblat, A. (2018). Uberland: How algorithms are rewriting the rules of work. University of California Press.
    • This book takes an ethnographic approach to unveil how Uber asserts control over its drivers and has also shaped the dialogue in areas such as sexual harassment and racial equity. Through interviews with drivers across the United States and Canada, the author grapples with ideas such as freedom, independence, and flexibility touted by the company while also illuminating its pervasive surveillance and information asymmetries.
  • Schelling, T. C. (1978). Micromotives and macrobehavior. WW Norton & Company.*
    • This book expands on the idea of the “tipping point” first proposed by Morton Grodzins: the point at which a group rapidly adopts a previously rare, seemingly unimportant practice and undergoes significant change as a result. A major theme of the book is “social sorting,” such as when neighborhoods cluster by race because inhabitants prefer to live around people who look like themselves. (A minimal sketch of Schelling-style sorting appears after this chapter’s list.)
  • Scott, J. C. (1998). Seeing like a state: How certain schemes to improve the human condition have failed. Yale University Press.*
    • This book offers a critique of the top-down social planning undertaken by states around the world and insights into why such schemes fail. Four conditions common to failed social planning initiatives are: (1) an attempt to impose order on society and nature, (2) a belief that science can improve all aspects of life, (3) a willingness to resort to authoritarianism, and (4) a helpless civil society.
  • Wachter, R. M., & Cassel, C. K. (2020). Sharing health care data with digital giants: Overcoming obstacles and reaping benefits while protecting patients. JAMA, 323(6), 507-508.
    • In response to the steady stream of news updates around the entry and involvement of the major technology companies (e.g. Google, Apple, Amazon) into healthcare, this commentary proposes ideals for a collaborative path forward. It emphasizes transparency (especially around financial disclosures and conflicts of interest), direct consultation with patients/patient advocacy groups, and data security.
  • Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
    • This book draws a common thread between digital technology companies by arguing that they engage in “surveillance capitalism.” Surveillance capitalists provide free services for behavioral data, which are then used to create “prediction products” of future consumer behaviour. These products are then traded in “behavioral futures markets,” which generates large amounts of wealth for surveillance capitalists. The author argues that surveillance capitalism is becoming a dominating force in not just economics, but society as a whole.
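The disparate impact doctrine discussed in the Barocas and Selbst entry above turns on a measurable disparity in outcome rates across groups. The following is a minimal sketch in Python; the hypothetical hiring data and the 0.8 threshold (the EEOC’s informal “four-fifths rule”) are illustrative assumptions, not anything prescribed by the article.

```python
# Minimal sketch of the disparate-impact ratio ("four-fifths rule").
# The data and threshold are illustrative assumptions, not drawn from
# Barocas & Selbst (2016).

def disparate_impact_ratio(outcomes_a, outcomes_b):
    """Ratio of favorable-outcome rates between two groups (1 = favorable)."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical hiring decisions for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 6/8 favorable
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 favorable

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate-impact ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:  # EEOC's informal four-fifths guideline
    print("Potential disparate impact under the four-fifths rule.")
```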
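Schelling’s tipping and sorting dynamics, noted in the entry above, are also directly computable. The one-dimensional sketch below shows how even a mild preference for similar neighbors tends to produce sharply clustered blocks; the line length, neighborhood radius, tolerance threshold, and relocation rule are illustrative assumptions rather than Schelling’s exact formulation.

```python
import random

# Minimal 1-D sketch of Schelling-style social sorting. Agents of two
# types ('A'/'B') relocate when too few nearby agents share their type.
# Line length, radius, and tolerance are illustrative assumptions.

random.seed(0)
N, RADIUS, TOLERANCE = 60, 3, 0.34  # unhappy if < 34% of neighbors match

line = [random.choice("AB") for _ in range(N)]

def unhappy(i):
    neighbors = [line[j] for j in range(max(0, i - RADIUS), min(N, i + RADIUS + 1)) if j != i]
    return sum(n == line[i] for n in neighbors) / len(neighbors) < TOLERANCE

for _ in range(500):  # each round, one unhappy agent swaps with a random spot
    movers = [i for i in range(N) if unhappy(i)]
    if not movers:
        break
    i, j = random.choice(movers), random.randrange(N)
    line[i], line[j] = line[j], line[i]

print("".join(line))  # typically shows clustered runs of As and Bs
```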

Chapter 6. Normative Modes: Codes and Standards (Paula Boddington)⬆︎ 

  • Atkinson, P. (2009). Ethics and ethnography. Twenty-First Century Society, 4(1), 17-30. http://doi.org/10.1080/17450140802648439*
    • This paper, drawing on previous work, concentrates on how ethnographic research is actually conducted. Atkinson argues that regulatory thinking in this area lacks development, particularly as it relates to sociology and anthropology: ethnographic field research poses practical challenges for regulation, exposing the insufficient understanding of social life embedded in today’s regulatory regimes.
  • Balfour, D., Adams, G., & Nickels, A.E. (2014). Unmasking administrative evil. Routledge.*
    • This book argues that a deep-seated administrative evil is present in contemporary public affairs, resulting in crimes against humanity such as genocide. By performing duties in line with their occupation, agents can not only disregard their participation in this administrative evil but also suffer from moral inversion: participating in evil while believing that what they are doing is morally good.
  • Baumer, D. L., Earp, J. B., & Poindexter, J. C. (2004). Internet privacy law: A comparison between the United States and the European Union. Computers & Security, 23(5), 400–412. https://doi.org/10.1016/j.cose.2003.11.001*
    • This article compares privacy law in the United States with privacy law in the European Union, examining these laws as they relate to the regulation of websites and online service providers. A central issue for regulation is that privacy laws and practices vary by region, whereas the Internet is worldwide.
  • Benkler, Y. (2019). Don’t let industry write the rules for AI. Nature, 569(7754), 161-162. https://www.doi.org/10.1038/d41586-019-01413-1
    • This article argues that technology companies seek to influence AI regulation to their own benefit. To combat this, Benkler argues that governments need to use their leverage to limit corporate influence on policy.
  • Boddington, P. (2017). Towards a code of ethics for artificial intelligence. Springer.*
    • This book works toward understanding the task of producing ethical codes and regulations in the rapidly advancing field of artificial intelligence, examining ethical and practical issues in the development of these codes. It serves as a resource for those who wish to address the ethical challenges of AI research.
  • Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial intelligence and the ‘good society’: The US, EU, and UK approach. Science and Engineering Ethics, 24, 505–528. https://doi.org/10.1007/s11948-017-9901-7
    • This article compares three reports published in October 2016 by the White House, the European Parliament, and the United Kingdom House of Commons on how to prepare society for the emergence of AI, and uses them to provide a framework for developing good AI policy. The authors argue that these reports fail to express a long-term strategy for developing a good AI society and conclude with a two-pronged solution to fill this gap.
  • Gunning, D. (2017). Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency, DARPA/I20.*
    • This presentation outlines the need for artificial intelligence that users can understand, trust, and effectively manage. Current AI systems, while extremely useful, have greatly diminished effectiveness because they often do not explain their actions to users.
  • Gunning, D., & Aha, D. (2019). DARPA’s explainable artificial intelligence (XAI) program. AI Magazine, 40(2), 44–58. https://doi.org/10.1609/aimag.v40i2.2850
    • This article provides a detailed look into DARPA’s four-year explainable artificial intelligence (XAI) program. The XAI program aimed to develop AI systems whose operations can be understood and trusted by the user.
  • Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds & Machines, 30, 99–120. https://doi.org/10.1007/s11023-020-09517-8
    • This article performs a semi-systematic analysis and comparison of 22 ethical AI guidelines, highlighting omissions as well as commonalities. Hagendorff also examines how these ethical principles are implemented in the research and creation of AI systems, and how this application can be improved.
  • House of Lords Select Committee on Artificial Intelligence. (2018). AI in the UK: Ready, willing and able? Report of First Session 2017-19. https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf *
    • This report considers the ethical, societal, and economic implications of the development of AI, concluding that the United Kingdom has the potential to be a global leader in the field. The Select Committee on Artificial Intelligence finds that AI can potentially solve complex problems and improve productivity, and that its potential risks can be mitigated.
  • Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
    • In recent years, private companies, academic institutions, and governments have created principles and ethical codes for artificial intelligence. Despite consensus that AI must be ethical, there is no widespread agreement about the requirements of ethical AI. This article maps and analyzes current ethical principles and codes as they relate to AI.
  • Metzinger, T. (2018). Towards a global artificial intelligence charter. In European Parliament Research Service (Ed.), Should we fear artificial intelligence? (pp. 27–33).
    • Metzinger argues that the public debate on artificial intelligence must move into political institutions, which must produce a set of ethical and legal constraints on the development and use of AI that is sufficient while remaining minimally intrusive. Metzinger lists the five most important problem domains in the field of AI ethics and gives recommendations for each.
  • Nissenbaum, H. (2004). Privacy as contextual integrity. Washington Law Review, 79, 119–158.
    • This article argues that conventional theoretical approaches to privacy employed for common privacy concerns are not sufficient to yield appropriate conclusions in light of the development of public surveillance. Nissenbaum argues for a new construct, contextual integrity, that will act as a replacement for traditional theories of privacy.
  • The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2019). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems. IEEE. https://ethicsinaction.ieee.org*
    • This treatise is a globally crowdsourced, collaborative document based on a previous call for input and two hundred pages of feedback. It aims to provide practical insights and to act as a reference work for professionals involved in the ethics of artificial intelligence, and includes policy recommendations.
  • Weller, A. (2017). Challenges for transparency. In W. Samek, G. Montavon, A. Vedaldi, L. Hansen, & K. R. Müller (Eds.), Explainable AI: Interpreting, explaining and visualizing deep learning (pp. 23–40). Springer.
    • This chapter provides an overview of the concept of transparency, of which there are varying types and whose definition varies with context. In light of this, it is difficult to determine objective criteria for measuring transparency. Weller also examines contexts in which transparency can cause harm.
  • Whittlestone, J., Nyrup R., Alexandrova A., Dihal K., & Cave S. (2019). Ethical and societal implications of algorithms, data, and artificial intelligence: A roadmap for research. Nuffield Foundation. https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf*
    • This report acts as a roadmap for published work on the implications of algorithms, data, and AI (ADA) for ethics and society. There is no agreed-upon ethical core or framework for issues relating to ADA; even well-established issues such as bias, transparency, and consent have different interpretations depending on context.
  • Whittlestone, J., Nyrup, R., Alexandrova, A., & Cave, S. (2019). The role and limits of principles in AI ethics: Towards a focus on tensions. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 195-200). https://doi.org/10.1145/3306618.3314289
    • This article draws on comparisons within the field of bioethics to highlight limitations of principles applied to AI ethical guidelines, such as fairness, privacy, and autonomy. The authors argue that the field of AI ethics needs to progress to exploring tensions that exist within these established principles. They offer potential solutions to these tensions.
  • Winfield, A. F., & Jirotka, M. (2018). Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0085
    • This paper examines ethical governance for artificial intelligence systems and robots. The authors argue that ethical governance is needed in order to create public trust in these new technologies. They conclude by proposing five pillars of effective ethical governance.
  • Winfield, A. F., Michael, K., Pitt, J., & Evers, V. (2019). Machine ethics: The design and governance of ethical AI and autonomous systems. Proceedings of the IEEE, 107(3), 509–517.
    • This paper focuses on the fourth industrial revolution, which includes AI and machine learning systems, as discussed at the 2016 World Economic Forum in Davos. It argues that the economic and societal implications of the fourth industrial revolution are no longer of concern only to academics, but are important matters for politics and public debate.
  • Zeng, Y., Lu, E., & Huangfu, C. (2018). Linking artificial intelligence principles. In Proceedings of the AAAI Workshop on Artificial Intelligence Safety (AAAI-Safe AI 2019).
    • In this article, the authors propose Linking Artificial Intelligence Principles (LAIP) as a framework for analyzing various sets of AI principles. Rather than adopting any single pre-developed set of principles, the authors propose linking existing frameworks so that they can interact.

Chapter 7. The Role of Professional Norms in the Governance of Artificial Intelligence (Urs Gasser and Carolyn Schmitt)⬆︎

  • Abbott, A. (1983). Professional ethics. American Journal of Sociology, 88(5), 855-885. https://doi.org/10.1086/227762*
    • Through comparative analysis, this paper establishes five basic properties of professional ethics codes: universal distribution, correlation with intra-professional status, enforcement dependent on visibility, individualism, and emphasis on collegial obligations. The paper then adds a third perspective to the two competing perspectives in the literature, relating ethics directly to intra- and extra-professional status. Finally, the author analyzes developments in professional ethics in America since 1900, specifying the interplay of the three processes hypothesized in the competing perspectives.
  • Anthony, K. H. (2001). Designing for diversity: Gender, race, and ethnicity in the architectural profession. University of Illinois Press.*
    • This book argues that the traditional mismatch between diverse consumers and the predominantly white, male producers of the built environment, combined with the shifting population balance toward communities of color, has left the architectural profession lacking true diversity, at its own peril.
  • Bynum, T. W., & Simon, R. (2004). Computer ethics and professional responsibility. Wiley Blackwell.*
    • This book discusses topics such as the history of computing; the social context of computing; methods of ethical analysis; professional responsibility and codes of ethics; computer security, risks, and liabilities; computer crime, viruses, and hacking; data protection and privacy; intellectual property and the “open source” movement; and global ethics and the internet.
  • Dasgupta, N. (2011). Ingroup experts and peers as social vaccines who inoculate the self-concept: The stereotype inoculation model. Psychological Inquiry, 22(4), 231-246. https://doi.org/10.1080/1047840X.2011.607313
    • This paper argues that an individual’s professional choices can be subtly influenced by cues in the academic environment that lead to their inclusion in, or exclusion from, a professional path. The paper uses the ‘stereotype inoculation model’ to explain this effect.
  • Davis, M. (2015). Engineering as profession: Some methodological problems in its study. Engineering Identities, Epistemologies and Values (pp. 65-79). Springer.*
    • This text considers engineering practice, including contextual analyses of engineering identity, epistemologies, and values. It examines issues such as engineering identity, engineering self-understandings enacted in the professional world, the distinctive character of engineering knowledge, and how engineering science and engineering design interact in practice.
  • Evetts, J. (2003). The sociological analysis of professionalism: Occupational change in the modern world. International Sociology, 18(2), 395-415. https://doi.org/10.1177%2F0268580903018002005*
    • The paper explores the appeal of the concepts of profession and professionalism and the increased use of these concepts in different occupational groups, work contexts and social systems. It also considers how the balance between the normative and ideological elements of professionalism is played out differently in occupational groups in different employment situations.
  • Frankel, M. S. (1989). Professional codes: Why, how, and with what impact? Journal of Business Ethics, 8(2-3), 109-115. https://doi.org/10.1007/BF00382575*
    • This paper argues that a tension between a profession’s pursuit of autonomy and the public’s demand for accountability has led to the development of codes of ethics, which act as both foundations and guides for professional conduct in the face of morally ambiguous situations. Three types of codes are identified in the paper: aspirational, educational, and regulatory.
  • Greene, D., Hoffmann, A. L., & Stark, L. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In Proceedings of the 52nd Hawaii International Conference on System Sciences (pp. 2122 – 2131). https://hdl.handle.net/10125/59651*
    • This paper argues that vision statements for ethical artificial intelligence and machine learning (AI/ML) co-opt the language of some critics, folding them into a limited, technologically deterministic, expert-driven view of what ethical AI/ML means and how it might work. The argument is developed using frame analysis to examine recent high-profile values statements endorsing ethical design for AI/ML.
  • Husted, B. W., & Allen, D. B. (2000). Is it ethical to use ethics as strategy? In J. Sójka & J. Wempe (Eds.), Business challenging business ethics: New instruments for coping with diversity in international business (pp. 21-31). Springer.
    • This article seeks to define a strategy concept in order to situate the different approaches to the strategic use of ethics and social responsibility found in the current literature. The authors then analyze the ethics of such approaches using both utilitarianism and deontology and end by defining limits to the strategic use of ethics.
  • IEEE Global Initiative (2018). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems.*
    • This paper analyses Google’s Duplex, a computer-based system with natural language capabilities that provides a human-sounding conversation as it performs a set of tasks, and some of the initial reaction to the system and its capabilities. The authors use the applications and characteristics of Duplex to investigate the ethics of pretending to be human and suggest that such impersonation is against evolving computer codes of ethics.
  • Johnson Jr., A. M. (1997). The underrepresentation of minorities in the legal profession: A critical race theorist’s perspective. Michigan Law Review, 95, 1005–1062. https://doi.org/10.2307/1290052*
    • This article discusses the import of the development of Critical Race Theory for the legal profession and the larger society, and explores whether Critical Race Theory can have a positive effect, or any effect, for those outside legal academia.
  • Johnson Jr, A. M. (2006). The destruction of the holistic approach to admissions: The pernicious effects of rankings. Indiana Law Journal, 81, 309. https://heinonline.org/HOL/P?h=hein.journals/indana81&i=319
    • This article argues that achieving racial and ethnic diversity in the student body of a law school is a laudable and productive end which all law schools and institutions of higher education should seek to achieve. It is written from the perspective that achieving a diverse student body is a positive goal and one that can and should be accomplished through the use of affirmative action.
  • Leslie, D., & Catungal, J. P. (2012). Social justice and the creative city: Class, gender and racial inequalities. Geography Compass, 6(3), 111-122. https://doi.org/10.1111/j.1749-8198.2011.00472.x
    • This paper argues that gender and racial equality are at stake in the creative city: continuing class inequality is maintained and exacerbated as a result of creativity-led urban economic development policies.
  • Noordegraaf, M. (2007). From “pure” to “hybrid” professionalism: Present-day professionalism in ambiguous public domains. Administration & Society, 39(6), 761-785. https://doi.org/10.1177%2F0095399707304434*
    • This paper aims to answer the following questions: What is professionalism? What is professional control in ambiguous occupational domains? What happens when different types of occupational control get mixed up? It argues that the solution lies in portraying classic professionalism as “controlled content,” transitioning from “pure” to “hybrid” professionalism, and portraying present-day professionalism as the “content of control” rather than controlled content.
  • Oz, E. (1993). Ethical standards for computer professionals: A comparative analysis of four major codes. Journal of Business Ethics, 12(9), 709-726. https://doi.org/10.1007/BF00881385*
    • This paper compares and evaluates the ethical codes of four major organizations of computer professionals in America. The author analyzes these codes in the context of the obligations that every professional has: to society, to the employer, to clients, to colleagues, to the professional organization, and to the profession.
  • Panteli, A., Stack, J., & Ramsay, H. (1999). Gender and professional ethics in the IT industry. Journal of Business Ethics, 22(1), 51-61. https://doi.org/10.1023/A:1006156102624*
    • This paper discusses the ethical responsibility of the Information Technology (IT) industry towards its female workforce, particularly the representation of women. The paper presents evidence that the IT industry is not gender-neutral and that it does little to promote or retain its female workforce. Therefore, the authors urge that professional codes of ethics in IT should be revised to take into account the diverse needs of its staff.
  • Rhode, D. L. (1994). Gender and professional roles. Fordham Law Review, 63, 39. https://heinonline.org/HOL/P?h=hein.journals/flr63&i=53*
    • This article, informed by contemporary feminist jurisprudence, discusses two issues: first, challenges to professional roles, relationships, and the delivery of services; and second, gender bias in the workplace and women’s underrepresentation in positions of the greatest power, status, and reward. Both discussions build on values traditionally associated with women that are undervalued in traditionally male-dominated professions.
  • Rhode, D. L. (1997). The professionalism problem. William & Mary Law Review, 39, 283. https://heinonline.org/HOL/P?h=hein.journals/wmlr39&i=295
    • This article argues that, given increasing discontent with the legal profession, particularly criticism of ethical practices that have widened the gap between professional ideals and professional work, the profession’s competing values must be acknowledged and mediated; these issues are too significant to continue unaddressed.
  • Shapiro, S. P. (1987). The social control of impersonal trust. American Journal of Sociology, 93(3), 623-658. https://doi.org/10.1086/228791
    • This paper discusses the ‘guardians of impersonal trust’ and finds that they create new problems: the resulting collection of procedural norms, structural constraints, entry restrictions, policing mechanisms, social-control specialists, and insurance-like arrangements increases the opportunities for abuse and encourages less acceptable trustee performance.
  • Standing, G. (2010). Work after globalization: Building occupational citizenship. Edward Elgar Publishing.
    • In this book, the author seeks to shift emphasis from the role of capital to the creativity of labour in the creation of value in the real economy. A central role is accorded to all of the skills and occupations that contribute to the construction of an economy and a civic culture governed by the public interest.
  • Stevens, B. (1994). An analysis of corporate ethical code studies: “Where do we go from here?” Journal of Business Ethics, 13(1), 63–69. https://doi.org/10.1007/BF00877156
    • This article differentiates between ethical codes, professional codes, and mission statements. Ethical code studies are then reviewed in terms of how codes are communicated to employees and whether the implications of violating codes are discussed. Finally, the author discusses how such codes are communicated and accepted, and their impact on employees.
  • Boddington, P. (2017). Towards a code of ethics for artificial intelligence. Springer International Publishing.
    • In this book, the author investigates how to produce realistic and workable ethical codes or regulations in this rapidly developing field to address immediate and realistic longer-term issues.
  • West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, race and power in AI. AI Now Institute. https://ainowinstitute.org/discriminatingsystems.html*
    • This paper shows that there is a diversity crisis in the AI sector across gender and race. The authors argue that the AI industry must acknowledge the gravity of its diversity problem and admit that existing methods have failed to contend with the uneven distribution of power and the means by which AI can reinforce such inequality.
  • Wilkins, D. B. (1998). Identities and roles: Race, recognition, and professional responsibility. Maryland Law Review, 57(4), 1502. https://heinonline.org/HOL/P?h=hein.journals/mllr57&i=1514
    • This article argues that issues relating to a lawyer’s non-professional identity (for example, gender, race, or religion) are omitted as motivations for lawyers to uphold the profession’s norms. The article also discusses narratives created in the legal profession about the nature of the lawyer’s role, particularly the claim that a lawyer’s non-professional identity is (or at least ought to be) irrelevant to their professional role.

III. Concepts & Issues

Chapter 8. We’re Missing a Moral Framework of Justice in Artificial Intelligence: On the Limits, Failings, and Ethics of Fairness (Matthew Le Bui and Safiya Umoja Noble)⬆︎

  • Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Polity. https://www.ruhabenjamin.com/race-after-technology*
    • Using critical race theory, this book analyzes how current technologies can and have reinforced White supremacy and increased social inequalities. The concept of “The New Jim Code” is introduced to describe how a wide range of discriminatory designs can: (1) encode inequity by amplifying racial hierarchies, (2) ignore and replicate social divisions, and (3) inadvertently reinforce racial biases while intending to ‘fix’ them. The book concludes with an overview of conceptual strategies, including tech activism and abolitionist tools, that might be used to disrupt and rectify current and future technological design.
  • Benjamin, R. (2016). Catching our breath: Critical race STS and the carceral imagination. Engaging Science, Technology, and Society, 2, 145-156. https://doi.org/10.17351/ests2016.70
    • This article brings together science and technology studies (STS) scholarship with critical race theory to examine carceral approaches to governing human life. The author argues for an expanded understanding of ‘the carceral’ which includes diverse forms of containment in health and medicine, education and employment, border policies, data practices, and virtual reality. The article concludes with a call for the adoption of abolitionist strategies to foster human agency in relation to science, technology, and innovation.  
  • Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 149–159. http://proceedings.mlr.press/v81/binns18a.html*
    • This article discusses contemporary issues of fairness and ethics in machine learning and artificial intelligence, arguing that these disciplines have increasingly formalized notions of discrimination, egalitarianism, and justice inherited from Enlightenment-era moral and political philosophy. The author concludes that the historical study of such frameworks can illuminate contemporary framings and assumptions. (A minimal sketch of two formal fairness criteria appears after this chapter’s list.)
  • Browne, S. (2015). Dark matters: On the surveillance of blackness. Duke University Press. https://www.dukeupress.edu/dark-matters
    • This book investigates surveillance practices through the conditions of blackness, showing how contemporary surveillance technologies are informed by historical racial formations, such as the policing of black lives through slavery, branding, runaway slave notices, and lantern laws. The author draws from black feminist theory, sociology, and cultural studies, to describe surveillance as a normalized material and discursive practice that reifies boundaries, bodies, and borders, using racial lines.
  • Bucher, T. (2018). If…Then: Algorithmic power and politics. Oxford University Press. https://www.doi.org/10.1093/oso/9780190493028.001.0001
    • This book investigates the political economy of algorithms and other recently developed informational infrastructures, such as search engines and social media. Arguing that we ‘live algorithmic lives’, the author describes how society is shaped by the political and commercial institutions that design technology. Using case studies to explore the materially discursive and cultural dimensions of software, the book argues that the most important aspects of algorithms lie not in their details but in how they are used to define social and political practices.
  • Chun, W. H. K. (2008). Control and freedom: Power and paranoia in the age of fiber optics. MIT Press. https://mitpress.mit.edu/books/control-and-freedom*
    • This book uses media archaeology and visual culture studies to examine the current political and technological coupling of freedom and control by tracing the emergence of the Internet as a mass medium of communication. Deleuze and Foucault ground the analysis of contemporary technologies such as webcams and facial recognition software. The author argues that the relationship between control and power on the Internet is networked and driven by sexuality and race, traces the origins of governmental regulation online to cyberporn, and concludes that the Internet’s potential for democracy is found in our mutual exposure to others we cannot control.
  • Daniels, J. (2009). Cyber racism: White supremacy online and the new attack on civil rights. Rowman & Littlefield Publishers.*
    • This book explores white supremacy on the Internet, tracing its origins from print media to the online era. The author describes ‘open’ and ‘cloaked’ sites through which white supremacist organizations have translated their publications online, and interviews small groups of teenagers as they navigate and attempt to comprehend the content. The author provides a discussion of cyber racism that addresses common assumptions about the inherent democratic nature of the Internet and its capacity as a recruitment tool for white supremacist groups. The book concludes with an analysis challenging conventions about racial equity, civil rights, and the Internet.
  • Daniels, J., Nkonde, M., & Mir, D. (2019). Advancing racial literacy in tech. Data & Society. https://datasociety.net/library/advancing-racial-literacy-in-tech/
    • In response to growing concerns about a lack of diversity training in the tech industry, this paper presents an overview of racial literacy practices designed for adoption by organizations. The authors discuss the role that tech products, company culture, and supply chain practices play in perpetuating structural racism, as well as strategies for capacity building grounded in intellectual understanding, emotional intelligence, and action.
  • Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.*
    • Considering the historic context of austerity, this book documents the use of digital technologies for distributional decision-making in social service delivery to poor and disadvantaged populations in the United States. Using ethnographic and interview methods, the author investigates the impact of automated systems such as Medicaid, Temporary Assistance for Needy Families, and electronic benefit transfer cards, finding that such systems, while expensive, are often less effective and regularly reproduce and aggravate bias, inequity, and state surveillance of the poor. The author speaks to legacy system prejudice and the ‘social specs’ that underlie our decision systems and data-sifting algorithms, and offers a number of participatory design solutions, including empathy through co-design, transparency, access, and control of information.
  • Gandy, O.H. (1993). The panoptic sort: A political economy of personal information. Westview Press. https://doi.org/10.1002/9781444395402.ch20
    • In this book the author describes the political economy of personal information (PI), documenting the various ways in which PI is classified, sorted, stored, and capitalized upon by institutions of power. The author discusses personal privacy in the context of individual autonomy, collective agency, and bureaucratic control, describing these operations as panoptical sorting processes.
  • Greene, D., Hoffmann, A. L., & Stark, L. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In Proceedings of the 52nd Hawaii International Conference on System Sciences. https://scholarspace.manoa.hawaii.edu/bitstream/10125/59651/0211.pdf  
    • This paper uses frame analysis to analyze recent high-profile value statements endorsing ethical design for artificial intelligence and machine learning (AI/ML). The authors conclude that vision statements for ethical AI/ML, in their adoption of specific language drawn from critics of the field, have become limited, expert-driven, and technologically deterministic.
  • Hoffmann, A.L. (2019). Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7), 900-915. https://www.doi.org/10.1080/1369118x.2019.1573912*  
    • This article critiques fairness and antidiscrimination efforts in AI, discussing how technical attempts to isolate and remove ‘bad’ data and algorithms tend to overemphasize ‘bad actors’ and ignore intersectional and broader sociotechnical contributions. The author describes how this leads to reactionary technical solutions that fail to displace the underlying logic that produces unjust hierarchies, and thus fail to address justice concerns.
  • Hoffmann, A. L. (2017). Data, technology, and gender: Thinking about (and from) trans lives. Spaces for the Future. Routledge. https://doi.org/10.4324/9780203735657-1
    • This book chapter discusses how data practices have situated and defined gender, with a particular focus on transgender identity and online discrimination perpetuated by harmful design. The author describes how data-driven platforms are used by many transgender activists to bring attention to the concerns of minority populations; however, these platforms have also been used to promote sexism and gender inequality.
  • Lewis, T., Gangadharan, S. P., Saba, M., Petty, T. (2018). Digital defense playbook: Community power tools for reclaiming data. Our Data Bodies.*
    • Our Data Bodies is a collaborative project that combines community-based organizing, capacity-building, and academic research focused on how marginalized communities are impacted by data-based technologies. This workbook presents research findings concerning data, surveillance, and community safety, and includes education activities using co-creation methods and tools towards data justice and data access for equity.
  • McIlwain, C. (2017). Racial formation, inequality and the political economy of web traffic. Information, Communication & Society, 20(7), 1073–1089. https://doi.org/10.1080/1369118X.2016.1206137
    • Using racial formation theory, this article reviews how race is represented and systematically reproduced on the Internet. The author uses an original dataset and network graph to document the architecture of web traffic, including traffic patterns among and between race-based websites. The study finds that web producers create hyperlinked networks that guide users to websites without consideration of racial or nonracial content, indicating the presence of race-based hierarchies of weighted values, influence and power.
  • Mills, C.W. (2017). Black rights/white wrongs: The critique of racial liberalism. Oxford University Press.*
    • This book of essays examines racial liberalism from a historical perspective, reconceptualizing justice and fairness in ways that reimagine social structures rather than being limited to individualistic moral virtuosity. The author remarks on the centrality of racial exclusion to liberalism’s canonical documents and declarations, and replaces liberalism’s classical individualistic social ontology with one that includes class, gender, and race.
  • Noble, S.U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. https://nyupress.org/9781479837243/algorithms-of-oppression/*
    • This book discusses how search engines, such as Google, are embedded with racial and sexist bias, challenging the notion that they are neutral algorithms acting outside of influence from their human engineers, and emphasizing the greater social impacts created through their design. Through an analysis of text and media searches, and research on paid advertising, the author argues that the monopoly status of a small group of companies alongside vested private interests in promoting some sites over others, has led to biased search algorithms that privilege whiteness and exhibit bias against people of colour, particularly women.
  • Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.*
    • This book explores the social and economic impacts of developing information practices, namely the influx of ‘big data’. The author discusses how these practices have benefited society through innovations in health care while also causing significant disruptions to social equity, e.g., the subprime mortgage crisis. The author attributes these negative impacts to the improper use of algorithms and concludes the book with several recommendations for how they might be corrected.
  • Vaidhyanathan, S. (2018). Antisocial media: How Facebook disconnects us and undermines democracy. Oxford University Press.*
    • This book focuses on the rise and socio-political impacts of the contemporary social media platform Facebook. The author discusses the consequences of Facebook’s dominance, including the ways in which user behaviour is tracked and shaped through the platform’s multifaceted operations, addressing how these practices have impacted global democratic processes such as national elections.
  • Zook, M., Barocas, S., Boyd, D., Crawford, K., Keller, E., Gangadharan, S. P., Goodman, A., Hollander, R., Koenig, B. A., Metcalf, J., Narayanan, A., Nelson, A., & Pasquale, F. (2017). Ten simple rules for responsible big data research. PLOS Computational Biology, 13(3). https://doi.org/10.1371/journal.pcbi.1005399
    • Acknowledging the growing size and availability of big data, the authors stress the importance of adopting ethical principles when working with large datasets, particularly as research agendas move beyond the typical computational and natural sciences to include human behaviour, interaction, and health. The paper outlines ten basic principles that focus on recognizing the human participants and complex systems contained within these datasets, making ethical questioning a part of the standard workflow.
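Part of what this chapter treats as the limits of “fairness” is visible in the fact that its formalizations can disagree about the same predictions. The sketch below is a minimal, hypothetical illustration; the toy data and the two criteria chosen (demographic parity and equal opportunity) are assumptions for demonstration, not a summary of any single work above.

```python
# Minimal sketch: two formal fairness criteria evaluated on the same
# hypothetical predictions. The data are illustrative assumptions; the
# point is that the criteria need not agree.

def positive_rate(pairs, keep):
    kept = [p for p in pairs if keep(p)]
    return sum(pred for pred, label in kept) / len(kept)

# (prediction, true_label) pairs for two groups.
group_a = [(1, 1), (1, 1), (1, 0), (0, 0), (1, 1), (0, 1)]
group_b = [(1, 1), (0, 1), (0, 0), (0, 0), (1, 1), (1, 0)]

# Demographic parity compares positive-prediction rates overall...
dp_gap = abs(positive_rate(group_a, lambda p: True)
             - positive_rate(group_b, lambda p: True))      # |0.67 - 0.50|

# ...while equal opportunity compares true-positive rates only.
eo_gap = abs(positive_rate(group_a, lambda p: p[1] == 1)
             - positive_rate(group_b, lambda p: p[1] == 1))  # |0.75 - 0.67|

print(f"Demographic parity gap: {dp_gap:.2f}")  # 0.17
print(f"Equal opportunity gap:  {eo_gap:.2f}")  # 0.08
# A 0.1 tolerance would flag the first criterion but not the second.
```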

Chapter 9. Accountability in Computer Systems (Joshua A. Kroll)⬆︎

  • Andrews, L. (2019). Public administration, public leadership and the construction of public value in the age of the algorithm and ‘big data’. Public Administration, 97(2), 296–310. https://doi.org/10.1098/rsta.2018.0080
    • This paper is an introduction to a special issue titled ‘Governing artificial intelligence: ethical, legal and technical opportunities and challenges’. It outlines recent developments in AI governance, examines how ethical frameworks are set, and provides suggestions to further the discourse on AI policy.
  • Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671–732. http://dx.doi.org/10.15779/Z38BG31
    • Barocas and Selbst argue that algorithmic techniques such as data mining are only as good as the data fed into the system, and that blind reliance on these systems may perpetuate discrimination. Further, because these biases are not intentionally incorporated into the machine, the source of discrimination is difficult to present to a court. The article examines these concerns in light of American anti-discrimination law.
  • Breaux, T. D., Vail, M. W., & Anton, A. I. (2006). Towards regulatory compliance: Extracting rights and obligations to align requirements with regulations. In 14th IEEE International Requirements Engineering Conference (RE’06) (pp. 49–58). IEEE.*
    • This article argues that current regulations prescribing stakeholder rights and obligations that must be satisfied by software systems are inadequate because they are extremely ambiguous; highly regulated fields such as healthcare require a more sophisticated approach. The article presents a model for extracting and prioritizing rights and obligations and applies it to the U.S. Health Insurance Portability and Accountability Act.
  • Desai, D. R., & Kroll, J. A. (2017). Trust but verify: A guide to algorithms and the law. Harvard Journal of Law & Technology, 31(1), 1–64.*
    • This article examines the potential for algorithms to be designed to produce outcomes that society prohibits while remaining undetectable because of the complexity of their design. The authors challenge the most commonly proposed solution to this problem, algorithmic transparency, arguing that such calls are incompatible with the realities of computer science. Instead, the article offers an alternative to transparency by providing recommendations on the regulation of public and private sector uses of software.
  • Du, M., Liu, N., & Hu, X. (2019). Techniques for interpretable machine learning. Communications of the ACM, 63(1), 68–77. https://www.dx.doi.org/10.1145/3359786
    • This article argues that concerns about the black-box nature of algorithmic systems have limited their adoption in society. It provides key insights into interpretability techniques and argues that interpretable machine learning can address this problem of limited adoption.
  • Edwards, L., & Veale, M. (2017). Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for. Duke Law & Technology Review, 16, 18. https://www.doi.org/10.31228/osf.io/97upg
    • This article argues that the right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to remedy problems of unfairness in machine learning algorithms. The article proposes that a solution to algorithmic bias might instead be found in other parts of the GDPR, such as the right to erasure.
  • Feigenbaum, J., Jaggard, A. D., Wright, R. N., & Xiao, H. (2012). Systematizing “accountability” in computer science. Technical Report YALEU/DCS/TR-1452, Yale University.*
    • This report provides a systematization of approaches to accountability taken in computer science research, categorizing them along the axes of time, information, and action and identifying multiple questions of interest within each axis. The systematization articulates the definitions of accountability that have been used in computer science (sometimes only implicitly) and contributes a perspective on how these different approaches are related.
  • Kroll, J. A., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2016). Accountable algorithms. University of Pennsylvania Law Review, 165, 633.*
    • This article challenges the dominant position in the legal literature that transparency will solve the problems of incorrect, unjustified, or unfair results of algorithmic decision-making. The article argues that technology is creating new opportunities, subtler and more flexible than total transparency, to design algorithms so that they better align with legal and policy objectives.
  • Kroll, J. A. (2018). The fallacy of inscrutability. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180084. https://doi.org/10.1098/rsta.2018.0084*
    • This paper argues that, contrary to the criticism that mysterious, unaccountable black-box software systems threaten to make the logic of critical decisions inscrutable, algorithms are fundamentally understandable pieces of technology. The paper investigates the contours of inscrutability and opacity, the way they arise from power dynamics surrounding software systems, and the value of proposed remedies from disparate disciplines, especially computer ethics and privacy by design. It concludes that policy should not accede to the idea that some systems are of necessity inscrutable. 
  • Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38.*
    • This paper argues that the field of explainable artificial intelligence can build on existing research on explanation in the social sciences, and reviews relevant papers from philosophy, cognitive psychology and cognitive science, and social psychology. It draws out important findings and discusses how they can inform work on explainable artificial intelligence.
  • Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 279–288).
    • This article analyzes the increased focus on building simplified models that help explain how artificial intelligence systems make decisions, and compares how models and their explanations are treated in sociology and philosophy. The authors argue that creating such models may not be necessary and that a broader approach could be taken instead.
  • Molnar, C. (2019). Interpretable machine learning. Leanpub.
    • This book provides a guide to making black-box models explainable to the average person. It offers an overview of the concept of interpretability, outlines simple interpretable models, and then discusses model-agnostic methods for interpreting black-box models. (A minimal sketch of one such method appears after this chapter’s list.)
  • Nissenbaum, H. (1996). Accountability in a computerized society. Science and Engineering Ethics, 2(1), 25–42.*
    • This essay warns of eroding accountability in computerized societies and argues that assumptions about computing and features of situations in which computers are produced create barriers to accountability. Drawing on philosophical analyses of moral blame and responsibility, the article identifies four barriers in particular: 1) the problem of many hands, 2) the problem of bugs, 3) blaming the computer, and 4) software ownership without liability. The paper concludes with ideas on how to reverse this trend.
  • Pearson, S. (2011). Toward accountability in the cloud. IEEE Internet Computing, 15(4), 64–69.
    • This article suggests that accountability will become a central concept in the cloud and in new mechanisms meant to increase trust in cloud computing. The article then argues that a contextual approach must be applied and a one-size-fits-all system avoided.
  • Reisman, D., Schultz, J., Crawford, K., & Whittaker, M. (2018). Algorithmic impact assessments: A practical framework for public agency accountability. AI Now Institute.*
    • This report proposes an Algorithmic Impact Assessment (AIA) framework designed to support affected communities and stakeholders as they seek to assess the claims made about automated decision systems and determine where and if their use is acceptable. The report outlines the five key elements of the framework and argues that implementing this framework will help public agencies achieve four key policy goals.
  • Renda, A. (2019). Artificial intelligence: Ethics, governance and policy challenges. CEPS Task Force Report. http://aei.pitt.edu/id/eprint/97038
    • This report examines whether or not AI will make the world better, arguing that we have a unique opportunity to shape policy choices. The report consolidates the findings of the CEPS Task Force into a single document.
  • Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data & Society, 4(2), 2053951717736335. https://doi.org/10.1177%2F2053951717736335
    • This paper argues that just as a conception of justice is needed to establish the rule of law, so too is a conception of data justice needed. Data justice would require fairness in the way people are represented as a result of digital data production. Taylor proposes three pillars of international data justice.
  • Veale, M., & Brass, I. (2019). Administration by algorithm? Public management meets public sector machine learning. https://ssrn.com/abstract=3375391
    • This article examines the recent push to use administrative data to build algorithms that assist with day-to-day operations. It addresses several key questions, such as what drives these new approaches and how public management decisions differ when machine learning techniques are used in public service. The article analyzes this from different levels of government and maps out current efforts to standardize machine learning in the public sector.
  • Wachter, S., & Mittelstadt, B. (2019). A right to reasonable inferences: re-thinking data protection law in the age of big data and AI. Columbia Business Law Review, 494.*
    • This article argues that Big Data analytics and artificial intelligence tend to make non-intuitive and unverifiable inferences about individual people. Big Data and AI rely on data of questionable value, which creates new opportunities for discrimination. The legal status of these decisions is also contested. Wachter and Mittelstadt propose a new legal right to address this problem: a data protection right to reasonable inferences.
  • Weitzner, D. J., Abelson, H., Berners-Lee, T., Feigenbaum, J., Hendler, J., & Sussman, G. J. (2007). Information accountability. Technical Report MIT-CSAIL-TR-2007-034, MIT.*
    • This paper argues that debates over online privacy, copyright, and information policy questions have been overly dominated by the access restriction perspective. The paper proposes an alternative to the “hide it or lose it” approach that currently characterizes policy compliance on the Web. The alternative proposed is to design systems that are oriented toward information accountability and appropriate use, rather than information security and access restriction.

Chapter 10. Transparency (Nicholas Diakopoulos)⬆︎

  • Ananny, M. (2016). Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Science, Technology, & Human Values, 41(1), 93–117.*
    • This paper develops a definition of networked information algorithms (NIAs) as assemblages of institutionally situated code, practices, and norms with the power to create, sustain, and signify relationships among people and data through minimally observable, semiautonomous action. Starting from Merrill’s prompt to see ethics as the study of “what we ought to do,” the paper examines ethical dimensions of contemporary NIAs. Specifically, the paper develops an empirically grounded, pragmatic ethics of algorithms, through tracing an algorithmic assemblage’s power to convene constituents, suggest actions based on perceived similarity and probability, and govern the timing and timeframes of ethical action.
  • Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989.*
    • This article critically interrogates the ideal of transparency, tracing some of its roots in scientific and sociotechnical epistemological cultures and presents 10 limitations to its application. The article argues that transparency is inadequate for understanding and governing algorithmic systems and sketches an alternative typology of algorithmic accountability grounded in constructive engagements with the limitations of transparency ideals.
  • Blacklaws, C. (2018). Algorithms: Transparency and accountability. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128). https://doi.org/10.1098/rsta.2017.0351
    • This opinion piece explores the issues of accountability and transparency in relation to the growing use of machine learning algorithms. Citing the recent work of the Royal Society and the British Academy, it looks at the legal protections for individuals afforded by the EU General Data Protection Regulation and asks whether the legal system will be able to adapt to rapid technological change. It concludes by calling for continuing debate that is itself accountable, transparent and public.
  • Brkan, M. (2019). Do algorithms rule the world? Algorithmic decision-making and data protection in the framework of the GDPR and beyond. International Journal of Law and Information Technology, 27(2), 91–121.
    • The purpose of this article is to analyze the rules of the General Data Protection Regulation (GDPR) and the Directive on Data Protection in Criminal Matters on automated decision-making and to explore how to ensure transparency of such decisions, in particular those taken with the help of algorithms. While the Directive on Data Protection in Criminal Matters does not seem to give the data subject the possibility to familiarize herself with the reasons for such a decision, the GDPR obliges the controller to provide the data subject with ‘meaningful information about the logic involved’ (Articles 13(2)(f), 14(2)(g) and 15(1)(h)), thus raising the much-debated question whether the data subject should be granted a ‘right to explanation’ of the automated decision. This article goes beyond the semantic question of whether this right should be designated as the ‘right to explanation’ and argues that the GDPR obliges the controller to inform the data subject of the reasons why an automated decision was taken. 
  • Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0080*
    • This paper is the introduction to the special issue entitled “Governing artificial intelligence: ethical, legal and technical opportunities and challenges.” The issue addresses how AI can be designed and governed to be accountable, fair and transparent. Eight authors present in-depth analyses of the ethical, legal-regulatory and technical challenges posed by developing governance regimes for AI systems.
  • Citron, D. K., & Pasquale, F. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89(1), 1–35.*
    • This paper argues that procedural regularity is essential for those stigmatized by artificially intelligent scoring systems and that the American due process tradition should inform basic safeguards in this regard. It argues that regulators should be able to test scoring systems to ensure their fairness and accuracy and that individuals should be given meaningful opportunities to challenge adverse decisions based on scoring systems.
  • de Fine Licht, J. (2014). Magic wand or Pandora’s Box? How transparency in decision making affects public perceptions of legitimacy. University of Gothenburg.
    • This dissertation identifies four main mechanisms that might explain positive effects of transparency on public acceptance and trust: that transparency enhances policy decisions, which indirectly makes people more trusting; that transparency is generally perceived to be fairer than secrecy; that transparency increases public understanding of decisions and decision makers; and that transparency increases the public feelings of accountability. The dissertation builds on five scenario-based experiments, with each study manipulating different degrees and versions of transparency for individual policy level decisions. The dissertation concludes that transparency might have the power to increase public perceptions of legitimacy, but also that the effect is more complex than often presumed. 
  • de Fine Licht, K., & de Fine Licht, J. (2020). Artificial intelligence, transparency, and public decision-making. AI & Society. https://doi.org/10.1007/s00146-020-00960-w
    • This paper discusses how transparency in and about AI decision-making can affect the public’s perception of the legitimacy of decisions and decision-makers, and produces a framework for analyzing these questions. The paper argues that a limited form of transparency focused on providing justifications for decisions has the potential to provide sufficient ground for perceived legitimacy without producing the harms full transparency would bring.
  • Diakopoulos, N., & Koliska, M. (2017). Algorithmic transparency in the news media. Digital Journalism, 5(7), 809–828.*
    • This research presents a focus group study that engaged 50 participants across the news media and academia to discuss case studies of algorithms in news production and elucidate factors that are amenable to disclosure. The results indicate numerous opportunities to disclose information about an algorithmic system across layers such as the data, model, inference, and interface. The authors argue that the findings underscore the deeply entwined roles of human actors in such systems as well as challenges to adoption of algorithmic transparency including the dearth of incentives for organizations and the concern for overwhelming end-users with a surfeit of transparency information.
  • Diakopoulos, N. (2015). Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398–415.*
    • This paper studies the notion of algorithmic accountability reporting as a mechanism for elucidating and articulating the power structures, biases, and influences that computational artifacts exercise in society. The paper proffers a framework for algorithmic power based on autonomous decision-making and motivates specific questions about algorithmic influence. The article analyzes five cases of algorithmic accountability reporting involving the use of reverse engineering methods in journalism to provide insight into the method and its application in a journalism context.
  • Alloa, E. (2018). Transparency: A magic concept of morality. In E. Alloa & D. Thomä (Eds.), Transparency, society and subjectivity: Critical perspectives (pp. 31–32). Palgrave Macmillan.
    • This book critically engages with the idea of transparency whose ubiquitous demand stands in stark contrast to its lack of conceptual clarity. The book carefully examines this notion in its own right, traces its emergence in Early Modernity and analyzes its omnipresence in contemporary rhetoric. 
  • Fox, J. (2007). The uncertain relationship between transparency and accountability. Development in Practice, 17(4–5), 663–671.*
    • This article questions the widely held assumption that transparency generates accountability. It argues that transparency mobilizes the power of shame, yet the shameless may not be vulnerable to public exposure; truth often fails to lead to justice. After exploring different definitions and dimensions of the two ideas, the article focuses on the question of what kinds of transparency lead to what kinds of accountability, and under what conditions. It concludes by proposing that each concept can be unpacked into two distinct variants: transparency can be either ‘clear’ or ‘opaque’, while accountability can be either ‘soft’ or ‘hard’.
  • Fung, A., Graham, M., & Weil, D. (2007). Full disclosure: The perils and promise of transparency. Cambridge University Press.*
    • Based on a comparative analysis of eighteen major targeted transparency policies, the authors suggest that transparency policies often produce information that is incomplete, incomprehensible, or irrelevant to the consumers, investors, workers, and community residents who could benefit from them. The authors show that transparency sometimes fails because those who are threatened by it form political coalitions to limit or distort information. The authors argue that to be successful, transparency policies must place the needs of ordinary citizens at center stage and produce information that informs their everyday choices.
  • Fenster, M. (2015). Transparency in search of a theory. European Journal of Social Theory, 18(2), 150–167.
    • This article argues that transparency is best understood as a theory of communication that excessively simplifies and thus is blind to the complexities of the contemporary state, government information, and the public. Taking them fully into account, the article argues, should lead us to question the state’s ability to control information, which in turn should make us question not only the improbability of the state making itself visible, but also the improbability of the state keeping itself secret.
  • Flyverbom, M. (2019). The digital prism. Cambridge University Press.
    • This book shows how the management of our digital footprints, visibilities, and attention is a central force in the digital transformation of societies and politics. Seen through the prism of digital technologies and data, the lives of people and the workings of organizations take new shapes in our understanding. In order to make sense of these, the book argues that we must push beyond common ways of thinking about transparency and surveillance and look at how managing visibilities is a central but overlooked phenomenon that influences how people live, how organizations work, and how societies and politics operate.
  • Hansen, H. (2015). Numerical operations, transparency illusions and the datafication of governance. European Journal of Social Theory, 18(2), 203–220.
    • This article analyzes the forms of transparency produced by the use of numbers in social life. It examines what it is about numbers that often makes their ‘truth claims’ so powerful, investigates the role that numerical operations play in the production of retrospective, real-time and anticipatory forms of transparency in contemporary politics and economic transactions, and discusses some of the implications resulting from the increasingly abstract and machine-driven use of numbers. It argues that the forms of transparency generated by machine-driven numerical operations open up possibilities for individual and collective practices in ways that are intimately linked to precautionary and pre-emptive aspirations and interventions characteristic of contemporary governance.
  • Hood, C. (2010). Accountability and transparency: Siamese twins, matching parts, awkward couple? West European Politics, 33, 989–1009.
    • This paper contrasts three possible ways of thinking about the relationship between accountability and transparency as principles of governance: as ‘Siamese twins’, not really distinguishable; as ‘matching parts’ that are separable but nevertheless complement one another smoothly to produce good governance; and as ‘awkward couple’, involving elements that are potentially or actually in tension with one another. It then identifies three possible ways in which we could establish the accuracy or plausibility of each of those three characterisations.
  • Meijer, A., Bovens, M., & Schillemans, T. (2014). Transparency. In M. Bovens, R. E. Goodin, & T. Schillemans (Eds.), The Oxford Handbook of Public Accountability. Oxford University Press.
    • This chapter opens up the “black box” of the relation between transparency and accountability by examining the expanding body of literature on government transparency. Three theoretical relations between transparency and accountability are identified: transparency facilitates horizontal accountability; transparency strengthens vertical accountability; and transparency reduces the need for accountability. Reviewing studies into the relation between transparency and accountability, this chapter argues that under certain conditions and in certain situations, transparency may contribute to accountability: transparency facilitates accountability when it actually presents a significant increase in the available information, when there are actors capable of processing the information, and when exposure has a direct or indirect impact on the government or public agency.
  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679*
    • This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. Finally, it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
  • Turilli, M., & Floridi, L. (2009). The ethics of information transparency. Ethics and Information Technology, 11(2), 105–112.*
    • The paper argues that transparency is not an ethical principle in itself but a pro-ethical condition for enabling or impairing other ethical practices or principles, offering a new definition of transparency in order to take into account the dynamics of information production and the differences between data and information. The paper further defines the concepts of “heterogeneous organization” and “autonomous computational artefact” in order to clarify the ethical implications of the technology used in implementing information transparency. It argues that explicit ethical designs, which describe how ethical principles are embedded into the practice of software design, would represent valuable information that could be disclosed by organisations in order to support their ethical standing.
  • Westbrook, L., Pera, A., Neguriţă, O., Grecu, I., & Grecu, G. (2019). Real-time data-driven technologies: Transparency and fairness of automated decision-making processes governed by intricate algorithms. Contemporary Readings in Law and Social Justice, 11(1), 45–50.
    • This paper draws on recent research on real-time data-driven technologies to estimate the percentage of Facebook users who say they have no, a little, or a lot of control over the content that appears in their news feed, and the percentage of social media users who say it is acceptable for social media sites to use data about them and their online activities to recommend events in their area, recommend someone they might want to know, show them ads for products and services, or show them messages from political campaigns (by age group). The paper uses structural equation modeling to analyze the collected data.
  • Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2019). Transparency in algorithmic and human decision-making: Is there a double standard? Philosophy & Technology, 32, 661–683. https://doi.org/10.1007/s13347-018-0330-6
    • This paper reviews evidence demonstrating that much human decision-making is fraught with transparency problems, shows in what respects AI fares little worse or better, and argues that at least some regulatory proposals for explainable AI could end up setting the bar higher than is necessary or indeed helpful. The article asserts that demands of practical reason require the justification of action to be pitched at the level of practical reason, and decision tools that support or supplant practical reasoning should not be expected to aim higher than this. This paper casts this desideratum in terms of Daniel Dennett’s theory of the “intentional stance” and argues that since the justification of action for human purposes takes the form of intentional stance explanation, the justification of algorithmic decisions should take the same form. 

Chapter 11. Responsibility and Artificial Intelligence (Virginia Dignum)⬆︎

  • Ashrafian, H. (2015). Artificial intelligence and robot responsibilities: Innovating beyond rights. Science and Engineering Ethics, 21(2), 317–326. https://doi.org/10.1007/s11948-014-9541-0
    • This paper aims to examine AI rights beyond the context of commensurate responsibilities and duties, using philosophical perspectives. Comparisons to arguments surrounding the moral rights of animals are made. AI rights are also analyzed in regard to legal principles. Ashrafian argues that core tenets of humanity should be promoted in the development of AI rights.
  • Boden, M., Bryson, J., Caldwell, D., Dautenhahn, K., Edwards, L., Kember, S., Newman, P., Parry, V., Pegman, G., Rodden, T., Sorell, T., Wallis, M., Whitby, B., & Winfield, A. (2017). Principles of robotics: Regulating robots in the real world. Connection Science, 29(2), 124–129. https://doi.org/10.1080/09540091.2016.1271400*
    • This article outlines a framework of five ethical principles and seven high-level messages for responsible robotics.
  • Brożek, B., & Jakubiec, M. (2017). On the legal responsibility of autonomous machines. Artificial Intelligence and Law, 25(3), 293–304. https://doi.org/10.1007/s10506-017-9207-8
    • This article examines the question of whether autonomous machines can be seen as agents who have legal responsibility. The authors argue that although possible, these machines should not be granted the status of legal agents, at least at their current stage of development.
  • Chockler, H., & Halpern, J. Y. (2004). Responsibility and blame: A structural-model approach. Journal of Artificial Intelligence Research, 22(1), 93–115. https://www.aaai.org/Papers/JAIR/Vol22/JAIR-2204.pdf
    • This article argues for the extension of the definition of causality to include the notion of degree of responsibility. The authors outline the concept of degree of blame, which accounts for the epistemic state of a given agent in a causal chain. They argue that degree of responsibility can act as a rough indicator for degree of blame; the central definition is glossed below.
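    • As a rough gloss of the paper’s central definition (our simplified notation, not the authors’ full structural-model formalism), degree of responsibility falls off with the number of other changes needed to make a cause critical:

```latex
% Simplified paraphrase of Chockler and Halpern's definition:
% the degree of responsibility of X = x for outcome \varphi is
\[
  \mathrm{dr}(X = x,\ \varphi) \;=\; \frac{1}{k + 1},
\]
% where k is the size of the smallest set of changes to other variables
% under which X = x becomes critical (flipping it flips \varphi).
% In an 11--0 vote, changing 5 other votes makes any one voter critical,
% so each voter's degree of responsibility is 1/6; in a 6--5 vote, each
% majority voter is already critical (k = 0) and bears responsibility 1.
```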
  • Cranefield, S., Oren, N., & Vasconcelos, W. W. (2018). Accountability for practical reasoning agents. In International Conference on Agreement Technologies (pp. 33-48). Springer. https://doi.org/10.1007/978-3-030-17294-7_3
    • This article begins by discussing the concept of “accountable autonomy” in light of the rise of practical reasoning AI, considering research from a range of fields including public policy, health, and management to clarify the term. The article moves on to provide a list of requirements for accountable autonomous agents and provides potential research questions that could result from these requirements. The authors conclude by proposing the formulation of responsibility as a new core feature of accountability. 
  • Dignum, V. (2017). Responsible autonomy. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI’2017) (pp. 4698–4704). https://doi.org/10.24963/ijcai.2017/655*
    • This article discusses leading ethical theories for ensuring ethical behavior by artificial intelligence systems and proposes alternatives to the traditional methods. Dignum argues that there must be methodologies employed to uncover values of both designers and stakeholders in order to create understanding and trust for AI systems.
  • Dignum, V. (2018). Ethics in artificial intelligence: introduction to the special issue. Ethics and Information Technology, 20, 1–3. https://doi.org/10.1007/s10676-018-9450-z*
    • This introduction provides an overview on the ethical impact of artificial intelligence, briefly summarizing the aims of the papers contained in the special issue.
  • Dignum, V. (2019). Responsible Artificial Intelligence: How to develop and use AI in a responsible way. Springer International Publishing.
    • Dignum considers the implications of AI’s rise for traditional social structures, including issues of integrity surrounding those who build and operate AI. Dignum also provides an overview of related work and further reading in the field of ethical issues in modern algorithmic systems.
  • Dodig-Crnkovic, G., & Persson, D. (2008). Sharing moral responsibility with robots: A pragmatic approach. In P. K. Holst & P. Funk (Eds.), Frontiers in Artificial Intelligence and Applications Volume 173. IOS Press Books. https://doi.org/10.3233/978-1-58603-867-0-165
    • This article outlines an approach to roboethics that argues for moral responsibility of AI as a pragmatic, social regulatory mechanism. Because individual artificial intelligences perform tasks differently, they can in some sense be responsible for outcomes. The authors argue that the development of this social regulatory mechanism requires ethical training for engineers as well as democratic debate on what is best for society.
  • Eisenhardt, K. M. (1989). Agency theory: An assessment and review. The Academy of Management Review, 14(1), 57–74. http://www.jstor.org/stable/258191?origin=JSTOR-pdf.*
    • This paper provides a definition and analysis of agency theory. Eisenhardt draws two conclusions: first, that agency theory provides insight into information systems, outcome uncertainty, incentives, and risk; second, that agency theory has empirical value, especially when used with complementary perspectives. Eisenhardt recommends that agency theory be used to combat problems stemming from cooperative structures.
  • Floridi, L. (2016). Should we be afraid of AI? Aeon Essays.*
    • This essay addresses concerns, expressed by tech CEOs and consumers alike, that the development of super-intelligent AI could spell disaster for the human race. Current reality is far more mundane, with AI merely absorbing what is put in by humans. Floridi argues that we need to focus on concrete problems with AI, rather than sci-fi scenarios.
  • Floridi, L., & Sanders, J. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d*
    • This article offers a definition of the term agent and highlights the concerns and responsibilities attributed to different types of agents, particularly artificial agents. The authors conclude by arguing that there is room in computer ethics for the concept of a moral agent that lacks free will, mental states, and/or responsibility.
  • Floridi, L., Cowls, J., Beltrametti, M., Chatila, R.,  Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28, 689-707. https://doi.org/10.1007/s11023-018-9482-5*
    • This article discusses the findings of AI4People, a study which aimed to lay the foundations for a Good AI Society. The authors introduce core opportunities and drawbacks for AI society, laying out five ethical principles that should be considered in AI development. They also offer 20 recommendations for assessing and developing good AI and for incentivizing its creation.
  • Gotterbarn, D. W., Bruckman, A., Flick, C., Miller, K., & Wolf, M. J. (2018). ACM code of ethics: A guide for positive action. Communications of the ACM, 61(1), 121–128.*
    • This article provides the first update on the Association for Computing Machinery’s code of ethics since 2003, incorporating feedback from email, focus groups, and workshops. This update is significant, as some principles from the 2003 version were removed entirely, and new principles added.
  • Leikas, J., Koivisto, R., & Gotcheva, N. (2019). Ethical framework for designing autonomous intelligent systems. Journal of Open Innovation: Technology, Market, and Complexity, 5(1), 18. https://doi.org/10.3390/joitmc5010018
    • This article reviews existing ethical principles and analyzes them in terms of their application to artificial intelligence. It then presents an original ethical framework for AI design.
  • Pelea, C. I. (2019). The relationship between artificial intelligence, human communication and ethics. A futuristic perspective: Utopia or dystopia? Media Literacy and Academic Research, 2(1), 38–48.
    • This article examines the question of whether and to what extent our social parameters of communication will need to be re-drawn because of the rise of artificial intelligence. Pelea first discusses how humans and AI communicate on an individual level. Second, she investigates the collective social anxiety surrounding the rise of AI and the ethical dilemmas this creates. Pelea argues that it is vital that we undertake the challenge of creating a culture of social responsibility surrounding AI.
  • Russell, S., & Norvig, P. (2009). Artificial intelligence: A modern approach (3rd ed.). Pearson Education.*
    • This textbook provides an introduction to the theory and practice of artificial intelligence that is comprehensive and up to date.
  • Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., Hirschberg, J., Kalyanakrishnan, S., Kamar, E., Kraus, S., Leyton-Brown, K., Parkes, D., Press, W., Saxenian, A., Shah, J., Tambe, M., & Teller, A. (2016). Artificial intelligence and life in 2030: Report of the 2015–2016 Study Panel. Stanford University.*
    • The One Hundred Year Study on Artificial Intelligence, launched in 2014, aims to provide a long-term investigation into AI and its effect on social groups and society at large. This is the first study to come out of the project; it discusses ways to frame the project in light of recent advances in AI technology, specifically in the public sector.
  • Saariluoma, P., & Leikas, J. (2019). Ethics in designing intelligent systems. International Conference on Human Interaction and Emerging Technologies, 1018, 47-52. Springer. https://doi.org/10.1007/978-3-030-25629-6_8
    • Hume’s guillotine, which holds that one can never derive values from facts, suggests that artificial intelligence systems can never be ethical, as they operate based on facts. The authors argue that Hume’s distinction between facts and values is not well founded, as ethical systems are composed of rules meant to guide actions, which combine both facts and values. While machines can be built to process ethical information, the authors argue that human input is still vital at this point in time.
  • Turiel, E. (2002). The culture of morality: Social development, context, and conflict. Cambridge University Press.*
    • Turiel challenges the common view that extreme individualism and a subsequent lack of community involvement are responsible for the moral crisis in American society, drawing on research from developmental psychology, anthropology, and sociology. Turiel argues that each subsequent generation has attributed decline in society to the actions of young people.

Chapter 12. The Concept of Handoff as a Model for Ethical Analysis and Design (Deirdre K. Mulligan and Helen Nissenbaum)⬆︎

  • Akrich, M., & Latour, B. (1992). A summary of a convenient vocabulary for the semiotics of human and nonhuman assemblies. In W. Bijker & J. Law (Eds.), Shaping technology/building society: Studies in sociotechnical change (pp. 259-264). MIT Press.*
    • Structured as a dictionary list illuminated by examples, this article provides a comprehensive semiotic vocabulary for engagement with the topic of human and non-human assemblies. The authors explore the continuum between human and non-human through the description of all as actants, placed into specific categories by framing paradigms. Particular emphasis is placed on the role of observer, context, and perspective in subjective understandings of object, relation, interaction, function, and purpose. 
  • Borenstein, J., & Arkin, R. (2016). Robotic nudges: The ethics of engineering a more socially just human being. Science and Engineering Ethics, 22(1), 31–46.
    • This paper engages with the ethics of “nudge” interactions between human actors and autonomous agents, and whether it is permissible to design these machines to promote “socially just” tendencies in humans. Employing a Rawlsian “principles of justice” framework, the authors explore arguments for and against nudges more broadly, and act specifically to analyze whether robotic nudges are morally or practically different from other kinds of decision architecture. They also put forth ethical principles for those seeking to design such systems.
  • Brownsword, R. (2011). Lost in translation: Legality, regulatory margins, and technological management. Berkeley Technology Law Journal, 26(3), 1321–1365.*
    • This article discusses the role of regulation and the law in the translation from a traditional legal order (wherein participants can act in a multitude of ways but are normatively constrained by legal rules) to a “technologically managed” order (wherein individuals are restricted to certain actions by the nature of the technology used to carry out those actions). The topic is explored through the lenses of a shift on the part of the regulated party from “moral” to “prudential” motivations for action, and further a shift on the part of the regulation from normative to non-normative purpose.
  • Calo, R. (2015). Robotics and the lessons of cyberlaw. California Law Review, 103(3), 513–563.
    • This article engages in extensive exploration of the potential implications of cyberlaw and the regulation of the internet on law for and regulation of artificially intelligent robots. The author examines robotics as an “exceptional” technology with the potential to qualitatively and quantitatively shift sociotechnical contexts. He argues that the discipline of cyberlaw (developed in response to the similarly “exceptional” technology of the internet) provides essential insights for responding to the challenges that robots bring forth.
  • Cohen, J. E. (2006). Pervasively distributed copyright enforcement. Georgetown Law Journal, 95(1), 1–48.
    • This article discusses the impact of strategies of “pervasively distributed copyright enforcement”, whereby intellectual property rights holders seek to embed intellectual property enforcement functions within foundational communications networks, protocols, and devices. The author characterizes these attempts as a “hybrid regime” that neither completely aligns with centralized authority nor with distributed internalized norms and explores the observed and potential impacts of this on networked society. 
  • Coglianese, C., & Lehr, D. (2016). Regulating by robot: Administrative decision making in the machine-learning era. Georgetown Law Journal, 105(5), 1147–1224.
    • This paper engages in critical legal and ethical analysis of the present and future role of machine-learning algorithms in decision-making by administrative bodies. The authors examine constitutional and administrative law challenges to the role of autonomous agents in this context and conclude that the use of such agents is likely to be legal but will only be ethical if certain important principles are adhered to.
  • Elish, M. C. (2019). Moral crumple zones: Cautionary tales in human-robot interaction. Engaging Science, Technology, and Society, 5, 40–60. https://doi.org/10.17351/ests2019.260
    • This paper explores the balance of ethical weight within sociotechnical systems through the concept of a “moral crumple zone.” This refers to human actors with ostensible authority (but little meaningful power) over a complex human-machine system who are set up to take disproportionate individual responsibility for failings in systemic structure and design. The author develops this concept by analyzing several high-profile accidents, their antecedent systemic structures, and the subsequent media portrayals of the actors involved.
  • Flanagan, M., & Nissenbaum, H. (2014). Values at play in digital games. MIT Press.*
    • This book comprises a guide to the value-sensitive conception and design of digital games. The authors seek to develop a theoretical and practical framework for critically identifying the moral and political values embedded within games. They seek further to offer guidance to game designers who seek to ensure that particular values are incorporated within digital games they create.
  • Friedman, B. (1996). Value-sensitive design. Interactions, 3(6), 16–23.*
    • This article engages with the argument that values are always both embedded within and emergent from the ways in which tools are built and used. The author subsequently advocates for principles of “value-sensitive design,” wherein designers are explicitly called upon to engage actively and thoughtfully with these values and their implications. The topics of user autonomy and system bias are used as the primary case studies for exploring the concept.
  • Friedman, B., Hendry, D. G., & Borning, A. (2017). A survey of value sensitive design methods. Foundations and Trends in Human–Computer Interaction, 11(2), 63–125.*
    • This article comprises a broad theoretical and methodological discussion of “value sensitive design” alongside a specific survey of 14 different methods for actualizing the concept.  The authors seek to evaluate each method for its role and usefulness in engaging with a particular aspect of “value sensitive design” in practice, as well as to offer general insights about the core characteristics of the concept of “value sensitive design” overall.
  • Joh, E. E. (2016). Policing police robots. UCLA Law Review Discourse, 64, 516–543.
    • Through legal and ethical lenses, this paper examines the potential impacts of artificially intelligent robots on policing. The author analyzes arguments in favor of and against the adoption of robots by police agencies, and she argues that considering the case study of these technologies raises deeper questions about police decision-making at large that have not yet been systematically addressed in an effective fashion.
  • Latour, B. (1992). Where are the missing masses? The sociology of a few mundane artifacts. In W. Bijker & J. Law (Eds.), Shaping technology/building society: Studies in sociotechnical change, (pp. 225-258). MIT Press.*
    • This chapter engages with the “technological determinism/social constructivism dichotomy” through the concept of the “actor network approach.” This approach seeks to emphasize the bidirectionality of the interactions between social actors and technological actors in sociotechnical systems, arguing that physical structure and design of the material world acts to shape and limit the boundaries of its social construction. With a focus upon “mundane artifacts,” the author explores the ways in which technologies act to influence the thoughts and decisions of human actors.
  • Lessig, L. (2009). Code: And other laws of cyberspace. Basic Books.*
    • This book engages in a comprehensive discussion of the structure and regulation of the internet, with a focus upon the impact of the four forces of “Law, Norms, Market, and Architecture”. In particular, the author argues that the computer code which defines the structure and function of the internet acts to shape and regulate the conduct of its users in much the same way that traditional regulatory instruments such as legal codes do.
  • Radin, M. (2004). Regulation by contract, regulation by machine. Journal of Institutional and Theoretical Economics, 160(1), 142-156.*
    • The article concerns the impacts of mass standardized contracts and digital rights management systems on how property and contract law regulate intellectual property. The author examines the impacts of these technologies on the underlying knowledge-generation incentives of intellectual property, on the distinction between waivable rules and inalienable entitlements, and on the role of legislative approval of “regulation by machine”.
  • Schaub, G., Jr. (2019). Controlling the autonomous warrior: Institutional and agent-based approaches to future air power. Journal of International Humanitarian Legal Studies, 10(1), 184–202.
    • Working through both institution-centric and agent-centric lenses, this article engages with the legal and ethical challenges posed by the handoff of lethal power to increasingly autonomous weapons systems. The author argues that artificial intelligence is not unprecedented in its ability to change the structure of warfare and contends that past work in understanding the ethical and legal relationships between principals and agents may be effectively adapted to characterizing and addressing these new challenges.
  • Shilton, K., Koepfler, J. A., & Fleischmann, K. R. (2014, February). How to see values in social computing: Methods for studying values dimensions. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 426–435).*
    • This article discusses a framework for understanding the nature and role of values in sociotechnical systems. The authors advocate for the theoretical characterization of values based on a system of “source dimensions” (describing the origins of values) and “attribute dimensions” (describing the traits of values). The work further engages with the role and effectiveness of different methods of studying values in social computing, such as ethnographies or content analyses, in relation to this framework.   
  • Surden, H. (2007). Structural rights in privacy. SMU Law Review, 60, 1605.*
    • This paper puts forth the thesis that privacy rights are primarily regulated not explicitly by the law, but implicitly by the presence of latent structural constraints which impose transaction costs to the violation of privacy. Subsequently, the author argues that a substantial portion of privacy becomes vulnerable as technology acts to reduce the magnitude of these structural constraints. The work offers a conceptual framework for identifying and responding to contexts in which this vulnerability may occur.
  • Susser, D., Roessler, B., & Nissenbaum, H. (2019). Technology, autonomy, and manipulation. Internet Policy Review, 8(2). https://www.doi.org/10.14763/2019.2.1410
    • This article explores the issue of “online manipulation”, which is alleged to occur when powerful technology companies use algorithms to shape online experiences with the goal of bringing forth a specific behavior in the user. The authors argue that such practices may be harmful both consequentially (in their impacts on the ethical and economic interests of users and society at large) and deontologically (in directly threatening individual autonomy). Emphasis is placed on the case study of the Cambridge Analytica and Facebook scandal, as well as the broader issue of election manipulation. 
  • Umbrello, S., & De Bellis, A. F. (2018). A value-sensitive design approach to intelligent agents. In R. Yampolskiy (Ed.), Artificial Intelligence Safety and Security (pp. 395-410). CRC Press.
    • This chapter discusses the methodology of “Value-Sensitive Design” and its implications for the design and implementation of artificially intelligent systems. The authors argue that value sensitivity must be proactively embedded through the entire AI development process. They act further to explore the limitations of “Value-Sensitive Design” and seek to identify ways in which it might be adapted to the specific challenge of working with AI.  
  • Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121-136.*
    • This article argues that power relations are embodied within technologies, imbuing the artifacts themselves with politics. First, the author discusses instances in which a specific technical device becomes a way of settling an issue in a particular community and acts to shape the power relations within that community. Second, he contends that some technologies are “inherently political”, in that they either require or are strongly compatible with certain kinds of political relationships.
  • Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2019). Algorithmic decision-making and the control problem. Minds and Machines, 29(4), 555–578.
    • This paper discusses the “control problem”, wherein it is difficult for human actors to maintain meaningful oversight and control of largely automated systems. The authors build on a body of industrial-organizational psychology work and extend the topic to modern algorithmic actors, offering both a theoretical framework for understanding the problem and a series of design principles for overcoming it in human-machine systems.

Chapter 13. Race and Gender (Timnit Gebru)⬆︎

  • Amrute, S. (2019). Of techno-ethics and techno-affects. Feminist Review, 123(1), 56–73. https://doi.org/10.1177/0141778919879744 
    • This article considers the current state of digital labor conditions and identity formation, including uneven geographies of race, gender, class, ability, and histories of colonialism and inequality. The author highlights specific cases in which digital labor frames embodied subjects and proposes new ways in which digital laborers might train themselves to be empowered to identify emergent ethical concerns, using the concept of attunement as a framework for care. Predictive policing, data mining, and algorithmic racism are discussed, as is the urgency of including digital laborers in the design and analysis of algorithmic technologies and platforms.
  • Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing*
    • This investigative report documents and analyzes racial bias against black defendants in algorithmic criminal risk score systems, such as COMPAS, used by courts and parole boards in the United States to forecast future criminal behavior. The authors describe how these algorithmic formulas, and others like them, were written in a way that promotes racial disparity, resulting in black defendants being inaccurately identified as future criminals more frequently than white defendants. The report suggests that bias is inherent in all actuarial risk assessment instruments (ARAIs) and that widespread audits and reassessments are necessary; a sketch of the kind of disparity audit involved follows below.
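    • A minimal sketch, on toy data, of the kind of disparity audit the report performed: comparing false positive rates (non-reoffenders flagged as high risk) across groups. All names and values below are illustrative assumptions, not the COMPAS data.

```python
# Toy false-positive-rate audit in the spirit of the ProPublica analysis.
# Random data stands in for real risk scores and recidivism outcomes.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of actual non-reoffenders (y_true == 0) flagged high-risk."""
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)      # 1 = reoffended
y_pred = rng.integers(0, 2, size=1000)      # 1 = flagged high-risk
group = rng.choice(["A", "B"], size=1000)   # protected attribute

for g in ("A", "B"):
    mask = group == g
    print(f"group {g}: FPR = {false_positive_rate(y_true[mask], y_pred[mask]):.2f}")
```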
  • Atanasoski, N., & Vora, K. (2019). Surrogate humanity: Race, robots, and the politics of technological futures. Duke University Press. https://www.dukeupress.edu/Assets/PubMaterials/978-1-4780-0386-1_601.pdf
    • This book traces the ways in which robots, artificial intelligence, and other technologies, serve as surrogates for human workers within a labor system defined by racial capitalism and patriarchy. The authors analyze technologies including sex robots, military drones, and sharing-economy platforms to illustrate how liberal structures of anti-blackness, settler colonialism, and patriarchy are fundamental to human and machine interactions. Through a critical feminist STS analysis of contemporary digital labor platforms, the authors address the global racial and gendered erasures underlying techno-utopian fantasies of a post-labor society, and consider the definitions of what it means to be a human.
  • Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. John Wiley & Sons. https://www.ruhabenjamin.com/race-after-technology*
    • Using critical race theory, this book analyzes how current technologies can and have reinforced White supremacy and increased social inequalities. The concept of the New Jim Code is introduced as a means of describing how a wide range of discriminatory designs can: 1. encode inequity by amplifying racial hierarchies, 2. ignore and replicate social divisions, and 3. inadvertently reinforce racial biases while intending to ‘fix’ them. The book concludes with an overview of conceptual strategies, including tech activism and abolitionist tools, that might be used to disrupt and rectify current and future technological design.
  • Bolukbasi, T., Chang, K.W., Zou, J.Y., Saligrama, V., & Kalai, A.T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems (pp. 4349-4357). http://arxiv.org/abs/1607.06520*
    • This article examines the presence of gender bias within the popular framework of word embedding, which represents text data as vectors and is used in many machine learning and natural language processing tasks. The authors found that gender bias and stereotyping, in line with greater societal bias, are common in many word embedding models, even those trained on large data sets such as Google News articles. The article provides an algorithmic methodology for modifying embeddings in order to remove gender stereotypes while maintaining desired associations; the core projection step is sketched below.
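    • A minimal sketch, under toy assumptions, of the projection step at the heart of the paper’s debiasing method: removing the component of a word vector that lies along a gender direction. The three-dimensional vectors are invented for illustration; the paper derives its gender direction from many gendered word pairs, not a single pair.

```python
# Toy illustration of neutralizing a word vector along a gender direction.
import numpy as np

def debias(vec, gender_dir):
    """Project out the gender direction, keeping the rest of the vector."""
    unit = gender_dir / np.linalg.norm(gender_dir)
    return vec - np.dot(vec, unit) * unit

he = np.array([0.8, 0.1, 0.3])
she = np.array([0.2, 0.7, 0.3])
gender_direction = he - she                  # the paper uses many such pairs

programmer = np.array([0.5, 0.2, 0.9])       # invented "occupation" vector
print(debias(programmer, gender_direction))  # now orthogonal to the direction
```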
  • Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. MIT Press. https://doi.org/10.7551/mitpress/11022.001.0001*
    • This book describes society’s relationship with technology in the contemporary moment, taking a critical stance on how much computers are relied upon for daily tasks. This reliance, the author states, has prompted an overproduction of poorly designed and harmful systems. Through a series of interactions with current technologies, such as driverless cars and machine learning models, the author defines limits for which technology should and should not be applied, arguing against the prevalent framework of technochauvism, which upholds that technology is the solution to any and all problems.
  • Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (pp. 77–91). http://proceedings.mlr.press/v81/buolamwini18a.html*
    • This conference paper investigates race and gender discrimination in machine learning algorithms, presenting an approach to the evaluation of bias in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. The authors conclude that darker-skinned females were the most misclassified group within their datasets, indicating substantial disparities in the accuracy of classifying individuals with varying skin types. As the authors stress, such biases require immediate attention in order to ensure that fair, transparent, and accountable facial analysis algorithms are built into commercial technologies; the shape of such an intersectional audit is sketched below.
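    • A minimal sketch, on invented data, of the intersectional form of audit the paper performs: error rates broken out by the cross of two attributes rather than by each attribute alone. Column names and values are illustrative assumptions, not the paper’s benchmark.

```python
# Toy intersectional accuracy audit in the spirit of Gender Shades.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "gender": rng.choice(["female", "male"], size=n),
    "skin_type": rng.choice(["darker", "lighter"], size=n),
    "correct": rng.integers(0, 2, size=n),  # 1 = classified correctly
})

# Accuracy per intersectional subgroup, not just per single attribute.
print(df.groupby(["gender", "skin_type"])["correct"].mean())
```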
  • Chun, W. H. K. (2009). Introduction: Race and/as technology; Or, how to do things to race. Camera Obscura, 70(24). https://doi.org/10.1215/02705346-2008-013
    • This article discusses the interconnections between race and technology, discussing the various ways in which race can be defined and operationalized through societal and cultural understandings. Framing her discussion in past and current critical theory, the author describes race as a technique that is carefully constructed through a historical understanding of tools, mediation, and framings that build identity and history. In conclusion, the author states that in order to disrupt the concept of race, the concepts of nature/culture, privacy/publicity, self/collective, and media/society need to be reframed as well.
  • de la Peña, C. (2010). The history of technology, the resistance of archives, and the whiteness of race. Technology and Culture, 51(4), 919–937. https://muse.jhu.edu/article/403272/pdf
    • Using the technological development of the X-ray and artificial sweeteners as case studies, the author outlines the problem of sources and the ‘Whiteness’ of the official archives, noting that significant contributions by members of marginalized races and genders have been left out of the record, and that documents and data supporting these stories continue to be elusive to those managing the archives. The article concludes by emphasizing the need for a shift in perception: rather than race being occluded from the archive, the archive itself is constructed around the concept of whiteness.
  • Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press. https://virginia-eubanks.com/books/*
    • Considering the historic context of austerity, this book documents the use of digital technologies for distributional decision-making for social service delivery to poor and disadvantaged populations in the United States. Using ethnographic and interview methods, the author investigates the impact of automated systems such as Medicaid and Temporary Assistance for Needy Families, and electronic benefit transfer cards, stating that such systems, while expensive, are often less effective, and regularly reproduce and aggravate bias, equity disparities, and state surveillance of the poor. The author speaks to legacy system prejudice and the ‘social specs’ that underlie our decision-systems and data-sifting algorithms, and offers a number of participatory design solutions including empathy through co-design, transparency, access, and control of information.
  • Gangadharan, S. P. (Ed.). (2014). Data and discrimination: Collected essays. Open Technology Institute, New America Foundation. https://www.newamerica.org/oti/data-and-discrimination/
    • This book brings together work from eighteen researchers from various backgrounds looking at discriminatory impacts of big data and algorithms. Three themes are discussed: 1. Discovering and responding to harms, 2. Participation, presence, and politics, and 3. Fairness, equity, and impact. Many of the authors in this collection remark that there is a gap in public awareness of the extent to which algorithms influence their daily lives.
  • Hamidi, F., Scheuerman, M. K., & Branham, S. M. (2018). Gender recognition or gender reductionism? The social implications of embedded gender recognition systems. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–13). https://doi.org/10.1145/3173574.3173582*
    • This article investigates the social implications of automatic gender recognition (AGR) computational methods within the transgender community. The authors interview thirteen transgender individuals, including three technology designers, to document current perceptions of and attitudes towards AGR. The article concludes that transgender individuals hold strongly negative attitudes towards AGR, questioning whether it can accurately identify their gender. Privacy and potential harms are discussed with respect to the impacts of being misidentified, and the authors include design recommendations to accommodate gender diversity.
  • Hicks, M. (2017). Programmed inequality: How Britain discarded women technologists and lost its edge in computing. MIT Press. http://programmedinequality.com/*
    • This book describes the history of feminized and gendered labor practices within Britain’s computer industry. Drawing from government files, personal interviews, and archives from the central British computing companies, the author describes how the neglect of the female labor force contributed to the industry’s short run from 1944-1974. The book concludes by describing how gendered discrimination still persists in the computing industry, leading to many women’s abandonment of the field, and compares the historic economic conditions in Britain to the current state of the industry in the United States.
  • Jasanoff, S. (Ed.). (2006). States of knowledge: The co-production of science and social order. Routledge. https://sheilajasanoff.org/research/co-production/.
    • A collection of essays by leading scholars in the field of science and technology studies (STS) examining the relationships between political power and scientific knowledge. Central themes include ‘co-production,’ which describes how scientific knowledge is linked to understandings of social identity, institutions, discourse, and representation; and critiques of the ‘view from nowhere’ largely associated with traditional ontology and philosophies of science.
  • Lewis, J. E., Arista, N., Pechawis, A., & Kite, S. (2018). Making kin with the machines. Journal of Design and Science. https://doi.org/10.21428/bfafd97b
    • This article considers artificial intelligence through diverse Indigenous epistemologies, reflecting on traditional ways of knowing and speaking that acknowledge kinship networks connecting humans and nonhuman entities. As the authors state, Indigenous communities have retained languages and protocols that enable dialogue with non-human kin (such as AI), encouraging intelligible discourse across different materialities. Indigenous development environments (IDEs) are presented as a framework that institutes Indigenous cultural values as fundamental aspects of all programming choices, in order to instill greater public accountability into the design of AI systems.
  • Noble, S.U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press. https://nyupress.org/9781479837243/algorithms-of-oppression/*
    • This book discusses how search engines, such as Google, are embedded with racist and sexist bias, challenging the notion that they are neutral algorithms operating free of influence from their human engineers, and emphasizing the broader social impacts created through their design. Through an analysis of text and media searches, and research on paid advertising, the author argues that the monopoly status of a small group of companies, alongside vested private interests in promoting some sites over others, has led to biased search algorithms that privilege whiteness and discriminate against people of color, particularly women.
  • O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group. https://doi.org/10.5860/crl.78.3.403*
    • This book describes how algorithms, as mathematical models, are responsible for a large number of our daily decisions, from car loans to health insurance to students’ grades. Yet these decision processes remain largely opaque and unregulated. Moreover, the author argues, prevailing societal faith in the fairness of mathematical systems makes resistance very challenging when errors and discriminatory decisions occur. The author concludes with a call for greater responsibility with respect to regulation and algorithmic transparency.
  • Schiller, A., & McMahon, J. (2019). Alexa, alert me when the revolution comes: Gender, affect, and labor in the age of home-based artificial intelligence. New Political Science, 41(2), 173–191. https://doi.org/10.1080/07393148.2019.1595288
    • This article uses Marxist feminism and theories of labor to interrogate gender, race, and affect within domestic artificial intelligence systems, such as Amazon’s Alexa or Google Home Assistant. The authors describe how such devices make reproductive labor in households more visible, while simultaneously obscuring the gendered and racialized dimensions of their designs in order to streamline their effects for capital and heighten the affective dynamics they draw from.
  • Stitzlein, S.M. (2004). Replacing the ‘view from nowhere’: A pragmatist-feminist science classroom. Electronic Journal of Science Education.*
    • This article takes a critical stance on current pedagogical models of science that adhere to traditional, objective, and empirical ‘nature-based’ philosophical models. Such frameworks are considered by the author to be problematically masculine, disembodied, and aperspectival. The author adopts a sociological methodology, analyzing teachers’ philosophies of science by studying classroom practices. An alternative pedagogical model based on pragmatist feminism and the intersectionality of a ‘lived world’ is proposed in response to the outdated, traditional ‘view from nowhere.’
  • West, S.M., Whittaker, M. and Crawford, K. (2019). Discriminating systems: Gender, race and power in AI. AI Now Institute. https://ainowinstitute.org/discriminatingsystems.html
    • The first report in the AI Now Institute’s multi-year project examining race, gender, and power in AI presents a review of existing literature and current research on gender, race, and class. The report focuses on the scale of AI’s current diversity crisis and possible strategies to mitigate its effects. The diversity problem within the AI industry and issues of bias in AI systems tend to be treated as separate issues; as this report points out, however, discrimination in the workforce and in system building are intrinsically linked, and both will need to be addressed to design an effective solution.

Chapter 14. The Future of Work in the Age of AI: Displacement or Risk-Shifting? (Pegah Moradi and Karen Levy)⬆︎

  • Autor, D. H., Levy, F., & Murnane, R. J. (2003). The skill content of recent technological change: An empirical exploration. The Quarterly Journal of Economics, 118(4), 1279-1333. https://doi.org/10.1162/003355303322552801*
    • This article argues that computers can substitute for workers in performing cognitive and manual tasks that can be accomplished by following explicit rules, and can complement workers in performing nonroutine problem-solving and complex communication tasks. It demonstrates that the falling price of computer capital in recent decades has been a causal force increasing the demand for workers, typically the college-educated, who can perform nonroutine tasks.
  • Ball, K. (2010). Workplace surveillance: An overview. Labor History, 51(1), 87-106. https://doi.org/10.1080/00236561003654776
    • This article reviews research findings about surveillance in the workplace and the issues surrounding it. It establishes that organizations and surveillance go hand in hand, and that workplace surveillance can take social and technological forms. Further, it identifies that workplace surveillance has consequences for employees, affecting employee well-being, work culture, productivity, creativity, and motivation. It also highlights, however, that employees are using information technologies to expose unsavory practices by employers and to organize collectively.
  • Braverman, H. (1998). Labor and monopoly capital: The degradation of work in the twentieth century. NYU Press.*
    • This book is an analysis of the science of managerial control, the relationship of technological innovation to social class, and the eradication of skill from work under capitalism. The book started what came to be known as the “labor process debate”, which focuses closely on the nature of “skill” and the decline in the use of skilled labor as a result of managers’ strategies for control.
  • Brynjolfsson, E., Mitchell, T., & Rock, D. (2018). What can machines learn, and what does it mean for occupations and the economy? AEA Papers and Proceedings, 108, 43-47. American Economic Association.*
    • This paper aims to answer the question of which occupational tasks will be most affected by machine learning (ML). Using a rubric evaluating tasks’ suitability for ML and applying it to over 18,000 tasks, the paper finds that ML affects different occupations than previous waves of automation did, that most occupations have at least some tasks suitable for ML, that few occupations are fully automatable using ML, and that realizing the potential of ML usually requires a redesign of job task content.
  • Chui, M., Manyika, J., & Miremadi, M. (2015, November). Four fundamentals of workplace automation. McKinsey Digital. https://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/four-fundamentals-of-workplace-automation.
    • This report argues that automation will lead to the redefinition of jobs rather than their replacement, and that this redefinition has occurred repeatedly during previous periods of rapid technological change. Adding to the conventional paradigm that low-skill, low-wage activities are most susceptible to automation, this report suggests that a significant percentage of the activities performed by even those in the highest-paid occupations (for example, financial planners, physicians, and senior executives) can be automated by current technology.
  • Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3-30.*
    • This article argues that while automation can substitute for human labor, it also complements it, increasing productivity and labor demand overall. Changes in technology may alter which jobs are available and what those jobs pay. The author concludes that automation should be thought of as replacing workers in performing routine, codifiable tasks while amplifying the advantage of workers in supplying problem-solving skills, adaptability, and creativity.
  • Dickens, W. T., Katz, L. F., Lang, K., & Summers, L. H. (1989). Employee crime and the monitoring puzzle. Journal of Labor Economics7(3), 331-347. https://doi.org/10.1086/298211
    • This paper investigates reasons why firms actually spend considerable resources trying to monitor for employee malfeasance, despite most economic theories of crime predicting that profit-maximizing firms should follow strategies of minimal monitoring with large penalties for employee crime. It finds that the most plausible explanations for firms’ spending and focus on monitoring of employees are legal restrictions on penalties in contracts, and the adverse impact of harsh punishment schemes on worker morale.
  • Doleac, J. L., & Hansen, B. (2016). Does “ban the box” help or hurt low-skilled workers? Statistical discrimination and employment outcomes when criminal histories are hidden (No. w22469). National Bureau of Economic Research.
    • New ‘ban the box’ (BTB) policies prevent employers from conducting criminal background checks until late in the job application process to improve employment outcomes for those with criminal records and reduce racial disparities in employment. This paper tests BTB’s effects, and finds that BTB policies actually decrease the probability of being employed by 5.1% for young, low-skilled black men, and by 2.9% for young, low-skilled Hispanic men. The paper argues that when an applicant’s criminal history is unavailable, employers still discriminate against demographic groups that they believe are likely to have a criminal record.
  • Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
    • This book systematically investigates the impacts of data mining, policy algorithms, and predictive risk models on poor and working-class people in America. It does this by outlining the life-and-death impacts of automated decision-making on public services through three case studies relating to welfare provision, homelessness, and child protection services in the US. The book concludes that we are still fighting the same civil rights problem of racialized and class-based inequalities.
  • Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254-280. https://doi.org/10.1016/j.techfore.2016.08.019
    • In this paper, the authors calculate probabilities of computerisation for 702 occupations using data about the task content of those jobs from the Department of Labor, and have artificial intelligence experts hand-code occupations for automation potential. The study estimates that 47% of US jobs are at high risk of automation within approximately twenty years. The article shows that wages and educational attainment exhibit a strong negative relationship with an occupation’s automation potential. (A minimal illustrative sketch of this expert-label-plus-classifier approach appears at the end of this chapter’s list.)
  • Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Eamon Dolan Books.*
    • This book discusses the concept of “ghost work”: work done behind the scenes by an invisible human labor force that gives the internet, and the services of big tech companies, the appearance of smooth and “intelligent” functioning through tasks such as flagging inappropriate content, proofreading, and more. The book explores problematic aspects of this growing sector, including the lack of labor-law protections, precarity, lack of benefits, and illegally low earnings.
  • Helm, S., Kim, S. H., & Van Riper, S. (2018). Navigating the ‘retail apocalypse’: A framework of consumer evaluations of the new retail landscape. Journal of Retailing and Consumer Services. https://doi.org/10.1016/j.jretconser.2018.09.015
    • This paper explores U.S. consumers’ evaluations of ongoing changes to the retail environment through content analysis of reader comments in response to articles on large-scale store closures, and online consumer interviews. The paper finds many consumers lamenting the disappearance of physical retailers, expecting negative consequences for themselves and society. However, other consumers are also accepting of a future with very few physical stores.
  • Kelley, M. R. (1990). New process technology, job design, and work organization: A contingency model. American Sociological Review, 55(2), 191-208. https://doi.org/10.2307/2095626
    • This paper aims to identify the conditions under which occupational skill upgrading occurs with technological change to answer the question of how workplaces that permit blue-collar occupations to take on higher skill responsibilities differ from those that do not. Data analyzed from a national survey of production managers in 21 industries reveals that the least complex organizations (small plant, small firm) tend to offer the greatest opportunities for skill upgrading, independent of techno-economic conditions.
  • Levy, F. (2018). Computers and populism: Artificial intelligence, jobs, and politics in the near term. Oxford Review of Economic Policy, 34(3), 393-417. https://doi.org/10.1093/oxrep/gry004
    • This paper examines the near-term future of work to ask whether job losses induced by artificial intelligence will increase the appeal of populist politics. The paper explains that computers and machine learning often automate the workplace tasks of blue-collar workers. Using the example of automation-related job losses in three industries (trucking, customer service, and manufacturing), the paper examines how candidates may pit ‘the people’ (truck drivers, call center operators, factory operatives) against ‘the elite’ (software developers, etc.), replicating the populist politics of the 2016 US presidential election.
  • Levy, K., & Barocas, S. (2018). Refractive surveillance: Monitoring customers to manage workers. International Journal of Communication, 12, 1166-1188.*
    • This article discusses ‘refractive surveillance,’ in which information collected about one group facilitates control over an entirely different group. The authors explore this dynamic in the context of retail stores, where collecting data about customers allows for new forms of managerial control over workers. Mechanisms enabling this include dynamic labor scheduling, new forms of evaluation, externalization of worker knowledge, and replacement through customer self-service.
  • Moradi, P. (2019). Race, ethnicity, and the future of work [Doctoral dissertation, Cornell University]. https://doi.org/10.31235/osf.io/e37cu
    • This study analyzes how occupational automation corresponds with racial and ethnic demographics. The paper finds that throughout American industrialization, non-White and immigrant workers shifted to low-wage, unskilled work because of the political and social limitations imposed upon these groups. While White workers are more heavily affected by automatability than other racial groups, the proportion of White workers in an occupation is negatively correlated with an occupation’s automatability. The paper offers a susceptibility-based approach to predicting employment outcomes from AI-driven automation.
  • Polanyi, M. (2009). The tacit dimension. University of Chicago Press.*
    • This book argues that tacit knowledge—tradition, inherited practices, implied values, and prejudgments—is a crucial part of scientific knowledge. This book challenges the assumption that skepticism, rather than established belief, lies at the core of scientific discovery. It concludes that all knowledge is personal, with the indispensable participation of the thinking being, and that even the so-called explicit knowing (or formal, or specifiable knowledge) is always based on personal mechanisms of tacit knowing.
  • Rogers, B. (2020). The law & political economy of workplace technological change. Harvard Civil Rights-Civil Liberties Law Review, 55. http://dx.doi.org/10.2139/ssrn.3327608*
    • This paper makes the case that automation is not a major threat to most jobs today, nor will it be in the near future. However, it points out that existing labor laws allow companies to leverage new technology to control workers, for example through enhanced monitoring. It argues that policymakers must expand the scope and stringency of companies’ duties toward their workers, or rewrite policies in ways that enable workers to push back against the introduction of new workplace technologies.
  • Rosenblat, A., Levy, K. E., Barocas, S., & Hwang, T. (2017). Discriminating tastes: Uber’s customer ratings as vehicles for workplace discrimination. Policy & Internet, 9(3), 256-279. https://doi.org/10.1002/poi3.153
    • This paper analyzes the Uber platform as a case study to explore how bias may creep into evaluations of drivers through consumer‐sourced rating systems, and draws on social science research to demonstrate how such bias emerges in other types of rating and evaluation systems. The paper argues that while companies are legally prohibited from making employment decisions based on certain characteristics of workers (e.g. race), their reliance on potentially biased consumer ratings to make material determinations may nonetheless lead to a disparate impact in employment outcomes.
  • Schneider, D., & Harknett, K. (2016). Schedule instability and unpredictability and worker and family health and wellbeing. Washington Center for Equitable Growth Working Paper Series. http://cdn.equitablegrowth.org/wp-content/uploads/2016/09/12135618/091216-WP-Schedule-instability-and-unpredictability.pdf
    • This paper describes an innovative approach to survey data collection from service-sector workers that allows for the collection of previously unavailable data on scheduling practices, health, and wellbeing. The authors then use these data to show that exposure to unstable and unpredictable scheduling practices is negatively associated with household financial security, worker health, and parenting practices.
  • Thomas, R. J. (1994). What machines can’t do: Politics and technology in the industrial enterprise. University of California Press.
    • This book explores the social and political dynamics that are an integral part of production technology through conducting over 300 interviews inside four successful manufacturing enterprises, from top corporate executives to engineers to workers and union representatives. The author urges managers to not put blind hopes into smarter machines but to find smarter ways to organize people, and argues against the popular idea that smart machines alone will lead to advancement.
  • Tippett, E., Alexander, C. S., & Eigen, Z. J. (2017). When timekeeping software undermines compliance. Yale Journal of Law and Technology, 19(1).
    • This article examines 13 commonly used electronic timekeeping programs to expose the ways in which they can erode wage-law compliance. Drawing on insights from the field of behavioral compliance, the authors explain how the software presents subtle cues that can encourage and legitimize wage theft by employers. The article examines the gaps in legislation that have created a regulatory vacuum in which timekeeping software has developed, and proposes reforms to encourage wage-law compliance across workplaces.
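
As referenced in the Frey and Osborne annotation above, the following is a minimal sketch of the kind of expert-label-plus-classifier pipeline that paper describes: experts hand-label a small set of occupations as automatable or not, a probabilistic classifier is fit on task-content features, and the fitted model assigns automation probabilities to the remaining occupations. This is an illustration under stated assumptions, not the authors’ code; the feature names and all data below are hypothetical stand-ins.

```python
# Illustrative sketch of an expert-label-plus-classifier pipeline in the
# style of Frey & Osborne (2017). All data are randomly generated stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical task-content scores per occupation (e.g., O*NET-style
# measures such as finger dexterity, originality, social perceptiveness).
n_labeled, n_unlabeled, n_features = 70, 632, 3
X_labeled = rng.uniform(0.0, 1.0, size=(n_labeled, n_features))
y_labeled = (X_labeled.mean(axis=1) < 0.5).astype(int)  # 1 = automatable (toy rule)
X_unlabeled = rng.uniform(0.0, 1.0, size=(n_unlabeled, n_features))

# Fit a Gaussian process classifier, the model family the paper reports
# using, on the expert-labeled subset of occupations.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), random_state=0)
gpc.fit(X_labeled, y_labeled)

# Predicted probability of computerisation for each unlabeled occupation;
# headline figures come from thresholding probabilities like these.
p_automation = gpc.predict_proba(X_unlabeled)[:, 1]
print(f"Share of occupations with p > 0.7: {(p_automation > 0.7).mean():.2f}")
```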

Chapter 15. AI as a Moral Right-Holder (John Basl and Joseph Bowen)⬆︎

  • Agar, N. (2019). How to treat machines that might have minds. Philosophy & Technology. https://doi.org/10.1007/s13347-019-00357-8
    • This research article examines the issue of how one should interact with machines that one has reason to believe could have minds. It is argued that one should approach interactions with such machines by assigning credence to judgements concerning whether or not they can think. It is suggested that machines that are capable of performing all intelligent human behaviour lend plausibility to the notion that machines can think.
  • Basl, J. (2013). The ethics of creating artificial consciousness. APA Newsletter on Philosophy and Computers, 13(1), 23-29.
    • This essay notes that research that aims to create artificial entities with conscious states might be unethical because it wrongs, or will likely wrong, the subjects of such research. If the subjects of artificial consciousness research end up possessing conscious states, then they are research subjects in the way that sentient non-human animals and human beings are research subjects. As a result, such artificially conscious research subjects should be afforded certain protections.
  • Basl, J. (2014). Machines as moral patients we shouldn’t care about (yet): The interests and welfare of current machines. Philosophy & Technology, 27(1), 79-96. https://doi.org/10.1007/s13347-013-0122-y
    • This paper discusses whether or not machines, as they currently exist, possess the requisite capacities to be considered moral patients. It is argued that current machines should be treated as if they lack the capacities that would give rise to psychological interests. Consequently, current machines are moral patients only if they possess non-psychological interests. After examining the most plausible type of non-psychological interests one might attribute to current machines, it is concluded that such machines are not moral patients.
  • Basl, J. (2014). What to do about artificial consciousness. In R. L. Sandler (Ed.), Ethics and emerging technologies (pp. 380-392). Palgrave Macmillan.
    • This chapter defends a capacity-based account of moral status, according to which the moral status of an entity is determined by its capacities. From this it follows that if an intelligent machine possesses cognitive and psychological capacities akin to those of human beings, then such entities are deserving of comparable moral status. Nevertheless, it is argued that it is unlikely that machines will possess cognitive and psychological capacities akin to those of human beings; moreover, even if they do, it will be difficult for human beings to discern such capacities and interests.
  • Basl, J. (2019). The death of the ethic of life. Oxford University Press.*
    • If one subscribes to an Ethic of Life, then one holds that all living things deserve some degree of moral concern. This book contends that the well-being of non-sentient beings is morally significant only insofar as it matters to sentient beings. It is argued that the Ethic of Life fails to distinguish artifacts from organisms. This provides one with reason to abandon the Ethic of Life.
  • Basl, J., & Sandler, R. (2013). The good of non-sentient entities: Organisms, artifacts, and synthetic biology. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 44(4), 697-705. https://doi.org/10.1016/j.shpsc.2013.05.017
    • This paper examines whether or not synthetic organisms have a good of their own and, consequently, are themselves deserving of moral consideration. Appealing to an account of teleology that explains the good of non-sentient organisms, it is argued that synthetic organisms also have a good of their own grounded in their teleological organization. However, this discussion has the consequence that traditional artifacts also have a good of their own.
  • Coeckelbergh, M. (2010). Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology, 12(3), 209-221. https://doi.org/10.1007/s10676-010-9235-5
    • Although robots of the near future will not meet the criteria for rights set out by deontological and utilitarian approaches, this paper highlights other conceptual resources that are available to confer some degree of moral consideration upon such robots. A novel argument is offered that employs a social-relational justification of moral consideration. This approach grants moral consideration within a dynamic relation between human beings and the entities under question.
  • Cruft, R. (2013). XI—Why is it disrespectful to violate rights? Proceedings of the Aristotelian Society, 113(2), 201-224. https://doi.org/10.1111/j.1467-9264.2013.00352.x*
    • Directed duties are duties that are owed to a particular person or group. This paper considers the manner in which directed duties are related to respect. It also works to make sense of the fact that directed duties are often justified independently of whether or not they do anything for those to whom the duties are owed.
  • Danaher, J. (2019). Welcoming robots into the moral circle: A defence of ethical behaviourism. Science and Engineering Ethics, 1-27. https://doi.org/10.1007/s11948-019-00119-x
    • This paper proposes a theory of ethical behaviorism, according to which robots can possess significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. It is argued that this performative threshold may not be beyond the reach of robots and that, if they have not done so already, robots may cross it in the future. The paper proposes a principle of procreative beneficence that governs the decision to create robots that possess moral status.
  • Griffin, J. (1986). Well-being: Its meaning, measurement and moral importance. Clarendon Press.*
    • This book aims to understand the notion of well-being. It examines the place of both reason and desire in our way of thinking about well-being. The book proceeds by reflecting on our ability to measure well-being, and how we should incorporate the notion into the realm of moral and political thought.
  • Gunkel, D. J. (2014). A vindication of the rights of machines. Philosophy & Technology, 27(1), 113-132. https://doi.org/10.1007/s13347-013-0121-z
    • This essay argues that artifacts, such as robots, can no longer be excluded legitimately from moral consideration. The essay does not attempt to accommodate the nature of such artifacts to requirements set out by moral philosophy; rather, it questions the systemic limitations of moral reasoning. As a result, beyond extending rights to machines, the essay scrutinizes the manner in which moral standing has been understood in the past.
  • Gunkel, D. J. (2018). Robot rights. MIT Press.
    • This book reflects upon the potential for robots and other technological artifacts to possess moral and legal standing. It is argued that none of the existing proposals concerning robot rights survive scrutiny. As a result, a new proposal is offered by appealing to the philosophy of Emmanuel Levinas and discussing the manner in which robots are experienced in their relationship to oneself.
  • Gunkel, D. J. (2018). The other question: Can and should robots have rights? Ethics and Information Technology, 20(2), 87-99. https://doi.org/10.1007/s10676-017-9442-4
    • This paper engages with the question of whether or not robots should have rights. In doing so, it examines how the terms “can” and “should” figure in discussions surrounding the is-ought problem. The paper turns its attention to the work of Emmanuel Levinas in order to reformulate the manner in which one asks about moral patiency in the first place. It discusses the view that moral consideration is conferred in the face of actual social relationships and interactions, rather than pre-determined ontological criteria or capability.
  • Johnson, D. G., & Verdicchio, M. (2018). Why robots should not be treated like animals. Ethics and Information Technology, 20(4), 291-301. https://doi.org/10.1007/s10676-018-9481-5
    • This paper contends that analogies between humanoid robots and animals are misleading in one’s efforts to understand the moral status of humanoid robots, legal liability, and the impact of human treatment of humanoid robots on the manner in which humans treat one another. As a result, analogies with animals do not provide a useful method of understanding the nature of robots, and responsible discourse concerning the nature of robots should be cautious in its appeal to analogies with animals.
  • Kramer, M. H. (2001). Getting rights right. In M. H. Kramer (Ed.), Rights, wrongs and responsibilities (pp. 28-95). Palgrave Macmillan.*
    • The Interest Theory holds that the essence of a right consists in the normative protection of some aspect(s) of the right-holder’s well-being. In contrast, the Will Theory claims that the essence of a right consists in the right holder’s opportunities to make normatively significant choices relating to the behaviour of others. This essay aims to clarify and develop the basic claims of the Interest Theory and the Will Theory in hopes of establishing the superiority of the former.
  • McGinn, C. (1999). The mysterious flame. Basic Books.*
    • This book examines the nature of consciousness. It argues that one can never truly “know” consciousness. That is, the human intellect is unequipped to understand the nature of consciousness.
  • Miller, L. F. (2015). Granting automata human rights: Challenge to a basis of full-rights privilege. Human Rights Review, 16(4), 369-391. https://doi.org/10.1007/s12142-015-0387-x
    • This paper examines whether or not human beings are required by morality to extend full human rights to humanlike automata. In examining this issue, the paper reflects on the ontological difference between human beings and automata, namely, that automata have a constructor and a given purpose. It is argued that human beings need not be under any moral obligation to confer full human rights to automata.
  • Neely, E. L. (2014). Machines and the moral community. Philosophy & Technology, 27(1), 97-111. https://doi.org/10.1007/s13347-013-0114-y
    • This paper offers an interest-based account for determining the moral status of an entity. That is, if a being has interests, then it is wrong to ignore those interests or to harm them in the absence of an acceptable overriding reason. On the basis of this view, it is concluded that conscious, self-aware machines and autonomous, intelligent machines should be considered moral patients.
  • Nyholm, S. (2020). Humans and robots: Ethics, agency, and anthropomorphism. Rowman & Littlefield Publishers.
    • This book examines emerging ethical issues concerning human beings, robots, and agency. In its discussion of robot rights, it is argued that it can sometimes make sense to treat robots with some degree of moral consideration; for instance, in cases where robots look and act like human or non-human animals. Nevertheless, robots are not themselves deserving of direct duties until they develop a human- or animal-like inner life.
  • Raz, J. (1986). The morality of freedom. Clarendon Press.*
    • This book discusses the nature of freedom and authority. It argues that a concern with autonomy underlies rights and the value of freedom. The book highlights the requirement that individuals have a multitude of valuable options from which to choose in order for autonomy to be achieved.
  • Schwitzgebel, E., & Garza, M. (2015). A defense of the rights of artificial intelligences. Midwest Studies in Philosophy, 39(1), 98-119. https://doi.org/10.1111/misp.12032
    • This paper provides a positive argument for the rights of artificially intelligent entities. Two principles of ethical AI design are offered; namely, (1) design AIs that tend to provoke reactions from users that accurately reflect the AIs’ real moral status, and (2) avoid designing AIs whose moral status is unclear. It is also argued that human beings would probably owe more moral consideration to human-grade artificial intelligences than is owed to human strangers.
  • Sumner, L. W. (1996). Welfare, happiness, and ethics. Clarendon Press.*
    • This book presents an original theory of welfare which closely connects welfare with happiness or life satisfaction. It provides a defence of welfarism, which argues that welfare is the only basic ethical value. That is, welfare is the only thing for which one has a moral reason to promote for its own sake.
  • Tavani, H. T. (2018). Can social robots qualify for moral consideration? Reframing the question about robot rights. Information, 9(4), 1-16. https://doi.org/10.3390/info9040073
    • This paper contends that the question of whether or not robots deserve rights needs to be reframed and refined, asking instead whether or not social robots qualify for moral consideration as moral patients. Social robots are understood as physically embodied robots that are socially intelligent and interact with humans in a manner similar to the way humans interact with one another. The paper appeals to the work of Hans Jonas in arguing for the conclusion that social robots are moral patients and, consequently, deserve moral consideration.
  • Thomson, J. J. (1990). The realm of rights. Harvard University Press.*
    • This book asks why one’s having rights is a morally significant fact about oneself. It is argued that a person’s having a right is reducible to a complex moral constraint. A central feature of this constraint is that, other things being equal, the right ought to be accorded. The book also discusses the trade-offs that serve to relieve one’s requirement to accord a right.

Chapter 16. Could You Merge with AI? Reflections on the Singularity and Radical Brain Enhancement (Cody Turner and Susan Schneider)⬆︎

  • Bostrom, N. & Roache, R. (2007). Ethical issues in human enhancement. In T. S. Petersen, J. Ryberg & C. Wolf (Eds.), New waves in applied ethics (pp. 120-152). Palgrave Macmillan. *
    • A survey of issues in human enhancement ethics. Schneider and Turner highlight the authors’ coverage of the therapy / enhancement distinction. As the authors point out, this distinction is often ambiguous, and some thinkers reject it altogether.
  • Bostrom, N. & Roache, R. (2011). Smart policy: Cognitive enhancement and the public interest. In J. Savulescu, R. T. Meulen & G. Kahane (Eds.), Enhancing human capacities (pp. 138-152). Wiley-Blackwell. *
    • This paper discusses the nature and ethics of cognitive enhancement. The authors address a number of related policy issues, including drug approval criteria, research funding, and regulation of access.
  • Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press. *
    • This book covers the history of artificial intelligence, paths to superintelligence, and forms the latter may take, including brain-computer interfaces. Bostrom then considers the prospect of an intelligence explosion, and several challenges posed by the control problem.
  • Buchanan, A. (2011). Beyond humanity? The ethics of biomedical enhancement. Oxford University Press.
    • This book addresses a number of issues in the context of human enhancement, including the therapy / enhancement distinction, human development, character concerns, human nature, conservatism, unintended bad consequences, moral status, and distributive justice. The author offers a general outlook that is, if not pro-enhancement, then anti-anti-enhancement.
  • Chalmers, D. J. (2016). The singularity: A philosophical analysis. In S. Schneider (Ed.), Science fiction and philosophy: From time travel to superintelligence (2nd ed., pp. 171-224). Wiley-Blackwell. *
    • This paper offers a comprehensive study of the singularity. The author explains the logic behind the singularity, as well as how it may be promoted – or not. He then discusses mind-uploading and personal identity, in the context of surviving in a post-singularity world.
  • Clark, A. & Chalmers, D. J. (1998). The extended mind. Analysis, 58(1), 7-19. http://dx.doi.org/10.1093/analys/58.1.7*
    • The authors’ extended mind hypothesis suggests that external artifacts and devices can play an active role in our mental processes, which has implications for how such devices are conceptualized as wrapped up in our very identities.
  • Fukuyama, F. (2002). Our posthuman future: Consequences of the biotechnology revolution. Picador.
    • This book contributes to the discussion of human enhancement ethics. The author argues that transhumanism is the world’s most dangerous idea, because tampering with human nature threatens to undermine the basis for human dignity and rights. This book reflects on the future of biotechnology, and how it might be regulated.
  • Gleiser, M. (2015). Welcome to your transhuman self. In J. Brockman (Ed.), What to think about machines that think: Today’s leading thinkers on the age of machine intelligence (pp. 54-55). Harper Perennial.
    • This paper reflects on the human-machine integration scenario. The author points out that this process of cyborgization is already underway, with cell phones and social media existing along the same spectrum as mechanical limbs and brain implants.
  • Hume, D. (1985). A treatise of human nature (E. C. Mossner, Ed.). Penguin Classics.
    • This book is notable for its chapter on personal identity. Hume expresses a skeptical view of personal identity, or the self, now known as bundle theory. Essentially, humans are collections of impressions, constantly in flux. There is no ‘I’ over and above these impressions which can be said to possess them.
  • Kagan, S. (2012). Death. Yale University Press.
    • This book is a survey of philosophical issues related to death, including, for our purposes, personal identity, and different criteria thereof, such as the soul, body, and mind. Kagan himself endorses the body criterion, but believes persistence of personality is what matters in survival. This distinction, due to Parfit (cited below), has interesting implications for some of the scenarios explored by Schneider and Turner. With mind-uploading, for example, it may be the case that one dies, but this does not matter.
  • Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Viking. *
    • Elaborates on exponential growth in science and technology, with a focus on the intersection of genetics, robotics, and nanotechnology. Kurzweil then anticipates how it will transform the human body, brain, and, more generally, our very way of life, on up to the mind-uploading scenario.
  • Locke, J. (1997). An essay concerning human understanding (R. Woolhouse, Ed.). Penguin Classics.*
    • Notable here for its chapter on personal identity. Locke presents a number of original thought experiments designed to test our intuitions about what we really are. He ultimately defends a psychological criterion of personal identity; in particular, psychological connectedness, with an emphasis on memory.
  • Nagel, T. (1979). Mortal questions. Cambridge University Press.
    • This book contains the classic essay, “What is it like to be a bat?” In this paper, Nagel characterizes consciousness in terms of ‘what-it-is-likeness’ and reflects on what would later be called the hard problem of consciousness.
  • Nietzsche, F. (2013). On the genealogy of morals: A polemic (M. A. Scarpitti, Trans.). Penguin Classics.
    • Nietzsche provides another take on the view that the self is an illusion, or grammatical fiction. There are actions, but no agents.

Chapter 17. Are Sentient AIs Persons? (Mark Kingwell)⬆︎

  • Anderson, S. L. (2016). Asimov’s “three laws of robotics” and machine metaethics. In S. Schneider (Ed.), Science fiction and philosophy: From time travel to superintelligence (2nd ed., pp. 290-307). Wiley-Blackwell.
    • This chapter argues that treating intelligent robots like slaves would be misguided. Such entities could, in principle, follow and advise on ethical principles better than most humans, and even warrant consideration for moral standing, or rights.
  • Basl, J. & Sandler, R. (2013). The good of non-sentient entities: Organisms, artifacts, and synthetic biology. Studies in History and Philosophy of Biological and Biomedical Sciences, 44(4), 697-705. http://dx.doi.org/10.1016/j.shpsc.2013.05.017
    • Basl and Sandler employ an etiological account of teleology to demonstrate that certain non-sentient entities can have a good. The authors do not mean for this notion of ‘good’ to be understood in a morally loaded sense, although it may contribute to the project of machine ethics with a teleological basis, and lay the groundwork for a broader conception of moral standing, or rights.
  • Bentham, J. (2018). An introduction to the principles of morals and legislation. Forgotten Books.
    • This is the first major work on classical utilitarianism, which provides one lens through which the ethical status of machines may be viewed. Bentham lays out the principle of utility, famously dismissing the idea of natural rights as ‘nonsense upon stilts.’ For him, moral standing is conferred not by the capacity to think or speak but by the capacity to suffer.
  • Chalmers, D. J. (1996). The conscious mind: In search of a fundamental theory. Oxford University Press.
    • Chalmers defends the view that consciousness is irreducibly subjective. Of particular interest here, he supports the possibility of artificial general intelligence, taking on Searle’s Chinese room argument, among other objections.
  • Deutsch, D. (2019). Beyond reward and punishment. In J. Brockman (Ed.), Possible minds: 25 ways of looking at AI (pp. 113-124). Penguin Press.
    • Deutsch argues that certain misconceptions about human thinking have led to misconceptions about machine thinking. He demonstrates the inadequacy of Bayesian updating approaches to artificial general intelligence, and the need to better understand creativity. Artificial general intelligence – machines with no specifiable functionality – is achievable, however, and such entities would be persons.
  • Dick, P. K. (1968). Do androids dream of electric sheep? Doubleday. *
    • A science fiction classic, following one bounty hunter’s pursuit of runaway androids. The novel raises philosophical issues, such as the possibility of empathic machines.
  • Dragan, A. (2019). Putting the human into the AI equation. In J. Brockman (Ed.), Possible minds: 25 ways of looking at AI (pp. 134-142). Penguin Press.
    • Highlights the importance of defining human-compatible AI in the context of the coordination problem and the value-alignment problem. Our relationship with intelligent machines should go both ways; that is, robots must model people, and people must model robots – properly.
  • Freud, S. (2003). The uncanny (D. McLintock, Trans.). Penguin Classics. *
    • Contains an essay by Freud of the same title, wherein he analyzes the concept of uncanniness. Freud discusses a number of uncanny motifs, such as the automaton.
  • Gleiser, M. (2015). Welcome to your transhuman self. In J. Brockman (Ed.), What to think about machines that think: Today’s leading thinkers on the age of machine intelligence (pp. 54-55). Harper Perennial.
    • This chapter reflects on the human-machine integration scenario. Gleiser points out that this process of cyborgization is already underway, with cell phones and social media existing along the same spectrum as mechanical limbs and brain implants.
  • Hayward, T. (2005). Constitutional environmental rights. Oxford University Press.*
    • This book makes the case for the human right to an adequate environment. This would be a right to nature, rather than a right of nature. One might consider a similar arrangement for some robots, or artificial intelligence systems, where rights concerning them are conceived as an extension of human rights. We already observe discussion about rights to technology, as with calls for a right to the Internet.
  • Johnson, D. G. & Verdicchio, M. (2018). Why robots should not be treated like animals. Ethics and Information Technology, 20, 291-301. https://doi.org/10.1007/s10676-018-9481-5
    • The authors suggest that the analogies sometimes drawn between animals and robots, in relation to how humans might think about interacting with the latter, are misleading. For example, the authors do not believe robots can suffer, which has implications for moral status and rights.
  • Kant, I. (1993). Grounding for the metaphysics of morals (3rd ed., J. W. Ellington, Trans.). Hackett Publishing Company. (Original work published 1785)*
    • A central work in deontological ethics, as well as moral philosophy and rights theory more generally. This work contains arguments for the dignity and sovereignty of all moral agents.
  • Korsgaard, C. M. (2018). Fellow creatures: Our obligations to the other animals. Oxford University Press.
    • Korsgaard challenges Kant’s view that our obligations to non-human animals are indirect – say, to cultivate certain morally appropriate sensibilities. All sentient creatures have a good and, in a sense, warrant treatment as ends-in-themselves. Korsgaard’s account suggests how strands of Aristotelian and Kantian thought might imply regard for conscious machines.
  • Kymlicka, W. (1995). Multicultural citizenship: A liberal theory of minority rights. Oxford University Press.*
    • Liberal theory commonly construes rights as individualistic. Kymlicka argues that this tradition is compatible with a more collective understanding of them. These might concern language rights, group representation, or religious education – not at the level of particular people, but entire identities. Notice that, as with rights attributed to animals, or the environment, this is a case where the bearer of rights is unable to explicitly claim them, which may also apply to some artefacts, and robots.
  • Locke, J. (1980). Second treatise of government (C. B. Macpherson, Ed.). Hackett Publishing Company.*
    • A canonical source on the social contract and natural rights, which may influence how we think about their application to artificial intelligence. Pivotal in the development of liberal norms, the text defends a basis for personal freedom and private property, as well as ownership of one’s body and labour.
  • Merleau-Ponty, M. (2012). Phenomenology of perception (D. A. Landes, Trans.). Routledge.
    • In this text in the tradition of French phenomenology and existentialism, Merleau-Ponty elaborates on the primacy of perception. His discussion includes embodied phenomenology, which has influenced subsequent thinking about embodied cognition and its relevance to artificial intelligence.
  • Pinker, S. (2015). Thinking does not imply subjugating. In J. Brockman (Ed.), What to think about machines that think: Today’s leading thinkers on the age of machine intelligence (pp. 5-8). Harper Perennial.
    • Pinker explains how a naturalistic, computational theory of reason opens the door to thinking machines. However, our fear of this prospect is unfounded, insofar as it stems from the projection of a malevolent, domineering psychology onto the very concept of intelligence.
  • Robertson, G. (2013). Crimes against humanity: The struggle for global justice (4th ed.). New Press.*
    • Includes numerous examples of contemporary crimes against humanity. Relevant here for the distinction between these and war crimes.
  • Scanlon, T. M. (1998). What we owe to each other. Belknap Press.
    • Presents a modern form of contractualism. In the first part of the book, Scanlon argues for reasons fundamentalism, as well as against consequentialism and hedonism. In the second part of the book, he provides an account of wrongness as that which one could reasonably reject. Scanlon suggests entities that cannot speak for themselves may nevertheless be accommodated by his system through advocates. Humans could, perhaps, assume the role of trustee to represent the interests of machines.
  • Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417-457. http://dx.doi.org/10.1017/S0140525X00005756*
    • Includes Searle’s Chinese room argument, the upshot of which is that programs run by digital computers cannot be shown to possess understanding, or consciousness. The argument opposes functionalism and computationalism in philosophy of mind, as well as the possibility of artificial general intelligence.
  • Shelley, M. (2013). Frankenstein; or, The modern Prometheus (M. Hindle, Ed.). Penguin Classics. (Original work published 1818)*
    • A gothic horror and science fiction classic, Frankenstein depicts a scientist by that same name, who succeeds in creating intelligent life.
  • Singer, P. (2009). Animal liberation (Updated edition). HarperCollins Publishers.*
    • A major contribution to the animal liberation movement. Singer’s argument for the equality of animals rests not on some conception of rights, but a preference utilitarian perspective. Exemplifies the theme of our expanding moral circle, and how it may grow to include conscious machines.
  • Stamos, D. N. (2016). The myth of universal human rights: Its origin, history, and explanation, along with a more humane way. Routledge.
    • Engages in an evolutionary debunking of universal human rights. Stamos develops the idea that natural selection reveals the category of ‘human’ to be an unstable one.
  • Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Alfred A. Knopf.
    • Covers the topics of intelligence, goal-directedness, and the future of artificial intelligence. Tegmark proposes a theory of consciousness according to which subjective experience is a matter of information being processed in a particular kind of way. He places this in the context of a broadly utilitarian ethic, which ascribes moral standing to conscious machines.
  • United Nations. (1948, December 10). Universal declaration of human rights. https://www.un.org/en/universal-declaration-human-rights/*
    • A significant 20th century document on the establishment of universal human rights. Its 30 articles were adopted under United Nations Resolution 217 in Paris, on December 10th, 1948.

Chapter 18. Autonomy (Michael Wheeler)⬆︎

  • Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12(3), 251-261.*
    • This paper surveys ethical disputes, considers the possibility of a ‘moral Turing Test,’ and assesses the computational difficulties accompanying the different types of approach. Human-like performance, which is prone to include immoral actions, may not be acceptable in machines, but moral perfection may be computationally unattainable. The risks posed by autonomous machines ignorantly or deliberately harming people and other sentient beings are great. The development of machines with enough intelligence to assess the effects of their actions on sentient beings and act accordingly may ultimately be the most important task faced by the designers of artificially intelligent automata.
  • Arkin, R. C. (2010). The case for ethical autonomy in unmanned systems. Journal of Military Ethics, 9(4), 332-341.*
    • The underlying thesis of the research in ethical autonomy for lethal autonomous unmanned systems is that they will potentially be capable of performing more ethically on the battlefield than human soldiers do. In this article, this hypothesis is supported by ongoing and foreseen technological advances and, perhaps equally importantly, by an assessment of the fundamental ability of human war fighters in today’s battlespace. If this goal of better-than-human performance is achieved, even if still imperfect, it can result in a reduction in non-combatant casualties and property damage consistent with adherence to the Laws of War as prescribed in international treaties and conventions, and is thus worth pursuing vigorously.
  • Asaro, P. (2008). How just could a robot war be? In P. Brey, A. Briggle & K. Waelbers (Eds.), Current issues in computing and philosophy (pp. 50-64). Ios Press.*
    • This paper considers the fundamental issues of justice involved in the application of autonomous and semi-autonomous robots in warfare, beginning with an analysis of how robots may fit into the framework of just war theory. It considers how robots, “smart” bombs, and other autonomous technologies might challenge the principles of just war theory, and how international law might be designed to regulate them. It concludes that deep contradictions arise in the principles intended to govern warfare and our intuitions regarding the application of autonomous technologies to war fighting.
  • Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2018). The moral machine experiment. Nature, 563(7729), 59–64.*
    • To address the challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour, the authors deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. Here, the authors describe the results of this experiment. The paper summarizes global moral preferences; documents individual variations in preferences based on respondents’ demographics; and reports cross-cultural ethical variation, uncovering three major clusters of countries. Finally, the authors argue that these differences correlate with modern institutions and deep cultural traits.
  • Boden, M. A. (1996). Autonomy and artificiality. In M. A. Boden (Ed.) The philosophy of artificial life (pp. 95-107). Oxford University Press.*
    • This chapter appears in a volume in the Oxford Readings in Philosophy series offering a selection of important philosophical work in the interdisciplinary area of artificial life. Artificial life research seeks to synthesize the characteristics of life by artificial means, particularly employing computer technology. The essays explore such themes as the nature of life, the relation between life and mind, and the limits of technology.
  • Boden, M. A. (2016). AI: Its nature and future. Oxford University Press.
    • This book describes how research in artificial intelligence has provided fruitful results in robotics and theoretical biology and covers the history of the increasingly specialized field of AI, highlighting its successes and looking towards its future. Finally, it argues that AI has been valuable in helping to understand the mental processes of memory, learning and language for living creatures.
  • Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.*  
    • This book argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists, and the outcome could be an existential catastrophe for humans.
  • Coeckelbergh, M. (2013). Drones, information technology, and distance: Mapping the moral epistemology of remote fighting. Ethics and Information Technology, 15(2), 87-98.
    • This paper argues that drone fighting, like other long-range fighting, creates epistemic and moral distance in so far as ‘screen fighting’ implies the disappearance of the vulnerable face and body of the opponent and thus removes moral-psychological barriers to killing. However, the paper also argues that this influence is at least weakened by current surveillance technologies, which make possible a kind of ‘empathic bridging’ by which the fighter’s opponent on the ground is re-humanized, re-faced, and re-embodied. The paper asserts that ‘mutation’ or unintended ‘hacking’ of the practice is a problem for drone pilots and for those who order them to kill, but that revealing its moral-epistemic possibilities opens up new avenues for imagining morally better ways of technology-mediated fighting.
  • Dennett, D. C. (1984). Elbow room: The varieties of free will worth wanting. MIT Press.*
    • This book argues that classical formulations of the free will problem in philosophy depend on misuses of imagination, and the author disentangles the philosophical problems of real interest from the “family of anxieties” they get enmeshed in: imaginary agents, bogeymen, and dire prospects that seem to threaten our freedom. The author examines the problem of how anyone can ever be guilty, and what the rationale is for holding people responsible and even, on occasion, punishing them.
  • Gunkel, D. J. (2017). Mind the gap: Responsible robotics and the problem of responsibility. Ethics and Information Technology. https://doi.org/10.1007/s10676-017-9428-2
    • This essay responds to the question concerning robots and responsibility by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. The essay considers three instances where recent innovations in robotics challenge this standard operating procedure by opening gaps in the usual way of assigning responsibility. Finally, the essay concludes by evaluating the three different responses—instrumentalism 2.0, machine ethics, and hybrid responsibility—that have been made in the face of these difficulties, in an effort to map out the opportunities and challenges of and for responsible robotics.
  • Heyns, C. (2017). Autonomous weapons in armed conflict and the right to a dignified life: An African perspective. South African Journal on Human Rights, 33(1), 46–71.*
    • This article argues that the question that will haunt the future debate over autonomous weapons is: What if technology develops to the point where it is clear that fully autonomous weapons surpass human targeting, and can potentially save many lives? Would human rights considerations in such a case not militate for the use of autonomous weapons, instead of against it? This article argues that the rights to life and dignity demand that, even under such circumstances, full autonomy in force delivery should not be allowed. The article emphasizes the importance placed on the concept of a ‘dignified life’ in the African human rights system.
  • Lin, P. (2016). Why ethics matters for autonomous cars. In M. Maurer, C. Gerdes, B. Lenz, & H. Winner (Eds.), Autonomous driving: Technical, legal and social aspects (pp. 69–85). Springer.*
    • This chapter explains why ethics matters for autonomous road vehicles, looking at the most urgent area of their programming. The chapter acknowledges that as nearly all of this work is still in front of the industry, the questions raised do not have any definitive answers at such an early stage of the technology.
  • Mindell, D. A. (2015). Our robots, ourselves: Robotics and the myths of autonomy. Penguin.*
    • This book argues that the stark lines we’ve drawn between human and not human, manual and automated, are not helpful for understanding our relationship with robotics. The book clarifies misconceptions about the autonomous robot, offering instead a hopeful message about what the author calls “rich human presence” at the center of the technological landscape we are now creating.
  • Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., & Petersen, S. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529–533.*
    • In this paper, the authors use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. The research demonstrates that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture, and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent capable of learning to excel at a diverse array of challenging tasks (a toy sketch of the Q-learning update underlying this agent appears after this chapter’s list).
  • Niker, F., Reiner, P. B., & Felsen, G. (2018). Updating ourselves: Synthesizing philosophical and neurobiological perspectives on incorporating new information into our worldview. Neuroethics, 11(3), 273–282.*
    • This paper argues for the importance, to theories of autonomous agency, of the capacity to appropriately adapt our values and beliefs to changing circumstances in light of relevant experiences and evidence. It presents a plausible philosophical account of this process that is generally applicable to theories about the nature of autonomy, internalist and externalist alike. The paper then evaluates this account by providing a model for how the incorporation of values might occur in the brain, one inspired by recent theoretical and empirical advances in our understanding of the neural processes by which our beliefs are updated by new information.
  • Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., & Hadsell, R. (2016). Progressive neural networks. arXiv preprint arXiv:1606.04671.*
    • Learning to solve complex sequences of tasks, while both leveraging transfer and avoiding catastrophic forgetting, remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: these networks are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. The paper evaluates this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games) and shows that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, the paper asserts that transfer occurs at both low-level sensory and high-level control layers of the learned policy (a minimal sketch of such lateral connections appears after this chapter’s list).
  • Santoni de Sio, F., & Van den Hoven, J. (2018). Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI, 5(15). https://doi.org/10.3389/frobt.2018.00015*
    • This paper lays the foundation of a philosophical account of meaningful human control, based on the concept of “guidance control” as elaborated in the philosophical debate on free will and moral responsibility. Following the ideals of “Responsible Innovation” and “Value-sensitive Design,” the paper’s account of meaningful human control is cast in the form of design requirements. It identifies two general necessary conditions to be satisfied for an autonomous system to remain under meaningful human control: first, a “tracking” condition, according to which the system should be able to respond to both the relevant moral reasons of the humans designing and deploying the system and the relevant facts in the environment in which the system operates; second, a “tracing” condition, according to which the system should be designed in such a way as to grant the possibility to always trace back the outcome of its operations to at least one human along the chain of design and operation.
  • Sharkey, A. (2019). Autonomous weapons systems, killer robots and human dignity. Ethics and Information Technology, 21(2), 75–87.*
    • This paper critically examines the relationship between human dignity and Autonomous Weapon Systems (AWS). Three main types of objection to AWS are identified: (i) arguments based on technology and the ability of AWS to conform to international humanitarian law; (ii) deontological arguments based on the need for human judgement and meaningful human control, including arguments based on human dignity; and (iii) consequentialist arguments about their effects on global stability and the likelihood of going to war. An account is provided of the claims made about human dignity and AWS, of the criticisms of these claims, and of the several meanings of ‘dignity’. It is concluded that although there are several ways in which AWS can be said to be against human dignity, they are not unique in this respect.
  • Sharkey, N. (2012). Killing made easy: From joysticks to politics. In P. Lin, G. Bekey, and K. Abney (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 111-128). MIT Press.*
    • This chapter provides an overview of novel war technologies, which make killing at a distance easier than ever before. The author argues that the current ethical guidelines the United States government has adopted do not sufficiently address the ethical concerns raised by such technologies. Furthermore, the chapter argues that international ethical guidelines for fully autonomous killer robots are urgently needed.
  • Sharkey, N. (2009). Death strikes from the sky: The calculus of proportionality. IEEE Technology and Society Magazine, 28(1), 16–19.*
    • The use of unmanned aerial vehicles (UAVs) in the conflict zones of Iraq and Afghanistan for both intelligence gathering and “decapitation” attacks has been heralded as an unprecedented success by U.S. military forces. This article argues that there is a danger of over-trusting and overreaching the technology, particularly with respect to protecting innocents in war zones, and that such deployment involves serious ethical issues and pitfalls. The article argues that it is time to reassess the meanings of discrimination and proportionality in the deployment of UAVs in 21st-century warfare.
  • Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.*
    • This paper considers the ethics of the decision to send artificially intelligent robots into war by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally be described as a war crime, arguing that no current answer to this question is ultimately satisfactory. The paper argues that it is a necessary condition for fighting a just war, under the principle of jus in bello, that someone can justly be held responsible for deaths that occur in the course of the war; as this condition cannot be met in relation to deaths caused by an autonomous weapon system, it would be unethical to deploy such systems in warfare.
  • Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., & Fergus, R. (2013). Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.*
    • This paper reports two counterintuitive properties of deep neural networks. First, the authors find that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis. This suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, the authors find that deep neural networks learn input-output mappings that are discontinuous to a significant extent, so that imperceptibly small perturbations of an input can change the network’s prediction (a toy illustration appears after this chapter’s list).
  • Vamplew, P., Dazeley, R., Foale, C., Firmin, S., & Mummery, J. (2018). Human-aligned artificial intelligence is a multiobjective problem. Ethics and Information Technology, 20(1), 27–40.
    • This article argues that ethical frameworks for AI which consider multiple potentially conflicting factors can be represented as utility functions, but that the widely used Maximum Expected Utility (MEU) paradigm provides insufficient support for such multiobjective decision-making. The article argues that a Multiobjective Maximum Expected Utility paradigm based on the combination of vector utilities and non-linear action selection can overcome many of the issues which limit MEU’s effectiveness in implementing aligned AI. The article examines existing approaches to multiobjective AI and identifies how these can contribute to the development of human-aligned intelligent agents (a toy example of non-linear multiobjective action selection appears after this chapter’s list).
  • Yudkowsky, E. (2006). Artificial intelligence as a positive and negative factor in global risk. In N. Bostrom & M. M. Cirkovic (Eds.), Global Catastrophic Risks (pp. 308–345). Oxford University Press.*
    • This paper argues that the greatest danger of artificial intelligence is that individuals have a false understanding of it. Specifically, the paper argues that our tendency to anthropomorphize AI limits truly understanding it.
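The short Python sketches below were written for this bibliography; they are illustrations, not code drawn from the cited works. First, for Mnih et al. (2015): a minimal tabular version of the Q-learning update that the deep Q-network scales up by approximating the Q-function with a neural network over raw pixels. All names, constants, and the toy transition are assumptions.

```python
# Minimal tabular Q-learning sketch (illustrative; not code from Mnih et al.).
# A deep Q-network replaces the table Q with a neural network over pixels.
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))        # tabular stand-in for the network
alpha, gamma = 0.1, 0.99                   # learning rate and discount factor

def q_update(s, a, r, s_next):
    """Move Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a')."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

# Toy transition: in state 0, action 1 earns reward 1.0 and leads to state 3.
q_update(0, 1, 1.0, 3)
print(Q[0])                                # expected: [0.  0.1]
```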
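Second, for Rusu et al. (2016): a hedged sketch of a lateral connection between two network “columns.” Column A is frozen after learning an earlier task; column B is trained on a new task and additionally receives column A’s hidden activations through an adapter. The two-layer NumPy network, its shapes, and the adapter name U2 are assumptions made for illustration.

```python
# Illustrative lateral connection between two columns (assumptions mine; the
# paper trains deep RL agents, not this toy two-layer NumPy network).
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# Column A: trained on an earlier task, now frozen (never updated again).
W1_a, W2_a = rng.normal(size=(8, 4)), rng.normal(size=(3, 8))

# Column B: fresh weights for the new task, plus a lateral adapter U2 that
# feeds column A's layer-1 features into column B's layer 2.
W1_b, W2_b = rng.normal(size=(8, 4)), rng.normal(size=(3, 8))
U2 = rng.normal(size=(3, 8))

def forward_task_b(x):
    h1_a = relu(W1_a @ x)                  # frozen features: no forgetting
    h1_b = relu(W1_b @ x)                  # new task's own features
    return W2_b @ h1_b + U2 @ h1_a         # output reuses prior knowledge laterally

print(forward_task_b(rng.normal(size=4)))
```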
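Third, for Szegedy et al. (2013): the paper finds its perturbations with box-constrained L-BFGS on deep networks; the sketch below instead uses projected gradient steps on a toy linear classifier, which suffices to show the core phenomenon that a perturbation bounded by a tiny eps per feature can flip a prediction. The classifier, the arranged margin, and eps are all invented for illustration.

```python
# Toy adversarial perturbation (illustrative; the paper uses box-constrained
# L-BFGS on deep networks, not gradient steps on a linear model).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)                      # weights of a toy linear classifier
x = rng.normal(size=100)
x -= ((w @ x) - 0.5) / (w @ w) * w            # arrange a small positive margin: w @ x = 0.5

eps, lr = 0.01, 0.01                          # max per-feature change; step size
x_adv = x.copy()
for _ in range(200):
    x_adv -= lr * w                           # step against the (+1) label's margin
    x_adv = np.clip(x_adv, x - eps, x + eps)  # stay inside the eps-box around x

print(np.sign(w @ x), np.sign(w @ x_adv))     # expected: 1.0 -1.0 (the label flips)
```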
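Finally, for Vamplew et al. (2018): a hedged sketch of the contrast the article draws. With vector-valued expected utilities, a non-linear selection rule (here, maximize task reward only among actions whose safety estimate clears a threshold) can encode constraints that a single scalar utility under MEU cannot express. The utilities and the threshold are invented values, not the authors’ algorithm.

```python
# Non-linear multiobjective action selection (illustrative values only).
import numpy as np

expected_utility = np.array([    # rows: actions; columns: [task_reward, safety]
    [10.0, 0.20],                # highest reward, but unsafe
    [ 6.0, 0.90],
    [ 4.0, 0.95],
])
safety_threshold = 0.8

def select_action(u, threshold):
    safe = u[:, 1] >= threshold                  # thresholding is the non-linear step
    if not safe.any():
        return int(np.argmax(u[:, 1]))           # nothing safe: pick the safest action
    candidates = np.flatnonzero(safe)
    return int(candidates[np.argmax(u[candidates, 0])])

print(select_action(expected_utility, safety_threshold))  # 1, not the reward-maximizing 0
```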

Chapter 19. Troubleshooting AI and Consent (Meg Leta Jones and Elizabeth Edenberg)⬆︎

  • Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. John Wiley & Sons.
    • This book argues that automation, far from being a sinister story of racist programmers scheming on the dark web, has the potential to hide, speed up, and deepen discrimination while appearing neutral and even benevolent when compared to the racism of a previous era. Presenting the concept of the “New Jim Code,” the author examines how a range of discriminatory designs encode inequity by explicitly amplifying racial hierarchies; by ignoring but thereby replicating social divisions; or by aiming to fix racial bias but ultimately doing quite the opposite. The book makes the case for race itself as a kind of technology, designed to stratify and sanctify social injustice in the architecture of everyday life.
  • Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.*  
    • This book argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists, and the outcome could be an existential catastrophe for humans.
  • Brayne, S. (2017). Big data surveillance: The case of policing. American Sociological Review, 82(5), 977–1008.
    • This article examines the intersection of two structural developments: the growth of surveillance and the rise of “big data.” Drawing on observations and interviews conducted within the Los Angeles Police Department, the paper offers an empirical account of how the adoption of big data analytics does—and does not—transform police surveillance practices. It argues that the adoption of big data analytics facilitates amplifications of prior surveillance practices and fundamental transformations in surveillance activities.
  • Breen, S., Ouazzane, K., & Patel, P. (2020). GDPR: Is your consent valid? Business Information Review, 37(1), 19–24.
    • This article explores the philosophical background of consent, examines the circumstances that served as the point of departure for the debate on consent, and attempts to develop an understanding of consent in the context of the growing influence of information systems and the data-driven economy. The article argues that the General Data Protection Regulation (GDPR) has gone further than any other regulation or law to date in developing an understanding of consent to address personal data and privacy concerns.
  • Bridges, K. M. (2017). The poverty of privacy rights. Stanford University Press.*
    • This book argues that poor mothers in America have been deprived of the right to privacy. Presenting a holistic view of how the state intervenes in all facets of poor mothers’ privacy, the author argues that the Constitution has not been interpreted to bestow these women with family, informational, and reproductive privacy rights. The book further argues that until cultural narratives that equate poverty with immorality are disrupted, poor mothers will continue to be denied this right.
  • Broussard, M. (2018) Artificial unintelligence: How computers misunderstand the world. MIT Press.*
    • Making a case against technochauvinism, the belief that technology is always the solution, this book argues that social problems will not inevitably retreat before a digitally enabled Utopia. The book argues that understanding the fundamental limits of technological capabilities will help the public to make better ethical choices concerning its implementation.
  • Browne, S. (2015). Dark matters: On the surveillance of blackness. Duke University Press.*
    • This book argues that contemporary surveillance technologies and practices are informed by the long history of racial formation and by the methods of policing black life under slavery, such as branding, runaway slave notices, and lantern laws. Placing surveillance studies into conversation with the archive of transatlantic slavery and its afterlife, the book draws from black feminist theory, sociology, and cultural studies. The book asserts that surveillance is both a discursive and material practice that reifies boundaries, borders, and bodies around racial lines, so much so that the surveillance of blackness has long been, and continues to be, a social and political norm. 
  • Ferguson, A. G. (2017) The rise of big data policing: Surveillance, race, and the future of law enforcement. NYU Press.*
    • This book provides an overview of new technologies used in policing and argues for increased public awareness of the consequences of big data surveillance as a law enforcement tool. The book argues that technologies may distort constitutional protections but may also improve police accountability and remedy underlying socio-economic risk factors that encourage crime.
  • Giannopoulou, A. (2020). Algorithmic systems: The consent is in the detail? Internet Policy Review, 9(1). https://doi.org/10.14763/2020.1.1452
    • This article examines the transformation of consent in order to assess how the concept itself, as well as the applied models of consent, can be reconciled not only with current data protection normative frameworks but also with algorithmic processing technologies. This pressing task of safeguarding a fundamental aspect of individual control over personal data in the algorithmic era is interlinked with the practical implementations of consent in the technology used, with adopted interpretations of the concept of consent, with the scope of application of personal data, and with the obligations these frameworks enshrine.
  • Jesus, V. (2020). Towards an accountable web of personal information: The web-of-receipts. IEEE Access, 8, 25383–25394. https://doi.org/10.1109/ACCESS.2020.2970270
    • This paper reviews the current state of consent and ties it to a problem of accountability. The paper argues for a different approach to how the Web of Personal Information operates: an accountable Web, in the form of Personal Data Receipts, able to protect both individuals and organizations.
  • Kim, N. S. (2019). Consentability: Consent and its limits. Cambridge University Press.*
    • This book analyzes the meaning of consent, introduces a consentability framework, and suggests ways to improve the conditions of consent and reduce opportunism. The book considers activities in three categories: first, self-directed activities; second, activities that have to do with a person’s bodily integrity; and third, novel procedures or cutting-edge experiments, raising the question of whether people should be allowed to consent to something that has never been done before and about whose potential consequences little is known.
  • Miller, F. G., & Wertheimer, A. (2010). The ethics of consent: Theory and practice. Oxford University Press.*
    • This book assembles the contributions of a distinguished group of scholars concerning the ethics of consent in theory and practice. Part One addresses theoretical perspectives on the nature and moral force of consent, and its relationship to key ethical concepts such as autonomy and paternalism. Part Two examines consent in a broad range of contexts, including sexual relations, contracts, selling organs, political legitimacy, medicine, and research.
  • Müller, A., & Schaber, P. (2018). The Routledge handbook of the ethics of consent. Routledge.*
    • This handbook is divided into five main parts: general questions, normative ethics, legal theory, medical ethics, and political philosophy. This book examines debates and problems in these fields including: the nature and normative importance of consent, paternalism, exploitation and coercion, privacy, sexual consent, consent and criminal law, informed consent, organ donation, clinical research, and consent theory of political obligation and authority.
  • Norval, C., & Henderson, T. (2019). Automating dynamic consent decisions for the processing of social media data in health research. Journal of Empirical Research on Human Research Ethics. https://doi.org/10.1177/1556264619883715
    • This article presents an exploratory user study (n = 67) in which the authors find that they can predict the appropriate flow of health-related social media data with reasonable accuracy, while minimizing undesired data leaks. The article then deconstructs the findings of this study, identifying and discussing a number of real-world implications if such a technique were put into practice.
  • Papadimitriou, S., Mougiakou, E., & Virvou, M. (2019). Smart educational games and consent under the scope of General Data Protection Regulation. In 2019 10th International Conference on Information, Intelligence, Systems and Applications (IISA) (pp. 1–8). IEEE.
    • This article focuses on the General Data Protection Regulation’s principle of consent to personal data processing and seeks a balance between gaming amusement, educational benefits, and regulatory compliance. The article combines legal theory and computer science in order to propose applicable solutions, in the form of guidelines, for gaming stakeholders in general and educational gaming stakeholders in particular.
  • Pasquale, F. (2018). The black box society. Harvard University Press.*
    • This book exposes how powerful interests abuse secrecy for profit and explains ways to rein them in. It argues that demanding transparency is only the first step toward individuals having control over how big data affects their lives, and that an intelligible society would ensure that the key decisions of its most important firms are fair, non-discriminatory, and open to criticism.
  • Rule, J. B. (2007). Privacy in peril: How we are sacrificing a fundamental right in exchange for security and convenience. Oxford University Press.*
    • This book examines how personal data made available to virtually any organization for virtually any purpose is apt to surface elsewhere, applied to utterly different purposes. The book argues that as long as individuals willingly accept the pursuit of profit or cutting government costs as sufficient reason for intensified scrutiny over their lives, then privacy will remain endangered.
  • Sawchuk, K. (2019). Private parts: Aging, AI, and the ethics of consent in subscription-based economies. Innovation in Aging, 3(1). https://doi.org/10.1093/geroni/igz038.082
    • This paper explores Artificial Intelligence (AI) as a technological design offered to assist elder care, based on tracking individual behavior amassed in databases that are given predictive value through algorithm-identified normative patterns. Drawing examples from ethnographic research conducted at the 2019 Consumer Electronics Show, the paper focuses on the ethical dilemmas of privacy, security, consent, and identity in home surveillance systems and the financialization of personal data in AI subscription-based services. The paper argues that the subscription-based economy exploits older individuals by sharing their lifestyle profiles, health information, economic status, and consumer preferences within powerful corporate networks such as Google and Amazon.
  • Thorstensen, E. (2018, July). Privacy and future consent in smart homes as assisted living technologies. In International Conference on Human Aspects of IT for the Aged Population (pp. 415-433). Springer.
    • With the advent of the General Data Protection Regulation (GDPR), there are clear rules demanding consent to automated decision-making regarding health. This article opens up some of the possible dilemmas at the intersection of the smart-home ambition and the GDPR, attending through a future case to the possible trade-offs between privacy and well-being and to the learning goals of a future smart home with health detection systems, and presents different approaches to advancing consent.
  • Ytre-Arne, B., & Das, R. (2019). An agenda in the interest of audiences: Facing the challenges of intrusive media technologies. Television & New Media20(2), 184-198.
    • This article formulates a five-point agenda for audience research, drawing on implications arising out of a systematic foresight analysis exercise on the field of audience research, conducted between 2014 and 2017 by the research network Consortium on Emerging Directions in Audience Research (CEDAR). The agenda includes substantive and intellectual priorities concerning intrusive technologies, critical data literacies, labour, co-option, and resistance, and argues for the need for research on these matters in the interest of audiences.

Chapter 20. Is Human Judgment Necessary? Artificial Intelligence, Algorithmic Governance, and the Law (Norman W. Spaulding)⬆︎

  • Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1-13. https://doi.org/10.1080/1369118X.2016.1216147
    • This article discusses algorithms from the perspective of the social sciences. First, Beer analyzes the issue of social power as it relates to algorithms. Second, Beer focuses on how the notion of an algorithm is itself conceived.
  • Danaher, J. (2019). The rise of the robots and the crisis of moral patiency. AI & Society, 34(1), 129–136. https://doi.org/10.1007/s00146-017-0773-9
    • Danaher argues that the rise of robots and artificial intelligence is likely to create a crisis of moral patiency, making humans less willing and able to act in the world as moral agents. The consequences of this have dangerous implications for politics and the social world.  
  • Diakopoulos, N. (2015). Algorithmic accountability. Digital Journalism, 3(3), 398–415. https://doi.org/10.1080/21670811.2014.976411
    • This article examines algorithmic accountability reporting as a mechanism for elucidating the power structures and biases that computational artifacts perpetuate in society. It uses five cases of algorithmic accountability reporting, based on journalistic reverse-engineering strategies, to provide insight into method and application in the field of journalism. It also assesses transparency models on a broader scale.
  • Epstein, R., Roberts, G., & Beber, G. (Eds.). (2008). Parsing the Turing test: Philosophical and methodological issues in the quest for the thinking computer. Springer.*
    • This edited volume features psychologists, computer scientists, philosophers, and programmers who examine the philosophical and methodological issues surrounding the search for true artificial intelligence. Questions explored include “Will computers and robots ever think and communicate the way humans do?” and “When a computer crosses the threshold into self-consciousness, will it immediately jump into the Internet and create a World Mind?”
  • Finn, E. (2017). What algorithms want: Imagination in the age of computing. The MIT Press.*
    • This book explores how the algorithm has roots in mathematical logic, cybernetics, philosophy, and magical thinking. Finn argues that algorithms take concepts sourced from idealized computation and apply them to a non-ideal reality, yielding unpredictable responses. To address the gap between abstraction and reality, Finn advocates for the creation of a model of “algorithmic reading” and scholarship that considers process.
  • Gunkel, D. (2012). The machine question: Critical perspectives on AI, robots, and ethics. The MIT Press.*
    • Gunkel examines the “machine question” in moral philosophy, which aims to determine whether and to what degree human-made intelligent and autonomous machines can have moral responsibilities and moral consideration. The machine question challenges traditional philosophical notions, which posit technology as a tool for human use rather than a moral agent.
  • Gunkel, D. (2014). A vindication of the rights of machines. Philosophy & Technology, 27(1), 113–132. https://doi.org/10.1007/s13347-013-0121-z
    • This article argues that artificial intelligences cannot be excluded from moral consideration, which calls not only for an extension of rights to machines, but an examination into the configuration of moral standing.
  • Haraway, D. J. (1991). A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In D.J. Haraway (Ed.), Simians, Cyborgs and Women: The Reinvention of Nature. Routledge.*
    • Haraway’s essay gives a post-structuralist account of the term “cyborg” as a concept that resists strict categorization, not simply a distinction of “human” from “machine” or “human” from “animal,” but a combination of these concepts.
  • Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29. https://doi.org/10.1080/1369118X.2016.1154087*
    • This paper synthesizes current literature on algorithms and develops new arguments about their study. This includes the need to focus critical attention on algorithms in light of their increased role in society, how to best understand algorithms conceptually, challenges for researching algorithms, and the differing ways algorithms can be empirically studied.
  • Kraemer, F., van Overveld, K., & Peterson, M. (2010). Is there an ethics of algorithms? Ethics and Information Technology, 13(3), 251-260. https://doi.org/10.1007/s10676-010-9233-7*
    • The authors argue that algorithms can be value-laden, meaning that designers may have justified reasons for designing algorithms differently. To illustrate this claim, the authors use the example of algorithms used in medical analysis, which can be designed differently depending on the priorities of the software designers, such as avoiding false negatives. They go on to contribute guidelines for ethical issues in algorithm design (a toy illustration of how a threshold choice encodes such priorities appears after this chapter’s list).
  • Lumbreras, S. (2017). The limits of machine ethics. Religions, 8(5). https://doi.org/10.3390/rel8050100
    • Lumbreras provides a framework to classify the methodology employed in the field of machine ethics. The limits of machine ethics are discussed in light of design techniques that only express values imported by the programmer.
  • Malle, B. F. (2016). Integrating robot ethics and machine morality: The study and design of moral competence in robots. Ethics and Information Technology, 18(4), 243–256. https://doi.org/10.1007/s10676-015-9367-8
    • Malle discusses the overlap between robot ethics (how humans should design and treat robots) and machine morality (how robots can have morality), arguing ultimately that robots can be designed with human moral characteristics. Malle suggests that morally competent robots can effectively contribute to society in the same way humans can.
  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679.
    • An increasing number of decisions are now made by algorithms, meaning that gaps between the design and the actual functioning of algorithms can have serious consequences for individuals and whole societies. This article provides an outline of the debate on the ethics of algorithms and evaluates the current literature to identify topics that need further consideration.
  • Moor, J. H. (Ed.). (2003). The Turing test: The elusive standard of artificial intelligence. Springer.*
    • This book discusses the influence of Alan Turing, including “Computing Machinery and Intelligence,” his pre-eminent article on the philosophy of artificial intelligence, which included a presentation of his famous imitation game. Turing predicted that by the year 2000, the average interrogator would not have a greater than 70% chance of making the correct identification in the imitation game. Using the results of the Loebner 2000 contest, as well as breakthroughs in the field of AI, Moor argues that although there has been much progress, Turing’s prediction has not been borne out.
  • Trausan-Matu, S. (2017). Is it possible to grow an I–Thou relation with an artificial agent? A dialogistic perspective. AI & Society, 34(1), 9-17. https://doi.org/10.1007/s00146-017-0696-5
    • This paper analyzes the question of whether it is possible to develop an I–Thou relationship with an artificial conversational agent, discussing possibilities and limitations. Novel perspectives from various disciplines are discussed.
  • Van de Voort, M., Pieters, W., & Consoli, L. (2015). Refining the ethics of computer-made decisions: A classification of moral mediation by ubiquitous machines. Ethics and Information Technology, 17(1), 41–56. https://doi.org/10.1007/s10676-015-9360-2*
    • This article investigates computer-made ethical decisions and argues that machines mediate morality not only when they mediate the actions of humans, but also when they make decisions within their relationships to human actors. The authors accordingly define four types of moral relations.
  • van Wynsberghe, A., & Robbins, S. (2019). Critiquing the reasons for making artificial moral agents. Science and Engineering Ethics, 25(3), 719–735. https://doi.org/10.1007/s11948-018-0030-8
    • This article offers a deeper look into the reasons given for developing artificial moral agents (AMAs), arguing that machine ethicists must provide good reasons to build such entities. Until such work is complete, the development of AMAs should not continue.
  • Wallach, W. & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford University Press.*
    • Wallach and Allen argue that machines do not use explicit moral reasoning in their decision making, and thus there is a need to create embedded morality as these machines continue to make important decisions. This new field of machine morality or machine ethics will be crucial for designers.
  • Winograd, T. (1990). Thinking machines: Can there be? Are we? In D. Partridge & Y. Wilks (Eds.), The foundations of artificial intelligence: A sourcebook (pp. 167-189). Cambridge University Press.*
    • Winograd explores a view attributed to futurologists, who believe that a new species of thinking machines, machina sapiens, will emerge and become dominant by applying their extreme intelligence to human problems. A critique of this view is that computers cannot accurately replicate human intelligence, because their cold logical programming deprives them of vital features such as creativity, judgement, and genuine intentionality. Winograd argues that although it is true that artificial intelligence has yet to achieve such things as creativity and judgement, it has far more basic shortcomings in this vein, as current machines are unable to display common sense or basic conversational language skills.
  • Ziewitz, M. (2016). Governing algorithms: Myth, mess, and methods. Science, Technology, & Human Values, 41(1), 3–16. https://doi.org/10.1177%2F0162243915608948
    • This article aims to provide critical background into the issue of algorithms being viewed as both extremely powerful and difficult to understand. It considers algorithms not only as computational, but also sensitive, and challenges assumptions about agency, transparency, and normativity.
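As a concrete illustration of the claim by Kraemer, van Overveld, and Peterson above, the short sketch below (an invented example, not the authors’ code or data) shows how the same classifier scores yield different error trade-offs under different decision thresholds, so the designer’s threshold choice encodes a value judgment about false negatives versus false positives.

```python
# Toy illustration of a value-laden design choice: picking a decision threshold.
import numpy as np

scores = np.array([0.05, 0.2, 0.35, 0.45, 0.55, 0.7, 0.8, 0.95])  # model outputs
disease = np.array([0,    0,   0,    1,    0,    1,   1,   1])     # ground truth

for threshold in (0.3, 0.6):
    flagged = scores >= threshold
    false_neg = int(np.sum(~flagged & (disease == 1)))  # sick patients missed
    false_pos = int(np.sum(flagged & (disease == 0)))   # healthy patients flagged
    print(f"threshold={threshold}: {false_neg} false negatives, {false_pos} false positives")

# threshold=0.3 misses no disease but flags healthy patients; threshold=0.6
# does the reverse. Neither setting is value-neutral.
```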

Chapter 21. Sexuality (John Danaher)⬆︎

  • Danaher, J., & McArthur, N. (Eds.). (2017). Robot sex: Social and ethical implications. MIT Press.*
    • This edited volume gathers perspectives from ethics and sociology on the emerging issue of sex with robots. Contributions to the volume define what robot sex is, explore ways in which it can be defended or challenged on ethical grounds, take the perspective of the robot in considering the matter, and reflect on the possibility of robot love. Finally, some contributors articulate visions for the future of robot sex, underlining the importance of evaluating love and intimacy in robot encounters (as opposed to just sex) and emphasizing the impact robot sex will have on society.
  • Danaher, J., Nyholm, S., & Earp, B. (2018). The quantified relationship. The American Journal of Bioethics, 18(2), 3–19.*
    • This article provides a detailed ethical analysis of the Quantified Relationship (QR). The Quantified Self movement pursues self-improvement through the tracking and gamification of personal data; the QR applies this logic to interpersonal, romantic relationships. This article identifies eight core objections to the QR and counters them by arguing that there are ways in which tracking technologies can be used to support and facilitate good relationships.
  • de Fren, A. (2009). Technofetishism and the uncanny desires of A.S.F.R. (Alt Sex Fetish Robots). Science Fiction Studies, 36(3), 404–440.
    • This article presents a feminist, art-historical analysis of virtual communities that fetishize artificial women. Central to this fetish is the pleasure of ‘hacking’ the system, or denaturalizing common understandings of subjecthood and desire. By drawing analogies between the uncanny artificial bodies at the heart of “alt sex fetish robot” fantasies and various historical and artistic antecedents, this essay contributes to the critical understanding of mechanical bodies as objects of desire.
  • Devlin, K. (2018). Turned on: Science, sex and robots. Bloomsbury Publishing.*
    • This popular non-fiction book traces the emerging technology of sex robots from robots in Greek myth and the fantastical automata of the Middle Ages through to the sentient machines of the future that inhabit the prominent AI debate. Devlin compares the ‘modern’ robot to the robot servants in twentieth-century science fiction and offers a historical perspective on the psychological effects of the technology as well as the issues it raises around gender politics, diversity, surveillance and violence.
  • Draude, C. (2011). Intermediaries: Reflections on virtual humans, gender, and the uncanny valley. AI & Society, 26, 319–327.
    • This article provides an analysis of the uncanny valley effect from a cultural and gender studies perspective. The uncanny valley effect describes the eeriness and lack of believability of anthropomorphic artefacts that resemble the ‘real’ thing too strongly. This article offers a gender-critical reading of computer theory by analyzing a classic story of user and artifact (E.T.A. Hoffman’s narration of Olimpia), ultimately arguing for more diverse artefact production.
  • Evans, D. (2010). Wanting the impossible: The dilemma at the heart of intimate human-robot relationships. In Y. Wilks (Ed.), Close engagements with artificial companions: Key social, psychological, ethical and design issues (pp. 75–88). John Benjamins Publishing.
    • This chapter makes a philosophical case against the claim that romantic relationships with robots will be more satisfying because robots can be made to conform to the human’s wishes. Evans’ dismissal of this thesis does not rest on any technical limitation in robot building but is instead rooted in a thought experiment comparing two different kinds of partner robots: one capable of rejecting its owner and one which is not.
  • Frank, L., & Nyholm, S. (2017). Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable? Artificial Intelligence and Law, 25, 305–323.*
    • This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. Frank and Nyholm present and analyze reasons to answer “yes” or “no” to these questions by examining the concept of consent in general, as well as critiques of its adequacy in the domain of sexual ethics, the relationship between consent and free will, and the relationship between consent and consciousness.
  • Gutiu, S. (2016). The robotization of consent. In R. Calo, M. Froomkin, & I. Kerr (Eds.), Robot law (pp. 186–212). Edward Elgar Publishing.
    • This chapter explains how sex robots can impact existing gender inequalities and the understanding of consent in sexual interactions between humans. Sex robots are defined by the irrelevancy of consent, replicating existing gender imbalances by emulating and eroticizing female sexual slavery. The chapter discusses the documented harms of extreme pornography and the expected harms of sexbots, connecting these to concepts of harm under the Canadian and U.S. legal systems.
  • Halberstam, J. (2008). Animating revolt/revolting animation: Penguin love, doll sex and the spectacle of the queer nonhuman. In M. Hird & N. Giffney (Eds.), Queering the non/human. Taylor & Francis.
    • This chapter applies a queer theory approach to sex robots, suggesting that new forms of animation – from transgenic mice to female cyborgs and Tamagotchi toys – productively shift the terms and the meaning of the artificial boundaries between humans, animals, machines, states of life and death, animation and reanimation, living, evolving, becoming and transforming. Halberstam brings to the surface the interdependence of reproductive and non-reproductive communities.
  • Hauskeller, M. (2014). Sex and the posthuman condition. Palgrave McMillan.
    • This book looks at how sexuality is framed in enhancement scenarios and how descriptions of the resulting posthuman future are informed by mythological, historical and literary paradigms. It examines the glorious sex life humans will allegedly enjoy due to greater control of our emotions, improved capacity for pleasure, and availability of sex robots.
  • Kubes, T. (2019). New materialist perspectives on sex robots: A feminist dystopia/utopia? Social Sciences, 8(8), 224.
    • This article re-evaluates feminist critiques of sex robots from a new materialist perspective, suggesting that sex robots may not be an exponentiation of hegemonic masculinity to the extent that the technology can be queered. When the beaten tracks of pornographic mimicry are left behind, sex robots may in fact enable new liberated forms of sexual pleasure beyond fixed normalizations, thus contributing to a sex-positive utopian future.
  • Lee, J. (2017). Sex robots: The future of desire. Palgrave Macmillan.
    • This book thinks through the sex robot beyond the human/non-human binary, arguing that non-human sexuality has been at the heart of culture throughout history. Taking a philosophical approach to what the sex robot represents and signifies, this book discusses the roots, possibilities, and implications of the not-so-new desire for sex robots.
  • Levy, D. (2009). Love and sex with robots: The evolution of human-robot relationships. Gerald Duckworth & Company.*
    • This popular non-fiction book consists of two parts, one concerning love with robots and the other concerning sex with robots. Using a range of examples, Levy argues that the ability to feel affection for animate creations is long underway, making physical intimacy a logical next step. Moving from love to sex rather than the other way, this book makes the case that even entities that were once deemed cold and mechanical can soon become the objects of real, human desire.
  • Levy, K. (2014). Intimate surveillance. Idaho Law Review, 51(3), 679–693.*
    • This article considers how new technical capabilities, social norms, and cultural frameworks are beginning to change the nature of intimate monitoring practices. Focused on practices occurring on an interpersonal level, i.e., within an intimate relationship between two partners, the article examines the relations between data collection, values, and privacy, from dating and sex to fertility, fidelity, and finally, abuse. Levy closes with reflections on the role of law and policy in the emerging domain of intimate (self)surveillance.
  • Lieberman, H. (2017). Buzz: The stimulating history of the sex toy. Pegasus Books.*
    • This popular non-fiction book focuses on the history of sex toys from the 1950s to the present, tracing how once taboo devices reached the cultural mainstream. This historical account moves from sex toys as symbols of female emancipation and tools in the fight against HIV/AIDS to consumerist marital aids and, finally, to mainstays in popular culture.
  • Lupton, D. (2014). Quantified sex: A critical analysis of sexual and reproductive self-tracking using apps. Culture, Health & Sexuality, 17(4), 440–453.*
    • This article presents a critical analysis of computer apps used to self-track features of users’ sexual and reproductive activities and functions. The analysis reveals that such apps represent sexuality and reproduction in certain defined and limited ways that work to perpetuate normative stereotypes and assumptions about women and men as sexual and reproductive subjects, and exposes issues concerning privacy, data security and the use of the data collected by these apps. Lupton suggests ways to ‘queer’ self-tracking technologies in response to these issues.
  • McArthur, N., & Twist, M. (2017). The rise of digisexuality: Therapeutic challenges and possibilities. Sexual and Relationship Therapy, 32(3–4), 334–344.*
    • This article argues that clinicians in the psychological setting should be prepared to work with ‘digisexuals’: people whose primary sexual identity comes through the use of radical new sexual technologies. Guidelines for helping individuals and relational systems make informed choices regarding participation in technology-based activities of any kind, let alone ones of a sexual nature, are few and far between. This article articulates a framework for understanding the nature of digisexuality and how to approach it.
  • Mindell, D. (2015). Our robots, ourselves: Robotics and the myths of autonomy. Viking.
    • Departing from the future tense that is common in conversations about robots, this book investigates the most advanced robotics that currently exist. Deployed in high atmosphere, deep ocean, and outer space, these robotic applications show that the stark lines between human and not human, or manual and automated, are not helpful. This book clarifies misconceptions about the autonomous robot to talk about the human presence at the center of the technological landscape.
  • Richardson, K. (2020). Sex robots: The end of love. Polity Press.
    • This book is an anthropological critique of sex robots, here taken up as points of insight into how women and girls are imagined and how porn, prostitution, and the sexual exploitation of children drive the desire for them. Richardson argues that sex robots are produced within a framework of ‘property relations,’ in which egocentric Man (and his disconnection from Woman) shapes the building of robots and AI. This makes sex robots a major threat to the possibility of love and connection.
  • van Oost, E. (2003). Materialized gender: How shavers configure the users’ femininity and masculinity. In N. Oudshoorn & T. Pinch (Eds.), How users matter: The co-construction of users and technologies. MIT Press.
    • This chapter is part of an edited volume that examines how users shape technology from design to implementation. Van Oost uses the case study of shaving devices marketed to men or women to show that design trajectories use “gender scripts”: particular representations of the male and female consumer that become inscribed in the design of the artefacts. Her analysis suggests that technical competence is inscribed in artefacts marketed to men, while products targeting women inscribe disinterest in technology on their user.
  • Verbeek, P.-P. (2005). Artifacts and attachment: A post-script philosophy of mediation. In H. Harbers (Ed.), Inside the politics of technology: Agency and normativity in the co-production of technology and society (pp. 125–146). Amsterdam University Press.
    • This chapter uses Bruno Latour’s theory of technological mediation to explain how technologies foster attachment on the part of their users. For attachment to occur, artefacts should be present in an engaging way, stimulating users to participate in their functioning. Attachment always involves the materiality of the artefact more than its functioning, meaning that users also develop a bond with the machinery and material operation of artefacts.
  • Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
    • This book defines ‘surveillance capitalism’ as a novel market form and a specific logic of capitalist accumulation. If industrial capitalism exploits nature, surveillance capitalism exploits human nature through the installation of a global architecture of computer mediation that Zuboff calls the “Big Other.” Through this architecture’s hidden mechanisms of extraction, commodification, and control, surveillance capitalism erodes the human potential for self-determination, threatening core values such as freedom, democracy, and privacy.

IV. Perspectives & Approaches

Chapter 22. Perspectives on Ethics of AI: Computer Science (Benjamin Kuipers)⬆︎

  • Lin, P., Abney, K., & Bekey, G. (Eds.). (2012). Robot ethics: The ethical and social implications of robotics. MIT Press.*
    • Starting with an overview of the issues and relevant ethical theories, the topics flow naturally from the possibility of programming robot ethics to the ethical use of military robots in war to legal and policy questions, including liability and privacy concerns. The book ends by examining the question of whether or not robots should be given moral consideration.
  • Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2018). The moral machine experiment. Nature, 563(7729), 59–64.
    • This article aims to address concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide those machines. The authors use the Moral Machine, an online experimental platform, to gather data that they then analyze to recommend how machine decision making should be determined.
  • Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573-1576.
    • In their research, Bonnefon et al. found that even though participants approve of autonomous vehicles (AVs) that might sacrifice passengers to save others, respondents would prefer not to ride in such vehicles themselves. This creates a dilemma between creating utilitarian algorithms and defining alternative, passenger-protective algorithms to guide the decision making of AVs.
  • Flanagan, O. (2016). The geography of morals: Varieties of moral possibility. Oxford University Press.
    • This book stages a comprehensive dialogue among cultural and psychological anthropology, empirical moral psychology, and behavioral economics, with the aim of presenting and exploring cross-cultural and world philosophy.
  • Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.
    • This paper first presents a concept of agency for artificial agents, and then explores the subsequent concerns raised surrounding the morality and responsibility of said agents. The authors argue that there is substantial and important scope for the concept of an artificial moral agent that does not necessarily exhibit free will, mental states, or responsibility.
  • Gibbs, J. C. (2019). Moral development and reality: Beyond the theories of Kohlberg, Hoffman, and Haidt. Oxford University Press.
    • In this text, Gibbs presents and argues for a new view of lifespan socio-moral development based on his exploration of moral identity and other variables that account for prosocial behavior.
  • Greene, J. D. (2013). Moral tribes: Emotion, reason, and the gap between us and them. Penguin.*
    • This book explores how our evolved tendency to bond with a select group of others (Us) and to fight off everyone else (Them) can coexist with modern conditions of shared space, in which the moral lines that divide us become more salient and more puzzling.
  • Gulati, S., Sousa, S., & Lamas, D. (2019). Design, development and evaluation of a human-computer trust scale. Behaviour & Information Technology, 38(10), 1004-1015.
    • This paper argues that as more tasks are delegated to intelligent systems, and as user interactions with these systems become increasingly complex, there must be a metric by which to quantify the amount of trust that a user is willing to place in such systems. The authors then present their own multi-dimensional scale for assessing user trust in human-computer interaction (HCI).
  • Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. Vintage.*
    • In this text, the author draws on research on moral psychology to argue that moral judgments arise not from reason but from gut feelings. Thus, given that different groups have different intuitions about right and wrong, this creates polarization within a population.
  • Jackson, P. C. (2019). Toward human-level artificial intelligence: Representation and computation of meaning in natural language. Courier Dover Publications.
    • This book explores the potentiality of creating a human-level artificial intelligence. The author proposes an approach called TalaMind that involves developing an AI system that uses a ‘natural language of thought’ based on the unconstrained syntax of a language such as English. Finally, the book evaluates the beneficial potential of human-like AI, its potential contributions to society, and why it may be necessary.
  • Kuipers, B. (2018). How can we trust a robot? Communications of the ACM, 61(3), 86-95.*
    • This paper explores the ways robots have integrated and will integrate into society and evaluates the necessity of trust in those human-robot interactions. It argues that this trust can be fostered if robots are designed to follow the social norms of human society.
  • Pinker, S. (2018). Enlightenment now: The case for reason, science, humanism, and progress. Penguin.*
    • Citing data that tracks social progress, Pinker argues that reason and science can enhance human flourishing and reliance on these logical and scientific principles is required in order to continue the trajectory of increasing health, prosperity, safety, peace, knowledge, and happiness.
  • Singer, P. (2011). The expanding circle: Ethics, evolution, and moral progress. Princeton University Press.*
    • Drawing from the fields of philosophy and evolutionary psychology, Singer argues in this book that although altruism began as a genetically based drive to protect one’s kin and community members, it is not solely dictated by biology. Rather, altruism and by extension human ethics has developed as a result of our capacity for reasoning that leads to conscious ethical choices with an expanding circle of moral concern.
  • Tomasello, M. (2016). A natural history of human morality. Harvard University Press.*
    • This text presents an account of the evolution of human moral psychology based on analysis and comparison of experimental data comparing great apes and human children. Tomasello presents an argument for our development based on two key evolutionary steps: the move towards collaboration, and the emergence of distinct cultural groups. 
  • Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press.
    • This book aims to apply classical philosophical traditions of virtue ethics to challenges of a global technological society. The author argues that a moral framework based in virtue ethics represents the ideal guiding principles for contemporary society.
  • van der Woerdt, S., & Haselager, P. (2016). Lack of effort or lack of ability? Robot failures and human perception of agency and responsibility. In Benelux Conference on Artificial Intelligence (pp. 155–168).
    • This study explores how considering an agent’s actions as related to either effort or ability can have important consequences for attributions of responsibility. The study concludes that a robot displaying lack of effort significantly increases human attributions of agency and, to some extent, moral responsibility to the robot.
  • Vanderelst, D., & Winfield, A. (2018). The dark side of ethical robots. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 317-322).
    • This paper argues that the recent focus on building ethical robots inevitably also enables the construction of unethical robots, as the cognitive machinery used to make a robot ethical can easily be corrupted. In the face of these risks, the authors advocate caution in embedding ethical decision making in real-world safety-critical robots.
  • Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford University Press.*
    • This book explores the problem of software governing autonomous systems being “ethically blind” in the sense that the decision-making capabilities of such systems do not involve any explicit moral reasoning. The authors explore the necessity for robots to become capable of factoring ethical and moral considerations into their decision making, as well as potential routes to achieving this.
  • Wright, R. (2000). Nonzero: The logic of human destiny. Pantheon.*
    • In this book, Wright employs game theory and the logic of “zero-sum” and “non-zero-sum” games to argue against the conventional understanding that evolution and human history were aimless, presenting his view that evolution pushed humanity towards social and cultural complexity.
  • Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.*
    • This book defines surveillance capitalism as the quest by powerful corporations to predict and control our behavior. It then argues that the total certainty for maximum profit promised by surveillance capitalism comes at the expense of democracy, freedom, and our human future.

Chapter 23. Social Failure Modes in Technology and the Ethics of AI: An Engineering Perspective (Jason Millar)⬆︎

  • Akrich, M. (1992). The de-scription of technical objects. In W.E. Bijker and J. Law (Eds.), Shaping Technology/Building Society (pp. 205-224).  MIT Press.
    • Akrich outlines how technical objects simultaneously embody and measure a set of relations between humans and non-humans and how they may generate both forms of knowledge and moral judgments.  Akrich argues that technical objects have the ability to script or prescribe behavior. 
  • Bicchieri, C. (2006). The Grammar of Society: The nature and dynamics of social norms. Cambridge University Press.
    • The Grammar of Society examines social norms, such as fairness, cooperation, and reciprocity, in an effort to understand their nature and dynamics, the expectations that they generate, and how they evolve and change.  This book provides a definition of social norms which in turn enables Millar to investigate what it means for a social norm to be designed into an artifact. 
  • Bijker, W. E., Hughes, T. P., & Pinch, T. J. (Eds.). (1987). The social construction of technological systems: New directions in the sociology and history of technology. MIT Press.
    • The Social Construction of Technological Systems introduced a new method of inquiry—social construction of technology, or SCOT—that became a key part of the wider discipline of science and technology studies. Essays in this book tell stories about such varied technologies as thirteenth-century galleys, eighteenth-century cooking stoves, and twentieth-century missile systems. This book approaches the study of technology by giving equal weight to technical, social, economic, and political questions, and demonstrates the effects of the integration of empirics and theory.
  • Calo, R., Froomkin, A. M., & Kerr, I. (Eds.). (2016). Robot law. Edward Elgar Publishing.*
    • Robot Law collects papers by a diverse group of scholars focused on the larger consequences of the increasingly discernible future of robotics. It explores the increasing sophistication of robots and their widespread deployment into hospitals, public spaces, and battlefields. The book also explores how this requires rethinking of a wide variety of philosophical and public policy issues, including how this technology interacts with existing legal regimes.
  • Chui, M., Harrysson, M., Manyika, J., Roberts, R., Chung, R., Nel, P., & van Heteren, A. (2018, November). Applying artificial intelligence for social good. McKinsey Global Institute. https://www.mckinsey.com/featured-insights/artificial-intelligence/applying-artificial-intelligence-for-social-good
    • This McKinsey Global Institute discussion paper covers the issues around AI for social good. It offers a detailed analysis of how AI is not a silver bullet but could help tackle some of the world’s most challenging social problems. Topics discussed include: mapping AI cases to domains of social good; AI capabilities that can be used for social good; overcoming bottlenecks and identifying risks to be managed; and scaling up the use of AI for social good.
  • Eadicicco, L., Peckham, M., Pullen, J. P., & Fitzpatrick, A. (2017). The 20 most successful technology failures of all time. Time Magazine. http://time.com/4704250/most-successful-technology-tech-failures-gadgets-flops-bombs-fails/
    • This article lists failures that led to success or may yet lead to something world-changing, hence the labeling of the items on the list as technology’s most successful failed products. Like an experiment gone awry, each can still teach us something about technology and how people want to use it.
  • Evans, R., & Collins, H. M. (2007). Rethinking expertise. University of Chicago Press.*
    • Rethinking Expertise offers a new perspective on the role of expertise in the practice of science and the public evaluation of technology. It asks the question: how can the public make use of science and technology before there is consensus in the scientific community? It offers a Periodic Table of Expertises, based on the idea of tacit knowledge (knowledge that we have but cannot explain), in order to determine how some expertises are used to judge others, how laypeople judge between experts, and how credentials are used to evaluate them.
  • Friedman, B., & Kahn, P. H., Jr. (2003). Human values, ethics, and design. In The human-computer interaction handbook (pp. 1177–1201). CRC Press.* https://depts.washington.edu/hints/publications/Human_Values_Ethics_Design.pdf
    • This article reviews how the field of human-computer interaction (HCI) has addressed the following topics: how values become implicated in technological design; how to distinguish usability from human values with ethical import; the major HCI approaches to key human values relevant for design; and the special ethical responsibilities of HCI professionals.
  • Hvistendahl, M. (2017). Inside China’s vast new experiment in social ranking. WIRED. https://www.wired.com/story/age-of-social-credit/
    • This article delves into how China is taking the idea of a credit score to the extreme. By using big data to track and rank what its citizens do—including purchases, pastimes, and mistakes—China is able to take its practice of social engineering to a new level in the 21st century. To illustrate the impact of China’s use of technology on individual lives, Hvistendahl provides a detailed account of her own and a friend’s experiences of living within this system over a period of several years.
  • Kleeman, S. (2016). Here are the Microsoft Twitterbot’s craziest racist rants. Gizmodo. https://gizmodo.com/here-are-the-microsoft-twitter-bot-s-craziest-racist-ra-1766820160
    • This article reviews Microsoft Twitterbot Tay’s racist rants to drive home the lesson that AI developers need to account for the social and cultural impacts of a technology before deploying it prematurely.
  • Latour, B., (1992). Where are the missing masses? The sociology of a few mundane artifacts.  In W.E. Bijker & J. Law (Eds.), Shaping Technology/Building Society (pp. 225-258).  MIT Press.
    • Bruno Latour explores how artifacts can be deliberately designed to both replace human action and constrain and shape the actions of other humans. His study demonstrates how people can “act at a distance” through the technologies they create and implement and how, from a user’s perspective, a technology can appear to determine or compel certain actions. Latour argues that we cannot understand how societies work without an understanding of how technologies shape our everyday lives.
  • Latour, B. (1999). Pandora’s hope: essays on the reality of science studies. Harvard University Press.*
    • Pandora’s Hope is a collection of essays that investigate the relationship between humans and natural and artifactual objects. The book offers an argument for understanding the reality of science in practical terms. Through case studies in the world of technology, Latour shows how the material and human worlds come together and are reciprocally transformed into items of scientific knowledge.
  • Lin, P., Jenkins, R., Abney, K., & Bekey, G. A. (Eds.). (2017). Robot ethics 2.0. Oxford University Press.*
    • Robot Ethics 2.0 studies the ethical, legal, and policy impacts of robots, which have been taking on morally important human tasks and decisions as well as creating new risks. The book focuses on autonomous cars as an important case study that cuts across diverse issues, including psychology, law, trust, and physical safety.
  • Metz, R. (2015). Google Glass is dead; long live smart glasses. Technology Review, 118(1), 79-82.
    • This article argues that although Google’s head-worn computer has failed, the technology is sure to march on because intriguing possibilities remain. It evaluates the reasons for Google Glass’s failure and investigates some potential uses for smart glasses, including serving as a memory aid and productivity enhancer.
  • Pearson, C., & Delatte, N. (2006). Collapse of the Quebec bridge, 1907. Journal of Performance of Constructed Facilities, 20(1), 84-91.
    • Collapse of the Quebec Bridge describes the grave implications of the failure of man-made artifacts as a result of physical defects not fully accounted for in their design.  This article outlines the collapse of the Quebec Bridge over the St. Lawrence River in 1907 where seventy-five workers were killed.  It discusses the investigation of the disaster and the finding that the main cause of the bridge’s failure was improper design by the consulting engineer. 
  • Pogue, D. (2013). Why Google Glass is creepy.  Scientific American. https://www.scientificamerican.com/article/why-google-glass-is-creepy/
    • This Scientific American article outlines the biggest obstacle to social acceptance of the new technology: the smugness of people who wear Google Glass and the deep discomfort of everyone who does not. It drives home the message that even though wearable computer glasses let you record everything you see, good luck finding someone to talk to.
  • van den Hoven, J., Doorn, N., Swierstra, T., Koops, B.-J., & Romijn, H. (Eds.). (2014). Responsible innovation 1: Innovative solutions for global issues. Springer.*
    • Responsible Innovation 1 addresses the methodological issues involved in responsible innovation and provides an overview of recent applications of multidisciplinary research involving close collaboration between researchers in diverse fields such as ethics, social sciences, law, economics, applied science and engineering. This book delves into the ethical and societal aspects of new technologies and changes in technological systems.
  • Verbeek, P. P. (2006). Materializing morality: Design ethics and technological mediation. Science, Technology, & Human Values, 31(3), 361-380.
    • This article deploys the “script” concept, indicating how technologies prescribe human actions, in a normative setting.  This article explores the implications of the insight that engineers materialize morality by designing technologies that co-shape human actions.  The article augments the script concept by developing the notion of technological mediation and its impact on the design process and design ethics. 
  • Vincent, J. (2016). Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day. The Verge. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
    • This Verge article outlines how it took less than 24 hours for Twitter to corrupt an innocent AI chatbot named Tay. Tay, essentially a robot parrot with an internet connection, started repeating people’s misogynistic, racist, and Donald Trump-like remarks back to users. This article raises serious questions about AI embodying the prejudices of society.
  • Winner, L. (2010). The whale and the reactor: A search for limits in an age of high technology. University of Chicago Press.
    • The Whale and the Reactor poses questions about the relationship between technical change and political power and explores the political, social and philosophical implications of technology.   This book demonstrates that technical decisions are political decisions, and they involve profound choices about power, liberty, order, and justice. 
  • Zeeberg, A. (2020, January). What we can learn about robots from Japan. BBC.  https://www.bbc.com/future/article/20191220-what-we-can-learn-about-robots-from-japan
    • This article discusses the contrast between the philosophical traditions of the West and the Japanese Shinto-based philosophical view that makes no categorical distinction between humans, animals, and objects such as robots. This contrast demonstrates that while the West tends to see robots and artificial intelligence as a threat, Japan’s view has led to a complex relationship with machines, including a positive view of technology rooted in Japan’s socioeconomic, historical, religious, and philosophical perspectives.

Chapter 24. A Human-Centred Approach to AI Ethics: A Perspective from Cognitive Science (Ron Chrisley)⬆︎

  • Alaieri, F., & Vellino, A. (2016). Ethical decision making in robots: Autonomy, trust and responsibility. In International conference on social robotics (pp. 159-168). Springer. https://doi.org/10.1007/978-3-319-47437-3_16
    • The authors argue that in order to get people to trust autonomous robots, the ethical principles employed by these autonomous robots must be made transparent.
  • Aroyo, A. M., Rea, F., Sandini, G., & Sciutti, A. (2018). Trust and social engineering in human robot interaction: Will a robot make you disclose sensitive information, conform to its recommendations or gamble? IEEE Robotics and Automation Letters, 3(4), 3701-3708. https://doi.org/10.1109/LRA.2018.2856272
    • This research study examines how robots could be used for social engineering. The researchers found that people do build trust with robots, which can lead to the voluntary disclosure of private information.
  • Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21-34. https://doi.org/10.1016/j.cognition.2018.08.003
    • This article reports nine studies suggesting that humans do not want autonomous machines to make moral decisions. Bigman and Gray argue that this aversion to machine moral decision making will prove challenging to eliminate as designers seek to employ machines in medicine, law, transportation, and defence.
  • Broadbent, E. (2017). Interactions with robots: The truths we reveal about ourselves. Annual Review of Psychology, 68, 627-652. https://doi.org/10.1146/annurev-psych-010416-043958
    • This article examines human-robot relations from the perspective of cognitive science. Broadbent argues that there is a need to study human feelings towards robots and that this study will reveal insights into human psychology, such as the human tendency to have an uncanny feeling towards robotic machines.
  • de Graaf, M. M. A. (2016). An ethical evaluation of human–robot relationships. International Journal of Social Robotics, 8(4), 589-598. https://doi.org/10.1007/s12369-016-0368-5
    • De Graaf discusses the ethical considerations of human-robot relationships, asking if and how these relationships could contribute to the good life, and argues that research on human social interaction with robots is needed to flesh out ethical, societal, and legal perspectives and to design and introduce robots responsibly.
  • Fossa, F. (2018). Artificial moral agents: Moral mentors or sensible tools? Ethics and Information Technology, 20(2), 115-126. https://doi.org/10.1007/s10676-018-9451-y
    • This paper analyzes how the concept of an artificial moral agent (AMA) impacts humans’ understanding of themselves as moral agents. Fossa presents the Continuity Approach and the contrary Discontinuity Approach. The Continuity Approach holds that AMAs and humans should be considered homogeneous moral entities. The Discontinuity Approach holds that there is an important essential difference between humans and AMAs. Fossa argues that the Discontinuity Approach better captures the nature of AMAs, how we should deal with the moral tensions they cause, and the difference between machine ethics and moral philosophy.
  • Gaudiello, I., Zibetti, E., Lefort, S., Chetouani, M., & Ivaldi, S. (2016). Trust as indicator of robot functional and social acceptance. An experimental study on user conformation to iCub answers. Computers in Human Behavior, 61, 633-655. https://doi.org/10.1016/j.chb.2016.03.057
    • The authors present an experiment between 56 participants and a robot called iCub, which investigated whether trust in a robot’s functional abilities was a prerequisite for social acceptance, and to what extent social features, like participants’ desire for control, affected trust in iCub. The study found that participants were more likely to agree with iCub’s decisions in functional tasks than in social ones. The authors conclude that functional ability is not a prerequisite for trust in social ability.
  • Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In 10th ACM/IEEE International Conference on Human-Robot Interaction (pp. 117-124).*
    • Reporting experiments with moral dilemmas, the authors find that people apply different moral norms to human and robot agents: sacrificing one person for the good of many was judged more permissible for a robot than for a human, and robots were blamed more than humans when they failed to intervene.
  • Malle, B. F. (2016). Integrating robot ethics and machine morality: The study and design of moral competence in robots. Ethics and Information Technology, 18(4), 243-256. https://doi.org/10.1007/s10676-015-9367-8
    • This article examines the connection between robot ethics and machine morality, arguing that robots can be designed with moral characteristics similar to those of humans. Consequently, these robots can contribute to society as ethically competent humans do.
  • Malle, B. F., & Scheutz, M. (2019). Learning how to behave. In O. Bendel (Ed.), Handbuch Maschinenethik (pp. 255-278). Springer. https://doi.org/10.1007/978-3-658-17483-5_17
    • Malle and Scheutz present a framework for developing robotic moral competence, composed of five features: two constituents (moral norms and moral vocabulary), and three activities (moral judgement, moral action and moral communication).
  • Moor, J. (2009). Four kinds of ethical robots. Philosophy Now, 72, 12-14.*
    • Moor argues that there are at least four distinct types of ethical robots. First, ethical impact agents perform actions that have ethical consequences regardless of the machine’s intention. Second, implicit ethical agents are designed to have built-in ethical actions. Third, explicit ethical agents can make ethical determinations themselves. Fourth, full ethical agents can make ethical determinations but also have features associated with human ethical agents, including consciousness, intentionality, and free will.
  • Sarathy, V., Scheutz, M., & Malle, B. F. (2017). Learning behavioral norms in uncertain and changing contexts. In 8th IEEE International Conference on Cognitive Infocommunications (pp. 301-306).
    • This article addresses the problem of teaching norms to algorithms, given that humans are often uncertain and vague about moral norms. Using deontic logic, Dempster-Shafer theory, and a machine learning algorithm that teaches an AI norms from uncertain human data, the authors demonstrate a novel capacity for AIs to learn about morality, using context clues to provide nuance.
  • Scheutz, M., & Malle, B. F. (2014). “Think and do the right thing”—A Plea for morally competent autonomous robots. In 2014 IEEE international symposium on ethics in science, technology and engineering (pp. 1-4).*
    • Scheutz and Malle argue that it is vital to incorporate explicit ethical mechanisms that enable moral virtue in autonomous robots, in light of their frequent use in ethically charged scenarios.
  • Scheutz, M., Malle, B., & Briggs, G. (2015). Towards morally sensitive action selection for autonomous social robots. In 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (pp. 492-497). https://doi.org/10.1109/ROMAN.2015.7333661
    • The authors argue that autonomous social robots must be taught to anticipate norm violations and seek to prevent them. If such situations cannot be prevented in a given context, robots must be able to justify their action. The authors present an action execution system as a potential solution to this problem.
  • Scheutz, M. (2017). The case for explicit ethical agents. AI Magazine, 38(4), 57-64. https://doi.org/10.1609/aimag.v38i4.2746
    • Scheutz presents his case for the development of what Moor calls explicit ethical agents. He argues that although machine ethics is a growing field, more work needs to be done to create cognitive architectures that can judge situations based on morality, for both humans and robots.
  • Stange, S., & Kopp, S. (2020). Effects of a social robot’s self-explanations on how humans understand and evaluate its behavior. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 619-627). https://doi.org/10.1145/3319502.3374802
    • This paper investigates whether or not a robot’s ability to self-explain its own behaviour affects user perception of that behaviour. Stange and Kopp found that all types of explanation strategies increased understanding and acceptance of robot behaviour.
  • Tavani, H. T. (2018). Can social robots qualify for moral consideration? Reframing the question about robot rights. Information, 9(4), 73. https://doi.org/10.3390/info9040073
    • Tavani suggests that current debates on whether robots can have rights are limited because they do not explicitly define which robots would qualify and which specific rights are at stake. She argues that the question of whether robots should have rights should be reframed as asking whether some social robots qualify for moral consideration as moral patients, and argues that they do.
  • Torrance, S., & Chrisley, R. (2015). Modelling consciousness-dependent expertise in machine medical moral agents. In P. van Rysewyk & M. Pontier (Eds.), Machine medical ethics (pp. 291-316). Springer International Publishing.*
    • This article examines the limitations of current AI designs, stating that current models for medical AI systems fail to account for machine consciousness, thereby limiting their ethical functionality. The authors argue that machine consciousness plays a vital role in moral decision making, and thus it would be prudent for AI designers to think about consciousness when creating these machines.
  • van Wynsberghe, A., & Robbins, S. (2019). Critiquing the reasons for making artificial moral agents. Science and Engineering Ethics, 25(3), 719-735. https://doi.org/10.1007/s11948-018-0030-8
    • This article examines issues relating to the development of artificial moral agents (AMAs) and argues that ethicists have yet to provide good arguments for developing such machines. The authors argue that the development of AMAs should not continue until such arguments are given.
  • Ziemke, T. (2008). On the role of emotion in biological and robotic autonomy. BioSystems, 91(2), 401-408. https://doi.org/10.1016/j.biosystems.2007.05.015
    • This article discusses the difference between autonomy of biological beings and autonomy of robots from the perspective of cognitive science.

Chapter 25. Integrating Ethical Values and Economic Value to Steer Progress in Artificial Intelligence (Anton Korinek)⬆︎

  • Acemoglu, D., & Restrepo, P. (2019). The wrong kind of AI? Artificial intelligence and the future of labor demand. NBER Working Paper w25682.*
    • This paper argues that recent technological change has been biased towards automation, with insufficient focus on creating new tasks where labor can be productively employed. The paper suggests that consequences of this choice have been stagnating labor demand, declining labor share in national income, rising inequality, and lower productivity growth. The paper argues that the current tendency to develop AI in the direction of further automation could lead to missing out on the promise of the “right” kind of AI with better economic and social outcomes.
  • Adachi, H., Inagaki, K., Nakamura, T., & Osumi, Y. (2019). Technological progress, income distribution, and unemployment: Theory and empirics. Springer.
    • This volume develops original methods for analyzing biased technological progress in the theory and empirics of economic growth and income distribution. It analyzes the effects of factor-biased technological progress on growth and income distribution and shows that long-run trends in the capital-income ratio and the capital share of income emerge that are consistent with Piketty’s 2014 empirical results. Applying a new econometric method to Japanese industrial data, the authors test the key assumptions employed and the important results derived in the theoretical part of the book.
  • Agrawal, A., Gans, J., & Goldfarb, A. (2019). Economic policy for artificial intelligence. Innovation Policy and the Economy, 19(1), 139-159.
    • This article argues that policy will influence the impact of artificial intelligence on society in two key dimensions: diffusion and consequences. First, in addition to subsidies and intellectual property (IP) policy that will influence the diffusion of AI in ways similar to their effect on other technologies, the article presents three policy categories—privacy, trade, and liability—as uniquely salient in their influence on the diffusion patterns of AI. Second, the article suggests labor and antitrust policies will influence the consequences of AI in terms of employment, inequality, and competition.
  • Antal, M. (2018). Post-growth strategies can be more feasible than techno-fixes: Focus on working time. The Anthropocene Review, 5(3), 230-236.
    • This article argues that because negative-emission technologies and solar geoengineering are risky, social and economic innovations are needed as well. The article makes the case for working time reduction as a neglected strategy that needs urgent attention in climate-economy models and policy.
  • Arduengo, M., & Sentis, L. (2018). Robot economy: Ready or not, here it comes. arXiv preprint arXiv:1812.01755.
    • Starting from the premise that society is at a technological inflection point in which robots are developing the capacity to greatly increase their cognitive and physical capabilities, the authors consider the question of robots directly participating in some economic activities as autonomous agents. The paper outlines a technological framework describing a robot economy and considers the challenges it might represent in the current socio-economic scenario.
  • Autor, D. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives 29(3), 3–30.*
    • This article argues that the polarization of the labor market is unlikely to continue very far into future, reflecting on how recent and future advances in artificial intelligence and robotics should shape our thinking about the likely trajectory of occupational change and employment growth. It argues that the interplay between machine and human comparative advantage allows computers to substitute for workers in performing routine, codifiable tasks while amplifying the comparative advantage of workers in supplying problem-solving skills, adaptability, and creativity.
  • Bhattacharjee, A., & Dymski, G. (2019). Do the robots come to liberate us or to deepen our inequality? The uncertain macrostructural foundations of the robotic age. In L. P. Rochon, & V. Monvoisin (Eds.), Finance, Growth and Inequality. Edward Elgar Publishing.
    • This article explores the various strands of the theory of the monetary circuit to see money’s role in a monetary production economy. Then, the article looks at the concerns raised by endogenous money creation on the determination of the overnight interest rate, household debt and securitization, and the connection between government deficit spending and financial and economic macro-stability. 
  • Bolton, C., Machová, V., Kovacova, M., & Valaskova, K. (2018). The power of human-machine collaboration: Artificial intelligence, business automation, and the smart economy. Economics, Management, and Financial Markets, 13(4), 51-56.
    • This article reviews and advances existing literature concerning the power of human-machine collaboration. Using and replicating data from Accenture, BBC, CellStrat, eMarketer, Frontier Economics, MIT Research Report, Morar Consulting, PwC, and Squiz, the authors perform analyses and make estimates regarding the impact of artificial intelligence (AI) on industry growth, including: real annual GVA growth by 2035 (%); how AI could change the job market (estimated net job creation by industry sector, 2017–2037); reasons given by global companies for AI adoption; and leading advantages of AI for international organizations.
  • Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.*  
    • This book argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. The book argues that sufficiently intelligent machines could improve their own capabilities faster than human computer scientists, and the outcome could be an existential catastrophe for humans.
  • Brynjolfsson, E. and McAfee, A. (2015). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W.W. Norton.*
    • This book identifies the best strategies for survival and offers a new path to prosperity in the midst of unprecedented technological and economic change. The authors’ suggestions include revamping education so that it prepares people for the next economy instead of the last one, designing new collaborations that pair brute processing power with human ingenuity, and embracing policies that make sense in a radically transformed landscape.
  • Brynjolfsson, E., Hui, X., & Liu, M. (2019). Does machine translation affect international trade? Evidence from a large digital platform. Management Science, 65(12), 5449-5460.
    • Using data from a digital platform, the authors study machine translation, finding that the introduction of a new machine translation system has significantly increased international trade on this platform, increasing exports by 10.9%. Furthermore, the study found that heterogeneous treatment effects are consistent with a substantial reduction in translation costs. The authors argue that the results of this study provide causal evidence that language barriers significantly hinder trade and that AI has already begun to improve economic efficiency in at least one domain.
  • Davis, J. B. (2005). Neoclassicism, artificial intelligence, and the marginalization of ethics. International Journal of Social Economics, 32(7), 590.
    • The paper examines the dependence of the positivist and welfarist preference satisfaction paradigm of neoclassical economics upon an implicit functionalist philosophy of mind. An important finding from this paper is that the preference satisfaction paradigm can be shown to be as suitable to artificial intelligence systems as to human beings.
  • Ernst, E., Merola, R., & Samaan, D. (2019). Economics of artificial intelligence: Implications for the future of work. IZA Journal of Labor Policy, 9(1), 7-72.
    • This paper discusses the rationales for fears of widespread job loss due to artificial intelligence, comparing this technology to previous waves of automation. The paper argues that large productivity gains can ensue, including for developing countries, given the vastly reduced capital costs that some applications have demonstrated, with particular potential for gains among the low skilled. To address the risk of increasing inequality, the paper calls for new forms of regulation for the digital economy.
  • Frey, C. B. (2019). The technology trap: Capital, labor, and power in the age of automation. Princeton University Press.
    • From the Industrial Revolution to the age of artificial intelligence, this book examines the history of technological progress and how it has radically shifted the distribution of economic and political power among society’s members. Just as the Industrial Revolution eventually brought about extraordinary benefits for society, this book argues that artificial intelligence systems have the potential to do the same. 
  • Korinek, A. (2019). The rise of artificially intelligent agents. University of Virginia.*
    • This paper develops an economic framework that describes humans and Artificially Intelligent Agents (AIA) symmetrically as goal-oriented entities that each (i) absorb scarce resources, (ii) supply their factor services to the economy, (iii) exhibit defined behavior and (iv) are subject to specified laws of motion. After introducing a resource allocation frontier that captures the distribution of resources between humans and machines, the paper describes several mechanisms that may provide AIAs with autonomous control over resources, both within and outside of our human system of property rights. The paper argues that in the limit case of an AIA-only economy, AIAs both produce and absorb large quantities of output without any role for humans, rejecting the fallacy that human demand is necessary to support economic activity.
  • Korinek, A., & Stiglitz, J. (2019). Artificial intelligence and its implications for income distribution and unemployment. In A. Agrawal, J. Gans, & A. Goldfarb (Eds.), The economics of artificial intelligence (pp. 349–390). NBER and University of Chicago Press.*
    • This paper provides an overview of economic issues associated with artificial intelligence by discussing the general conditions under which these technologies may lead to a Pareto improvement, delineating the two main channels through which inequality is affected, and providing several simple economic models to describe how policy can counter these effects. Finally, the paper describes the two main channels through which technological progress may lead to technological unemployment and speculates on how technologies that create super-human levels of intelligence may affect inequality.
  • Lembcke, T. B., Engelbrecht, N., Brendel, A. B., & Kolbe, L. (2019). To nudge or not to nudge: Ethical considerations of digital nudging based on its behavioral economics roots. In ECIS Proceedings (pp. 1-17).
    • This article summarizes the ethical considerations raised in behavioural economics in light of digital contexts. The authors discuss three important ethical considerations for digital nudges: (1) preserving individuals’ freedom of choice and autonomy, (2) transparent disclosure of nudges, and (3) individual (pro-self) and societal (pro-social) goal-oriented justification of nudging.
  • Naidu, S., Rodrik, D., and Zucman, G. (2019). Economics for inclusive prosperity: An introduction. Economists for Inclusive Prosperity. http://www.econfip.org.*
    • This article argues that political institutions in the United States favor higher income individuals over lower income individuals and ethnic majorities over ethnic minorities and describes how this is accomplished through a myriad of policies which impact who votes, allow for differential influence and access by the wealthy, structure voting districts to dilute the impacts of under-represented voters, and allow for oversized influence of pro-business owner ideas through media and membership organizations.
  • Sen, A. (1987). On ethics and economics. Blackwell Publishing.*
    • This book argues that welfare economics can be enriched by paying more explicit attention to ethics, and that modern ethical studies can also benefit from closer contact with economics. It argues further that even predictive and descriptive economics can be helped by making more room for welfare-economic considerations in the explanation of behaviour.
  • Sion, G. (2018). How artificial intelligence is transforming the economy: Will cognitively enhanced machines decrease and eliminate tasks from human workers through automation? Journal of Self-Governance and Management Economics, 6(4), 31-36.
    • This article builds its argument by drawing on data collected from AI Index, BMI Research, The Boston Consulting Group, Indeed, MIT Sloan Management Review, PwC, and Tractica. The article analyses and makes estimates regarding: the percentage of respondents who expect artificial intelligence to affect the workforce in the next five years; disruptive technologies in the healthcare industry; cumulative AI software revenue for the top 10 use cases in world markets (2016–2025, $ millions); the percentage who said the listed technologies would have a highly disruptive impact on their sector within 10 years; and the share of jobs requiring AI skills.
  • Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Knopf.*
    • This book discusses Artificial Intelligence (AI) and its impact on the future of life on Earth and beyond. The book discusses a variety of societal implications, what can be done to maximize the chances of a positive outcome, and potential futures for humanity, technology and combinations thereof.

Chapter 26. Fairness Criteria through the Lens of Directed Acyclic Graphs: A Statistical Modeling Perspective (Benjamin R. Baer, Daniel E. Gilbert, and Martin T. Wells)⬆︎

  • Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias: There’s software used across the country to predict future criminals: And it’s biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing*
    • This investigation by ProPublica revealed that reoffending risk scores created by algorithms and used in bail decisions in the United States are often unreliable and inaccurate. The investigation further found that these scores disproportionately rate Black Americans as higher risk, alleging that the algorithms used to produce the scores are racially biased.
  • Baeza-Yates, R., & Goel, S. (2019). Designing equitable algorithms for the web. In Companion Proceedings of The 2019 World Wide Web Conference (pp. 1296-1296).
    • This paper provides an introduction to fair machine learning, beginning with a general overview of algorithmic fairness and then discussing these issues specifically in the context of the Web. To illustrate the complications of current definitions of fairness, the article relies on a variety of classical and modern ideas from statistics, economics, and legal theory. The article discusses the equity of machine learning algorithms in the specific context of the Web, exposing different sources of bias and how they impact fairness, including not only data bias but also biases produced by data sampling, the algorithms per se, user interaction, and feedback loops that result from user personalization and content creation.
  • Bareinboim, E., Tian, J., & Pearl, J. (2014). Recovering from selection bias in causal and statistical inference. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (pp. 2410-2416).
    • This paper provides complete graphical and algorithmic conditions for recovering conditional probabilities from selection-biased data. The paper also provides graphical conditions for recoverability when unbiased data is available over a subset of the variables. Finally, the paper provides a graphical condition that generalizes the backdoor criterion and serves to recover causal effects when the data is collected under preferential selection (the standard backdoor adjustment is sketched below for orientation).
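    For background, the backdoor criterion that this paper generalizes licenses the standard adjustment formula of the do-calculus. The rendering below uses the usual notation and is offered as orientation for this and the other causal-fairness entries in this chapter, not as a formula reproduced from the paper.

    ```latex
    % If Z satisfies the backdoor criterion relative to (X, Y) in a causal DAG
    % (no member of Z is a descendant of X, and Z blocks every backdoor path
    % from X into Y), then the causal effect of X on Y is identifiable from
    % observational data by adjusting for Z:
    P(y \mid \mathrm{do}(x)) \;=\; \sum_{z} P(y \mid x, z)\, P(z)
    ```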
  • Barocas, S., Hardt, M., & Narayanan, A. (2018). Fairness and machine learning. http://www.fairmlbook.org*
    • This online textbook reviews the practice of machine learning, highlighting ethical challenges and presenting approaches to mitigate them. Specifically, the book focuses on the issue of fairness considering both technical interventions and deeper questions concerning power and accountability in machine learning.
  • Bellamy, R. K., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., & Nagar, S. (2018). AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv preprint arXiv:1810.01943.
    • This paper introduces a new open-source Python toolkit for algorithmic fairness, AI Fairness 360 (AIF360), released under an Apache v2.0 license. The main objectives of this toolkit are to help facilitate the transition of fairness research algorithms into industrial settings and to provide a common framework for fairness researchers to share and evaluate algorithms. A minimal usage sketch follows this entry.
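    A minimal sketch of the toolkit’s basic workflow, using a toy pandas DataFrame rather than one of the datasets bundled with AIF360; the column names, group definitions, and numbers below are illustrative assumptions.

    ```python
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    # Toy data: 'sex' is the protected attribute, 'hired' the binary label.
    df = pd.DataFrame({
        'sex':   [0, 0, 0, 0, 1, 1, 1, 1],
        'score': [0.2, 0.4, 0.6, 0.8, 0.3, 0.5, 0.7, 0.9],
        'hired': [0, 0, 1, 0, 0, 1, 1, 1],
    })
    data = BinaryLabelDataset(df=df, label_names=['hired'],
                              protected_attribute_names=['sex'])

    metric = BinaryLabelDatasetMetric(data,
                                      unprivileged_groups=[{'sex': 0}],
                                      privileged_groups=[{'sex': 1}])

    # Favorable-outcome rate of the unprivileged group minus that of the
    # privileged group (0.25 - 0.75 = -0.5), and their ratio (~0.33).
    print(metric.statistical_parity_difference())
    print(metric.disparate_impact())
    ```

    The toolkit’s mitigation algorithms consume and return this same dataset abstraction, which is what allows the pre-, in-, and post-processing methods it collects to be compared on a common footing.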
  • Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153-163. https://doi.org/10.1089/big.2016.0047*
    • This paper discusses a fairness criterion originating in the field of educational and psychological testing that has recently been applied to assess the fairness of recidivism prediction instruments. The author demonstrates how adherence to the criterion may lead to considerable disparate impact when recidivism prevalence differs across groups (see the worked example below).
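    The arithmetic behind this finding can be made explicit. For a binary risk tool, the paper derives a constraint linking a group’s prevalence p, the tool’s positive predictive value (PPV), and its false positive and false negative rates (FPR, FNR); written in that standard notation:

    ```latex
    \mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\left(1-\mathrm{FNR}\right)
    ```

    For instance, holding PPV = 0.8 and FNR = 0.5 fixed in both groups, a group with prevalence 0.5 must have FPR = 1 × 0.25 × 0.5 = 0.125, while a group with prevalence 0.2 must have FPR = 0.25 × 0.25 × 0.5 ≈ 0.031; unequal false positive rates, and hence disparate impact, follow directly from unequal base rates.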
  • Corbett-Davies, S., & Goel, S. (2018). The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv:1808.00023.*
    • This paper argues that three prominent definitions of fairness used in machine learning (anti-classification, classification parity, and calibration) each have significant statistical issues. In contrast to these strategies, the authors argue that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce.
  • Dwork, C., Ilvento, C., Rothblum, G. N., & Sur, P. (2020). Abstracting fairness: Oracles, metrics, and interpretability. arXiv preprint arXiv:2004.01840.
    • This paper examines what can be learned from a fairness oracle equipped with an underlying understanding of “true” fairness. The results have implications for interpretability—a highly desired but poorly defined property of classification systems that endeavors to permit a human arbiter to reject classifiers deemed to be “unfair” or illegitimately derived.
  • Flores, A. W., Bechtel, K., & Lowenkamp, C. T. (2016). False positives, false negatives, and false analyses: A rejoinder to “Machine bias: There’s software used across the country to predict future criminals. and it’s biased against blacks”. Federal Probation80(2), 38-46. https://heinonline.org/HOL/P?h=hein.journals/fedpro80&i=116*
    • This article argues that a ProPublica report exposing racial bias in COMPAS, a risk assessment tool used in the criminal justice system, was based on faulty statistics and data analysis. The authors provide their own analysis of the data used in the ProPublica piece to argue that the COMPAS tool is not racially biased.
  • Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems 29 (pp. 3315–3323).*
    • This article proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, the paper shows how to optimally adjust any learned predictor so as to remove discrimination according to the authors’ definition. The authors argue that this framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving classification accuracy. A minimal sketch of the criterion follows this entry.
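    A minimal sketch, on synthetic data, of what the equal-opportunity criterion asks of a thresholded score: equal true positive rates across the protected groups. The data-generating choices below are illustrative assumptions, not the authors’ experimental setup.

    ```python
    import numpy as np

    # Synthetic scores, labels, and a binary protected attribute.
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)                 # protected attribute
    y = rng.binomial(1, np.where(group == 1, 0.5, 0.3))   # unequal base rates
    score = np.clip(0.6 * y + rng.normal(0.2, 0.2, size=1000), 0.0, 1.0)

    def tpr(threshold, g):
        """True positive rate of the rule (score >= threshold) within group g."""
        positives = (group == g) & (y == 1)
        return float(np.mean(score[positives] >= threshold))

    # Equal opportunity holds (approximately) when these rates coincide;
    # Hardt et al. show any predictor can be post-processed, for example
    # via group-specific thresholds, so that they do.
    print('TPR gap at threshold 0.5:', abs(tpr(0.5, 0) - tpr(0.5, 1)))
    ```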
  • Herington, J. (2020). Measuring fairness in an unfair world. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 286-292).
    • This paper argues that the three most popular families of measures (unconditional independence, target-conditional independence, and classification-conditional independence) make assumptions that are unsustainable in the context of an unjust world. The paper argues that the implicit idealizations in these measures fall apart in the context of historical injustice, ongoing unmodeled oppression, and the permissibility of using sensitive attributes to rectify injustice. The paper puts forward an alternative framework for measuring fairness in the context of existing injustice: distributive fairness.
  • Holmes, N. (2003). Artificial intelligence: arrogance or ignorance? Computer, 36(11), 120-119. https://doi.org/10.1109/MC.2003.1244544
    • This paper argues for the term “algoristics” as a highly suitable replacement for artificial intelligence, arguing that it is more historically correct. The author argues that placing this renamed field alongside statistics and logistics, as a branch of mathematics, would benefit the computing profession greatly.
  • Kilbertus, N., et al. (2017). Avoiding discrimination through causal reasoning. In Advances in Neural Information Processing Systems, 30, 656–666.*
    • Going beyond observational criteria, this article frames the problem of discrimination based on protected attributes in the language of causal reasoning. Through the lens of causality, the article articulates why and when observational criteria fail, exposes previously ignored subtleties and why they are fundamental to the problem, puts forward natural causal non-discrimination criteria, and develops algorithms that satisfy them.
  • Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807.
    • This paper formalizes three fairness conditions that lie at the heart of recent debates, and argues that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. The paper’s results suggest some of the ways in which key notions of fairness are incompatible with each other and provide a framework for thinking about the trade-offs between them.
  • Liu, L. T., Dean, S., Rolf, E., Simchowitz, M., & Hardt, M. (2018). Delayed impact of fair machine learning. arXiv preprint arXiv:1803.04383.
    • This article presents a study of how static fairness criteria interact with temporal indicators of well-being, such as long-term improvement, stagnation, and decline in a variable of interest. The results highlight the importance of measurement and temporal modeling in the evaluation of fairness criteria, suggesting a range of new challenges and trade-offs.
  • Mitchell, S., Potash, E., & Barocas, S. (2018). Prediction-based decisions and fairness: A catalogue of choices, assumptions, and definitions. arXiv:1811.07867.
    • This paper explicates the various choices and assumptions made—often implicitly—to justify the use of prediction-based decisions. The paper demonstrates how such choices and assumptions can raise concerns about fairness and presents a notationally consistent catalogue of fairness definitions from the ML literature. The paper offers a concise reference for thinking through the choices, assumptions, and fairness considerations of prediction-based decision systems.
  • Overdorf, R., Kulynych, B., Balsa, E., Troncoso, C., & Gürses, S. (2018). Questioning the assumptions behind fairness solutions. arXiv preprint arXiv:1811.11293.
    • This paper revisits assumptions made about the service providers in fairness solutions. Namely, that service providers have (i) the incentives or (ii) the means to mitigate optimization externalities. Moreover, the paper argues that the environmental impact of these systems suggests that we need (iii) novel frameworks that consider systems other than algorithmic decision-making and recommender systems, and (iv) solutions that go beyond removing related algorithmic biases. Going forward, the authors propose Protective Optimization Technologies that enable optimization subjects to defend against negative consequences of optimization systems.
  • Pleiss, G., et al. (2017). On fairness and calibration. In Advances in Neural Information Processing Systems 30 (pp. 5680–5689).
    • This paper investigates the tension between minimizing error disparity across different population groups and maintaining calibrated probability estimates. The authors show that calibration is compatible with only a single error constraint (i.e., equal false-negative rates across groups) and that any algorithm satisfying this relaxation is no better than randomizing a percentage of predictions for an existing classifier. These findings, which extend and generalize existing results, are empirically confirmed on several datasets.
  • Rzepka, R., & Araki, K. (2005). What statistics could do for ethics? The idea of common sense processing based safety valve. In AAAI Fall Symposium on Machine Ethics, Technical Report FS-05-06 (pp. 85-87).
    • This paper introduces an approach to the ethical issues of machine intelligence developed through experiments with automatic common-sense retrieval and affective computing for open-domain talking systems. The authors use automatic common-sense knowledge retrieval, which allows the system to calculate the common consequences of actions and the average emotional load of those consequences.
  • Zhang, J., & Bareinboim, E. (2018). Fairness in decision-making—the causal explanation formula. In Thirty-Second AAAI Conference on Artificial Intelligence.
    • This paper introduces three new fine-grained measures of the transmission of change from stimulus to effect, which the authors call counterfactual direct (Ctf-DE), indirect (Ctf-IE), and spurious (Ctf-SE) effects. The authors apply these measures to various discrimination analysis tasks and run extensive simulations, including detection, evaluation, and optimization of decision-making under fairness constraints. The paper concludes by studying the trade-off between different types of fairness criteria (outcome and procedural) and provides a quantitative approach to policy implementation and the design of fair AI systems.

Chapter 27. Automating Origination: Perspectives from the Humanities (Avery Slater)⬆︎

  • Andersson, A. E. (2009). Economics of creativity. In Karlsson, C., Cheshire, P. & Andersson, A. E. (Eds.), New directions in regional economic development (pp. 79-95). Springer.
    • This paper explores the historical effects of the division-of-labor system posited by Adam Smith and the recent rise of creativity that runs against this system. The author argues that as specialization progressed, people were confined to a few very simple operations, which should have limited creativity. In recent times, however, there has been growth in creative industries such as research and development, scientific research, and the arts.
  • Ariza, C. (2009). The interrogator as critic: The Turing test and the evaluation of generative music systems. Computer Music Journal, 33(2), 48-70.
    • This article explores the relationship between algorithmically generated music systems and the human ability to detect their generated nature. The author argues that listening tests designed to detect this distinction do not constitute true Turing Tests.
  • Boden, M. A. (1990). The creative mind: Myths and mechanisms. Weidenfeld; Abacus & Basic Books.*
    • This book explores human creativity and presents a scientific framework for understanding how creativity arose and how it is defined.
  • Boden, M. (Ed.). (1994). Dimensions of Creativity. M.I.T. Press.*
    • In this book, the authors explore how creative ideas arise, and whether creativity can be objectively defined and measured.
  • Cardoso, A., & Bento, C. (Eds.). (2006). Computational creativity [Special issue]. Journal of Knowledge-Based Systems, 19(7).*
    • This special issue is focused on characterizing and establishing computational models of creativity. The papers encompass four topics: models of creativity, analogy and metaphor in creative systems, multiagent systems and formal approaches to creativity.
  • Clancey, W. J. (1997). Situated cognition: On human knowledge and computer representations. Cambridge University Press.
    • This book explores and explains the new ‘situated cognition’ movement in cognitive science. This is a new metaphysics of mind; a dynamical-systems-based, ecologically oriented model of the mind. Researchers suggest that a full understanding of the mind will require systematic study of the dynamics of interaction among mind, body, and world.
  • Colton, S., & Wiggins, G. A. (2012). Computational creativity: The final frontier? In ECAI 2012 (pp. 21-26). https://doi.org/10.3233/978-1-61499-098-7-21
    • This paper argues that computational creativity constitutes a frontier for AI research beyond all others. The authors make this case through an exploration of the field of computational creativity via a working definition; a brief history of seminal work; an exploration of the main issues, technologies, and ideas; and a look towards future directions.
  • Dodgson, M., Gann, D., & Salter, A. J. (2005). Think, play, do: Technology, innovation, and organization. Oxford University Press.
    • In this book, the authors argue that the innovation process is changing profoundly, partly due to innovation technologies. In response, the authors propose a new schema for the innovation process: Think, Play, Do.
  • Edwards, S. M. (2001). The technology paradox: Efficiency versus creativity. Creativity Research Journal, 13(2), 221-228.
    • This article aims to highlight the impact of technology on individuals’ ability to be creative within society. First, the author reviews the barriers that individuals must overcome to function creatively in the information age, along with the process by which creativity occurs. These factors are then presented alongside the consequences of technological and computational development. Finally, the author offers suggestions on the coexistence of creativity and technology in the future.
  • Jordanous, A. (2012). A standardised procedure for evaluating creative systems: Computational creativity evaluation based on what it is to be creative. Cognitive Computation, 4(3), 246-279.
    • This paper aims to address the issue of defining what it means for a computer to be creative; given that there is no consensus on this for human creativity, its computational equivalent is equally nebulous. The paper therefore proposes a Standardised Procedure for Evaluating Creative Systems (SPECS) to measure and define computational creativity. The SPECS methodology is then demonstrated through a comparative case study evaluating computational creativity systems that improvise music.
  • Langley, P., Simon, H., Bradshaw, G. L., and Zytkow, J. (eds.) (1986). Scientific discovery: Computational explorations of the creative process. MIT Press.*
    • Scientific Discovery examines the nature of scientific research and reviews the arguments for and against a normative theory of discovery. This examination is done in the context of a series of artificial-intelligence programs developed by the authors that can simulate the human thought processes used to discover scientific laws.
  • McCorduck, P. (1991). Aaron’s code: Meta-art, artificial intelligence, and the work of Harold Cohen. W.H. Freeman and Company.*
    • This book examines the connection between art and computer technology. This is done through an exploration of the work of the artist Harold Cohen, who created an elaborate computer program that makes drawings autonomously, without human intervention.
  • Montal, T., & Reich, Z. (2017). I, robot. You, journalist. Who is the author? Authorship, bylines and full disclosure in automated journalism. Digital Journalism, 5(7), 829-849.
    • This paper explores the increasing reliance on algorithms to generate news automatically, particularly in the form of algorithmic authorship. The use of this technology has potential psychological, legal, and occupational implications for news organizations, journalists, and their audiences. The authors argue for a consistent and comprehensive crediting policy that serves the public interest in automated news.
  • Norman, D. (2014). Things that make us smart: Defending human attributes in the age of the machine. Diversion Books.
    • In this book, Norman argues in favor of a person-centered redesign of the machines that surround our lives. The book explores the complex interaction between human thought and the technology it creates. The author argues that the machines we create begin to shape how we think and, at times, even what we value, and thus argues in favor of redevelopment of machines that fit our minds, rather than minds that must conform to the machine.
  • Partridge, D., & Rowe, J. (1994). Computers and creativity. Intellect Books.*
    • From a computational modelling perspective, this book examines theories and models of the creative process in humans. It does so through an exploration of both input creativity (the analytic interpretation of input information) and output creativity (the artistic, synthetic process of generating novel innovations).
  • Paul, E. S., & Kaufman, S. B. (Eds.). (2014). The philosophy of creativity: New essays. Oxford University Press.*
    • In this book, the editors argue that creativity should be explored in connection with, and in the context of, philosophy. The aim is to illustrate the value of interdisciplinary exchange and explore issues such as the role of consciousness in the creative process, whether great works of literature give us insight into human nature, whether a computer program can really be creative, and the definition of creativity.
  • Schmidhuber, J. (1997). Low-complexity art. Leonardo, Journal of the International Society for the Arts, Sciences, and Technology, 30(2), 97-103.*
    • This article explores the relation between the depiction of the general essence of objects, viewed as the computer-age equivalent of minimal art, and informal notions such as “good artistic style” and “beauty.” In an attempt to formalize certain aspects of depicting the essence of objects, the author proposes and analyzes the art form he refers to as low-complexity art.
  • Sternberg, R. J., & Lubart, T. I. (1995). Defying the crowd: Cultivating creativity in a culture of conformity. Free Press.
    • This book examines how institutions such as business and education often impede the creative process, and how the creative person typically finds ways to subvert those institutions to promote his or her ideas. By presenting a theory of how institutions can learn to foster creativity, Sternberg and Lubart explore how persons can learn to become more creative.
  • Varshney, L. R., Pinel, F., Varshney, K. R., Schörgendorfer, A., & Chee, Y. M. (2013). Cognition as a part of computational creativity. In IEEE 12th International Conference on Cognitive Informatics and Cognitive Computing (pp. 36-43).
    • This paper examines the relationship between two distinct fields that have developed in a parallel fashion: computational creativity and cognitive computing. The authors argue that the two fields overlap in one precise way: the evaluation or assessment of artifacts with respect to creativity.
  • Veale, T., Gervás, P., & Pease, A. (2006). Computational creativity [special issue]. New Generation Computing, 24(3).*
    • A pure definition of creativity – “pure,” at least, in the sense of being metaphor-free and grounded in objective fact – remains elusive, made all the more vexing by our fundamental inability to pin the phenomenon down in formal terms. In this special issue, the contributing authors present their respective definitions of creativity.
  • Wiggins, G. A. (2006). A preliminary framework for description, analysis and comparison of creative systems. Knowledge-Based Systems, 19(7), 449-458.*
    • This article summarizes and explores concepts presented in and arising from Margaret Boden’s (1990) descriptive hierarchy of creativity. By formalizing the ideas Boden proposes, the author argues that Boden’s framework is more uniform and more powerful than it first appears. Finally, the paper explores potential routes to a model which allows detailed comparison, and hence better understanding, of systems which exhibit behavior that would be called “creative” in humans.

Chapter 28. Perspectives on Ethics of AI: Philosophy (David J. Gunkel)⬆︎

  • Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine ethics. Cambridge University Press.*
    • Machine Ethics is a collection of essays by philosophers and artificial intelligence researchers on the new field of machine ethics, which is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making. These essays address aspects of adding an ethical dimension to machines that function autonomously, including why it is necessary to do so, what is required to do it, and various approaches that could be considered.
  • Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish & W. Ramsey (Eds.), The Cambridge Handbook of Artificial Intelligence (pp. 316-334). Cambridge University Press.
    • This chapter in the Cambridge Handbook of Artificial Intelligence surveys some of the ethical challenges that may arise from the creation of thinking machines. These questions concern both ensuring that such machines do not harm humans and other morally relevant beings and addressing the moral status of the machines themselves. Topics discussed include: issues arising in the near future of AI; challenges for ensuring that AI operates safely as it approaches humans in its intelligence; how to assess whether, and in what circumstances, AIs themselves have moral status; how AIs might differ from humans when assessed ethically; and the issues of creating AIs more intelligent than humans and ensuring that they use their advanced intelligence for good rather than ill.
  • Brooks, R. A. (2003). Flesh and machines: How robots will change us. Vintage.
    • Rodney A. Brooks, director of the MIT Artificial Intelligence Laboratory, believes researchers are close to creating robots that can think, feel, repair themselves and even reproduce.  In this book, Brooks outlines the history of robots, investigates the ever-changing relationships between humans and robots, and explores the growing role that robots will play in human society. 
  • Coeckelbergh, M. (2012). Growing moral relations: Critique of moral status ascription. Palgrave Macmillan.*
    • Growing Moral Relations offers an original philosophical approach to the issue of new scientific and technological developments that challenge us to reconsider our moral world order. This book makes a distinctive contribution to the development of a relational approach to moral status by re-defining the problem in a social and phenomenological way.
  • Dennett, D. C. (2017). Brainstorms: Philosophical essays on mind and psychology. MIT Press.*
    • Brainstorms is a collection of essays within the interdisciplinary field of cognitive science. Dennett offers a comprehensive theory of mind, encompassing traditional issues of consciousness and free will. Using careful arguments and ingenious thought experiments, Dennett exposes familiar preconceptions and hobbling intuitions.
  • Feenberg, A. (1991). Critical theory of technology. Oxford University Press.
    • The theme of this book is summarized in the first line of the preface: “Must human beings submit to the harsh logic of machinery, or can technology be redesigned to better serve its creators?” The work represents democratic socialist philosophy, with frequent allusions to such authors as Habermas, Foucault, Lukacs, Marcuse, Hegel, and Marx. It surveys such concepts as alienation, ambivalence, instrumentalization, civilization change, capitalist hegemony, workers’ control, and other aspects of critical theory.
  • Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics. MIT Press.*
    • The Machine Question is an investigation into the assignment of moral responsibilities and rights to intelligent and autonomous machines of our own making. The book asks whether and to what extent such machines can be considered to have legitimate moral responsibilities and any legitimate claim to moral consideration.
  • Hall, J. S. (2001). Ethics for machines. KurzweilAI.net. http://www.kurzweilai.net/ethics-for-machines
    • What are the ethical responsibilities of an intelligent being toward another one of a lower order? And who will be lower—us or machines? Nanotechnologist J. Storrs Hall considers our moral duties to machines, and theirs to us.  He asks and answers the following questions:  What are machines, anyway?  Why do machines need ethics?  What is ethics, anyway? What are the normative implications of applying different ethical theories? And, what is the road ahead?
  • Heidegger, M. (1977). The question concerning technology (W. Lovitt, Trans.). Harper & Row.
    • The Question Concerning Technology is Heidegger’s analysis of technology. Specifically, Heidegger discusses the essence of technology and the relationship between the human being (Dasein) and technology. He investigates the question of how we generally think about technology and offers two answers: technology is a means to an end and technology is a human activity.
  • Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8(4), 195-204.
    • This essay discusses the distinction between artifacts and natural entities, and the distinction between artifacts and technology. The conditions of the traditional account of moral agency are also identified. While computer systems do not have mental states, they are intentionally created and deployed, and are therefore components in human moral action. Three components – artifact designer, artifact, and artifact user – are thus at work whenever there is an action, and all three should be the focus of moral evaluation.
  • Lin, P., Abney, K., & Bekey, G. A. (Eds.). (2012). Robot ethics: The ethical and social implications of robotics. MIT Press.*
    • In Robot Ethics, prominent experts from science and the humanities explore issues and questions in robot ethics that range from sex to war such as: Should robots be programmed to follow a code of ethics, if this is even possible? Are there risks in forming emotional bonds with robots? How might society—and ethics—change with robotics?
  • Reeves, B., & Nass, C. I. (1996). The media equation: How people treat computers, television, and new media like real people and places. Cambridge University Press.*
    • The Media Equation presents the results of numerous psychological studies that have led to the conclusion that people treat computers, TV, and new media as real people and places. One conclusion of these studies is that the human brain has not evolved quickly enough to assimilate 20th-century technology. This book details how this knowledge can help us better design and evaluate media technologies, including computer and Internet software, TV entertainment, news, advertising, and multimedia.
  • Searle, J. R. (1984). Minds, brains, and science. Harvard University Press.*
    • Minds, Brains and Science takes up the problem of how to reconcile common sense and science. Searle argues that the truths of common sense and the truths of science are both right and that the only question is how to fit them together. Searle explains how we can reconcile an intuitive view of ourselves as conscious, free, rational agents with a universe that science tells us consists of mindless physical particles.
  • Scalable Cooperation at MIT Media Lab. (n.d.). Moral machine. http://moralmachine.mit.edu/
    • Moral Machine is a platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars. It shows the user moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, the user judges which outcome he/she thinks is more acceptable. The user can then see how his/her responses compare with those of other people.
  • Turner, J. (2018). Robot rules: Regulating artificial intelligence. Palgrave Macmillan.*
    • Robot Rules argues that AI is unlike any previous technology, owing to its ability to take decisions independently and unpredictably. This gives rise to three issues: responsibility—who is liable if AI causes harm; rights—the disputed moral and pragmatic grounds for granting AI legal personality; and the ethics surrounding the decision-making of AI. The book suggests that in order to address these questions we need to develop new institutions and regulations on a cross-industry and international level.
  • Tzafestas, S. G. (2016). Roboethics: A navigating overview. Springer.*
    • Roboethics explores the ethical questions that arise in the development, creation and use of robots that are capable of semiautonomous or autonomous decision making and human-like action. This book examines how ethical and moral theories can and must be applied to address the complex and critical issues of the application of these intelligent robots in society (such as medical, assistive, socialized and war robots). It provides a thorough investigation into the moral responsibility (if any) of autonomous robots when doing harm.  
  • University of Oxford Podcasts. (n.d.). Ethics in AI. https://podcasts.ox.ac.uk/series/ethics-ai
    • The University of Oxford’s Institute for Ethics in AI explores ethical questions in AI in an interdisciplinary way. Questions discussed concern privacy, information security, appropriate rules of automated behavior, algorithmic bias, transparency, and the wider impacts of AI on society, including massive disruptions to employment and transport and the role of big data in life-changing financial, legal, or medical decisions.
  • Walch, K. (2019). Ethical concerns of AI. Forbes. https://www.forbes.com/sites/cognitiveworld/2020/12/29/ethical-concerns-of-ai/#719360d223a8
    • Ethical Concerns of AI discusses the shift from thinking purely about the functional capabilities of AI to the ethics behind creating such powerful and potentially life-consequential technologies.  Topics discussed include: whether AI will replace human workers; whether AI will make the rise of fake media and disinformation worse; whether we want evil people to have easy access to AI technology; whether AI is our new Big Brother; whether intelligent machines will have rights; and the need to create transparency in AI decision-making.   
  • Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford University Press.*
    • Moral Machines argues that artificial agents such as robots and software bots, which increasingly populate the human-built environment and are capable of acting autonomously, must become capable of factoring ethical and moral considerations into their decision making. Developing artificial moral agents will require engineers to explore design strategies for systems that are sensitive to moral considerations, and their design choices will determine what role ethical theory plays in defining control architectures for such systems.
  • Wallach, W., & Asaro, P. (Eds.). (2017). Machine ethics and robot ethics. Routledge.*
    • Machine Ethics and Robot Ethics addresses the ethical challenges posed by the rapid development and widespread everyday use of advancing technologies such as artificial intelligence, robotics, and machine learning. This collection of essays focuses on the control and governance of computational systems; the exploration of ethical and moral theories using software and robots as laboratories or simulations; the inquiry into the necessary requirements for moral agency and the basis and boundaries of rights; and questions of how best to design systems that are both useful and morally sound. Collectively, the essays ask what practical ethical and legal issues will arise from the development of robots over the next twenty years and how best to address them.
  • Wiener, N. (1988). The human use of human beings: Cybernetics and society. Da Capo Press.*
    • The Human Use of Human Beings examines the implications of cybernetics, the study of the relationship between computers and the human nervous system, for education, law, language, science, and technology. The book outlines Wiener’s complex vision, in which machines would release people from relentless and repetitive drudgery in order to achieve more creative pursuits. It also records his realization of the dangers of dehumanization and displacement posed by this vision.

Chapter 29. The Complexity of Otherness: Anthropological Contributions to Robots and AI (Kathleen Richardson)⬆︎

  • Appadurai, A. (1986). Introduction: Commodities and the politics of value. In A. Appadurai (Ed.), The social life of things: Commodities in cultural perspective. Cambridge University Press.
    • This book chapter argues that anthropologists should study ‘things’: instead of assuming that humans assign significance to things, anthropologists should consider how things take shape, acquire value, and move through space. The movement of things and commodities across different contexts sheds light on the social context they inhabit. 
  • Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.
    • Automation has the potential to deepen discrimination while appearing neutral and even benevolent when compared to the racism of a previous era. This book presents the concept of the “New Jim Code”: a range of discriminatory designs that encode inequity by explicitly amplifying racial hierarchies; by ignoring but thereby replicating social divisions; or by aiming to fix racial bias but ultimately doing quite the opposite. The book makes a case for race itself as a kind of technology, designed to sanctify social injustice in the architecture of everyday life.
  • Boellstorff, T. (2008). Coming of age in second life: An anthropologist explores the virtually human. Princeton University Press.
    • One of the most famous digital ethnographies, this book shows how virtual worlds can change ideas about identity and society. Based on two years of fieldwork in Second Life, living among and observing its residents just as anthropologists have traditionally done to learn about cultures in the real world, this ethnography shows how anthropological methods can be applied to virtual sociality.
  • Cave, S. (2019). Intelligence as ideology: Its history and future [Keynote Lecture]. Centre for Science and Policy Annual Conference. http://www.csap.cam.ac.uk/media/uploads/files/1/csap-conference-2019-stephen-cave-presentation.pdf
    • This keynote lecture problematizes the concept of intelligence, showing how it is not only impossible to reliably measure but also – as the measure of what it means to be human – became associated with evolutionary paradigms, colonial rule, and the ‘survival of the fittest.’ Intelligence, importantly, works to justify elite domination over others: the poor, women, people with disabilities, and so on.
  • Dourish, P. (2016). Algorithms and their others: Algorithmic culture in context. Big Data & Society, 3(2). https://doi.org/10.1177%2F2053951716665128
    • Using Niklaus Wirth’s 1975 formulation that “algorithms + data structures = programs” as a jumping-off point, this paper examines how an algorithmic lens shapes the way in which we might inquire into contemporary digital culture. Algorithms, once obscure objects of technical art, are integral to artificial intelligence today. The paper explores what it means to adopt the algorithm as an object of analytic attention, and what such a lens reveals.
  • Forsythe, D. (2002). Studying those who study us: An anthropologist in the world of artificial intelligence. Stanford University Press.
    • This essay collection presents an anthropological study of artificial intelligence and informatics, asking how expert systems designers imagine users and, in turn, how humans interact with computers. It analyzes the laboratory as a fictive kin group that reproduces gender asymmetries, offering a reflexive ethnographic perspective on the cultural mechanisms that support the persistent male domination of engineering.
  • Geertz, C. (1973). Thick description: Toward an interpretative theory of culture. In The interpretation of cultures: Selected essays (pp. 3–32). Basic Books.
    • This essay articulates the central method of interpretative anthropology, explaining how ethnographers write and think about cultural situations. Contrasting ‘thick’ description – which includes cultural background and layered meanings – with ‘thin’ or merely factual accounts, Geertz shows how ethnographers bring in context to explain how behavior becomes meaningful.
  • Haraway, D. (1991). A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In Simians, cyborgs and women: The reinvention of nature (pp. 149–181). Routledge.
    • This essay articulates a feminist theory of the cyborg: a half-human, half-machine hybrid. The figure of the cyborg dissolves the boundaries between nature and artifice, animal and human, and physical and non-physical – Haraway takes this up as an opportunity for feminists to think beyond the duality of identity politics and form new political alliances.
  • Helmreich, S. (2000). Silicon second nature: Culturing artificial life in a digital world. University of California Press.
    • This book presents an ethnographic study of the people and programs connected with an unusual hybrid of computer science and biology. Through detailed dissections of artifacts in the context of artificial life research, Helmreich shows that the scientists working in this field see themselves as masculine gods of their cyberspace creations, bringing longstanding mythological and religious tropes concerning gender, kinship, and race into their research domain.
  • Hicks, M. (2017). Programmed inequality: How Britain discarded women technologists and lost its edge in computing. MIT Press.
    • Drawing on government files, personal interviews, and the archives of major British computer companies, this book exposes the myth of technical meritocracy by tracing how computer labor was masculinized between the 1940s and today. Women were central to the growth of high technology from World War II to the 1960s, when computing experienced a gender flip; this development caused a labor shortage and severely impeded both the growth of the British computer industry and the success of the nation as a whole.
  • Kelty, C. (2005). Geeks, social imaginaries, and recursive publics. Cultural Anthropology, 20(2), 185–214.
    • Based on fieldwork conducted in three countries, this article argues that the mode of association specific to “geeks” (hackers, lawyers, activists, and IT entrepreneurs) on the Internet is that of a “recursive public sphere” that is constituted by a shared imaginary of the technical and legal conditions of possibility for their own association. Geeks imagine their social existence and relations as much through technical practices (hacking, networking, and code writing) as through discursive argument (rights, identities, and relations), rendering the “right to tinker” with software a form of free speech.
  • Latour, B. (1993). We have never been modern (C. Porter, Trans.). Harvard University Press.
    • This philosophical text defines modernity in terms of the separation between nature and society, human and thing, reality and artifice. Latour shows that this separation is theoretically powerful in science but does not play out in practice: an anthropological look at scientific practice reveals that everything is always already hybrid – reality and artifice cannot be separated. This book argues that the hybridity of nature and culture is central to the success of technoscientific practices.
  • Miller, D., & Horst, H. (2012). The digital and the human: A prospectus for digital anthropology. In H. Horst & D. Miller (Eds.), Digital Anthropology (pp. 3–38). Bloomsbury Publishing.
    • This chapter articulates a vision for digital anthropology, defining anthropology as a discipline occupied with understanding what it is to be human and how humanity manifests differently across cultures, and the digital as everything that can be reduced to binary code. Miller and Horst argue for ethnographic work that emphasizes the continuity between the digital and the non-digital, the materiality of the digital, and the ultimately deeply local cultural ways in which technologies are received.
  • Noble, S. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
    • This book uses algorithmic search engines to show how data discrimination works. The combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color and especially Black women.
  • Richardson, K. (2015). An anthropology of robots and AI: Annihilation anxiety and machines. Routledge.
    • This ethnography of robot-making in labs at the Massachusetts Institute of Technology (MIT) examines the cultural ideas that go into the making of robots, and the role of fiction in co-constructing the technological practices of the robotic scientists. The book charts the move away from the “worker” robot of the 1920s to the “social” one of the 2000s, using anthropological theories to describe how robots are reimagined as companions, friends and therapeutic agents.
  • Robertson, J. (2017). Robo sapiens Japanicus: Robots, gender, family, and the Japanese nation. University of California Press.
    • An ethnography and sociocultural history of governmental and academic discourse of human-robot relations in Japan, this book explores how actual robots – humanoids, androids, and animaloids – are “imagineered” in ways that reinforce the conventional sex/gender system, the political-economic status quo, and a conception of the “normal” body. Asking whether “civil rights” should be granted to robots, Robertson interrogates the notion of human exceptionalism.
  • Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2). https://doi.org/10.1177%2F2053951717738104
    • This article articulates how algorithms might be approached ethnographically: as heterogeneous and diffuse sociotechnical systems, rather than rigidly constrained and procedural formulas. This involves thinking of algorithms not “in” culture, but “as” culture: part of broad patterns of meaning and practice that can be engaged with empirically. Practical tactics for the ethnographer then do not depend on pinning down a singular “algorithm” or achieving “access,” but rather work from the partial and mobile position of an outsider.
  • Suchman, L. (2007). Human-machine reconfigurations: Plans and situated actions. Cambridge University Press.
    • This book shows that debates over the status of human-like machines – whether they are ‘alive’ or not, different from the human or not – are improved when the question shifts to how humans and machines are enacted as similar or different in practice, and with what consequences. Calling for a move away from essentialist divides, this book argues for research aimed at tracing the differences within specific sociomaterial arrangements.

Chapter 30. Calculative Composition: The Ethics of Automating Design (Shannon Mattern)⬆︎

  • Bratton, B. (2015). Lecture on A.I. and cities: Platform design, algorithmic perception, and urban geopolitics. Benno Premsela Lecture Series. https://bennopremselalezing2015.hetnieuweinstituut.nl/en/lecture-ai-and-cities-platform-design-algorithmic-perception-and-urban-geopolitics.*
    • Bratton argues that the project of creating smart cities will be futile in its attempt to create futuristic living conditions for humans; such cities will instead become habitats for future insects. This thesis is illustrated in part by the example of the failed Sanzhi Pod City in Taipei, which was overtaken by several species of orchid mantis.
  • Carpo, M. (2017). The second digital turn: Design beyond intelligence. MIT Press. *
    • In this book, Carpo argues that the tools of the first digital turn in architecture, which promoted significant stylistic developments such as the use of curving lines and surfaces, have now given rise to a second digital turn that changes the way designers develop ideas. Machine learning has been employed to create extremely complex designs that humans could not conceive of themselves.
  • Carta, S. (2019). Big data, code and the discrete city: Shaping public realms. Routledge.
    • This book provides an overview of the impact of digital technologies on public space and on the actors involved in designing it, from policymakers to individual citizens.
  • de Waal, M., & Dignum, M. (2017). The citizen in the smart city. How the smart city could transform citizenship. it - Information Technology, 59(6), 263-273. https://doi.org/10.1515/itit-2017-0012
    • This article examines the relationship between smart cities and citizenship, introducing three potential smart city visions. First, the Control Room imagines the city as a collection of infrastructures and services. Second, the Creative City focuses on local and regional innovation. Third, the Smart Citizens city deals with the potential of a smart city that has an active political and civil community.
  • Foth, M. (2017). The next urban paradigm: Cohabitation in the smart city. it - Information Technology, 59(6), 259-262. https://doi.org/10.1515/itit-2017-0034
    • This introductory article provides an overview of the special issue of IT-Information Technology on Urban Informatics and Smart Cities.
  • Gunkel, D. J. (2018). Hacking cyberspace. Routledge.
    • Gunkel argues that the metaphors used to describe new technologies actually inform how those technologies are created. He develops a view that considers how designers employ discourse in their technological development.
  • Hebron, P. (2017, April 26). Rethinking design tools in the age of machine learning. Medium. https://medium.com/artists-and-machine-intelligence/rethinking-design-tools-in-the-age-of-machine-learning-369f3f07ab6c *
    • Hebron examines the widespread availability of technological creative tools that allow an individual to create on a computer or mobile phone. He argues that machine learning tools should aim to make creative processes easier for human actors but not do any creative work themselves, in order to preserve human originality.
  • Johnson, P. A., Robinson, P. J., & Philpot, S. (2020). Type, tweet, tap, and pass: How smart city technology is creating a transactional citizen. Government Information Quarterly, 37(1), 101414. https://doi.org/10.1016/j.giq.2019.101414
    • This article asks whether the use of technology acts as a medium for a transactional relationship between governments and citizens. The authors highlight four models – type, tweet, tap, and pass – using relevant literature and examples to flesh out the concept. They propose that governments consider the impact of a transactional relationship before they implement smart city technology.
  • Luce, L. (2019). Artificial intelligence for fashion: How AI is revolutionizing the fashion industry. Apress.*
    • This reference work provides a basic outline of how AI is employed in the fashion industry, highlighting key terms and concepts. It provides a guide for designers, managers, and executives on how AI is impacting the field of fashion. 
  • Mattern, S. (2017, February). A city is not a computer. Places Journal. https://placesjournal.org/article/a-city-is-not-a-computer/ *
    • In this article, Mattern critiques the totalizing idea of cities as computers employed by technology companies, arguing that this practice ignores the information provided by urban designers and scholars who have investigated how cities work for decades.
  • Mattern, S. (2018, April). Databodies in codespace. Places Journal. https://placesjournal.org/article/databodies-in-codespace/ *
    • Mattern discusses the attempts of technology companies, through efforts such as the Human Project, to quantify the human condition. She criticizes this goal in light of the methodological and ethical risks of allowing private companies access to the amount of personal data required by these projects.
  • Negroponte, N. (1973). The architecture machine: Toward a more human environment. MIT Press.*
    • This book provides a forward-looking and optimistic account of what will occur when genuine human-machine dialogue is achieved and humans are able to work together with AI towards mutual goals. Negroponte uses systems-theory philosophy to examine issues that can arise in these relationships.
  • O’Donnell, K. M. (2018, March 2). Embracing artificial intelligence in architecture. AIA. https://www.aia.org/articles/178511-embracing-artificial-intelligence-in-archit.*
    • O’Donnell argues that architects should learn about data and its application in order to work towards the incorporation of AI in their field, as development in this area will strengthen the profession.
  • Retsin, G. (2019). Discrete: Reappraising the digital in architecture. John Wiley & Sons.
    • This book discusses the impact of two decades of digital experimentation in architecture, arguing that the digital focus on style and differentiation seems out of touch to a new generation of architects amid a global housing crisis. The book tracks a new body of work that uses digital tools to create discrete parts that can be used toward open-ended and adaptable architecture.
  • Ridell, S. (2019). Mediated bodily routines as infrastructure in the algorhythmic city. Media Theory, 3(2), 27-62.
    • Ridell argues that there is a lack of development in the study of how bodies are mediated in the context of digital urban life. The article examines mediated bodily habits and routines, arguing that they are important to the infrastructure of a smart city.
  • Sand, K. (2019). The transformation of fashion practice through Instagram. In International Conference on Fashion Communication: Between Tradition and Future Digital Developments (pp. 79-85). Springer.
    • This chapter uses a case study to investigate how social media platforms such as Instagram impact fashion practice, arguing that digital literacy skills are vital to success in the fashion industry.
  • Steenson, M. W. (2017). Architectural intelligence: How designers and architects created the digital landscape. MIT Press.*
    • This book provides a historical overview of the overlap between the fields of architectural design and computer science.
  • Thomassey, S., & Zeng, X. (Eds.). (2018). Artificial intelligence for fashion industry in the big data era. Springer.
    • This book gives an overview of current issues in the fashion industry, such as the suitability of existing AI implementation. Each chapter gives an example of a data-driven AI application to all sectors of the fashion industry, including design, manufacturing, supply chains, and retail.
  • Vetrov, Y. (2017, January 3). Algorithm-driven design: How artificial intelligence is changing design. Smashing Magazine. https://www.smashingmagazine.com/2017/01/algorithm-driven-design-how-artificial-intelligence-changing-design/*
    • Vetrov argues that designers should utilize artificial intelligence in order to maximize their capabilities and prioritize tasks with ease. To this end, Vetrov recommends that designers support more digital platforms.
  • Yigitcanlar, T., Kamruzzaman, M., Foth, M., Sabatini-Marques, J., da Costa, E., & Ioppolo, G. (2019). Can cities become smart without being sustainable? A systematic review of the literature. Sustainable Cities and Society, 45, 348-365. https://doi.org/10.1016/j.scs.2018.11.033
    • This article investigates the question of whether smart city policy and sustainability outcomes are entwined, by reviewing literature that asserts a limitation on the ability of smart cities to achieve sustainability. The authors argue that cities cannot be smart unless they are designed to be sustainable.

Chapter 31. AI and the Global South: Designing for Other Worlds (Chinmayi Arun)⬆︎

  • Ajunwa, I. (2020). The paradox of automation as anti-bias intervention. Cardozo Law Review, 41 (Forthcoming).*
    • This article’s central claim is that bias is introduced in the hiring process in large part due to an American legal tradition of deference to employers, especially the allowance of such nebulous hiring criteria as “cultural fit.” The article observes the lack of legal frameworks that take into account the emerging technological capabilities of hiring tools, which make it difficult to detect disparate impact, and argues for a rethinking of legal frameworks that takes into account both the liability of employers and that of the makers of algorithmic hiring systems who, as brokers, owe a fiduciary duty of care. The article also considers approaches outside employment law, such as establishing consumer legal protections for job applicants that would mandate access to the dossier of information consulted by automated hiring systems in making employment decisions.
  • Couldry, N., & Mejias, U. A. (2019). Data colonialism: Rethinking big data’s relation to the contemporary subject. Television & New Media, 20(4), 336-349.*
    • This article proposes that the data relations process is best understood through the history of colonialism. The article proposes that data relations enact a new form of data colonialism, normalizing the exploitation of human beings through data, just as historic colonialism appropriated territory and resources and ruled subjects for profit. The article further argues that data colonialism paves the way for a new stage of capitalism whose outlines can only be glimpsed: the capitalization of life without limit.
  • Couldry, N., & Mejias, U. (2019). Making data colonialism liveable: How might data’s social order be regulated? Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1411
    • This paper argues that while the modes, intensities, scales and contexts of dispossession have changed, the underlying drive of today’s data colonialism remains the same: to acquire “territory” and resources from which economic value can be extracted by capital. The paper further asserts that injustices embedded in this system need to be made “liveable” through a new legal and regulatory order.
  • Dirlik, A. (2007). Global South: Predicament and promise. The Global South, 1(1), 12-23.*
    • This essay explores possibilities for the establishment of a new global order in which the Global South may play a central part. It traces the emergence of the concept of the Global South historically, with special attention to its antecedents in the popular term of the 1960s and 1970s, “Third World.” The essay suggests that while the “Third World” is no longer a viable concept geopolitically or as a political project, it may still provide inspiration for similar projects that could render the Global South into a force in the reconfiguration of global relations.
  • Georgiou, M. (2019). City of refuge or digital order? Refugee recognition and the digital governmentality of migration in the city. Television & New Media, 20(6), 600-616.
    • This article analyses the digital governmentality of the city of refuge, arguing that digital infrastructures support refugees’ new life in the European city while also normalizing the conditionality of their recognition as humans and as citizens-in-the-making. The article argues that a digital order requires a ‘performed refugeeness’ as precondition for recognition, meaning a swift move from abject vulnerability to resilient individualism.
  • Hagerty, A., & Rubinov, I. (2019). Global AI ethics: A review of the social impacts and ethical implications of artificial intelligence. arXiv preprint arXiv:1907.07892.
    • This article calls for rigorous ethnographic research to better understand the social impacts of AI around the world. Global, on-the-ground research is particularly critical to identify AI systems that may amplify social inequality in order to mitigate potential harms. The article argues that deeper understanding of the social impacts of AI in diverse social settings is a necessary precursor to the development, implementation, and monitoring of responsible and beneficial AI technologies, and forms the basis for meaningful regulation of these technologies.
  • Hicks, J. (2020). Digital ID capitalism: How emerging economies are re-inventing digital capitalism. Contemporary Politics. https://doi.org/10.1080/13569775.2020.1751377
    • This article adds to the literature on digital capitalisms by introducing a new state-led model called ‘digital ID capitalism’. Describing how the system works in India, the article explains how businesses make money from the personal data collected and draws some of its elements into traditional political economy concerns with the relationships between state, business and labor. 
  • Kwet, M. (2019). Digital colonialism: US empire and the new imperialism in the Global South. Race & Class, 60(4), 3-26.
    • This article proposes a conceptual framework of how the United States is reinventing colonialism in the Global South through the domination of digital technology. Using South Africa as a case study, it argues that US multinationals exercise imperial control at the architecture level of the digital ecosystem: software, hardware and network connectivity, which then gives rise to related forms of domination. 
  • Madianou, M. (2019). Technocolonialism: Digital innovation and data practices in the humanitarian response to refugee crises. Social Media + Society, 5(3), 1-13.
    • This article introduces the concept of technocolonialism to capture how the convergence of digital developments with humanitarian structures and market forces reinvigorates and reshapes colonial relationships of dependency. The article argues that the concept of technocolonialism shifts the attention to the constitutive role that data and digital innovation play in entrenching power asymmetries between refugees and aid agencies and ultimately inequalities in the global context. 
  • Madianou, M. (2019). The biometric assemblage: Surveillance, experimentation, profit, and the measuring of refugee bodies. Television & New Media, 20(6), 581-599.
    • This article analyzes biometrics, artificial intelligence (AI), and blockchain as part of a technological assemblage, which the author terms ‘the biometric assemblage.’ The article argues that the biometric assemblage accentuates asymmetries between refugees and humanitarian agencies and ultimately entrenches inequalities in a global context.
  • Mahler, A. G. (2017) Beyond the colour curtain. In K. Bystrom & J. R. Slaughter (Eds.), The Global South Atlantic (pp. 99-123). Fordham University Press.*
    • This essay traces the roots of the contemporary notion of the Global South to the ideology of an influential but largely forgotten Cold War alliance of liberation movements from Africa, Asia, and Latin America called the Tricontinental. The essay argues that tricontinentalism – the ideology disseminated among the international radical Left through the Tricontinental’s expansive cultural production – revised a specifically black Atlantic resistant subjectivity into a global vision of subaltern resistance that is resurfacing in contemporary horizontalist approaches to cultural criticism such as the Global South. In this way, the essay proposes the Global South Atlantic as a particularly useful paradigm, one that not only recognizes the black Atlantic foundations of the Global South but also holds contemporary solidarity politics accountable to these intellectual roots.
  • Milan, S., & Treré, E. (2019). Big data from the South(s): Beyond data universalism. Television & New Media, 20(4), 319-335.*
    • This article introduces the tenets of a theory of datafication, calling for a de-Westernization of critical data studies, in view of promoting a reparation to the cognitive injustice that fails to recognize non-mainstream ways of knowing the world through data. It situates the “Big Data from the South” research agenda as an epistemological, ontological, and ethical program and outlines five conceptual operations to shape this agenda.
  • Ricaurte, P. (2019). Data epistemologies, the coloniality of power, and resistance. Television & New Media, 20(4), 350-365.*
    • This article develops a theoretical model to analyze the coloniality of power through data and explores the multiple dimensions of coloniality as a framework for identifying ways of resisting data colonization. This article further suggests possible alternative data epistemologies that are respectful of populations, cultural diversity, and environments.
  • Santos, B. D. S. (2016). Epistemologies of the South and the future. From the European South: A Transdisciplinary Journal of Postcolonial Humanities, 1, 17-29. http://europeansouth.postcolonialitalia.it/journal/2016-1/3.2016-1.Santos.pdf*
    • This article puts forward epistemologies of the South as resting on the idea that current theoretical thinking in the global North has been based on an abyssal line dividing the world. It proposes ‘epistemologies of the South’ as the crucial epistemological transformation required to reinvent social emancipation on a global scale, evoking plural forms of emancipation not simply based on a Western understanding of the world.
  • Richardson, R., Schultz, J., & Crawford, K. (2019). Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice. New York University Law Review Online, 94, 192–233.*
    • In this research paper, the authors analyze thirteen jurisdictions that have used or developed predictive policing tools while under government commission investigations or federal court monitored settlements, consent decrees, or memoranda of agreement stemming from corrupt, racially biased, or otherwise illegal policing practices. In particular, the authors examine the link between unlawful and biased police practices and the data available to train or implement these systems. They argue that deploying predictive policing systems in jurisdictions with extensive histories of unlawful police practices presents elevated risks that dirty data will lead to flawed or unlawful predictions, which in turn risk perpetuating additional harm via feedback loops throughout the criminal justice system.
  • Segura, M. S., & Waisbord, S. (2019). Between data capitalism and data citizenship. Television & New Media, 20(4), 412-419.
    • This article argues that datafication, and opposition to it, do not develop in the South exactly as in the North, given huge political, economic, social, and technological differences in the context of the expansion of digital capitalism. The article analyzes dimensions of data activism in Latin America, discusses the Global South as a site of counter-epistemic and alternative practices, and questions whether the concept of “data colonialism” adequately captures the dynamics of digital society in areas of well-entrenched digital divides.
  • Shokooh Valle, F. (2020). Turning fear into pleasure: Feminist resistance against online violence in the global south. Feminist Media Studies. https://doi.org/10.1080/14680777.2020.1749692
    • This essay argues that feminist strategies of contestation to online violence in the Global South embody decolonial thought by re-appropriating and fostering the right of marginalized communities to express sexual pleasure online. The essay asserts that activists problematize online violence through two main strategies: first, by anchoring themselves in a southern epistemology that makes explicit the connections between gender-based online violence and broader sociotechnical, historical, and political contexts, and, second, by using activism against online violence, including threats of violence, to advocate for novel forms of online sexual agency and pleasure. Finally, the essay describes how feminist activists reimagine a technological future that is truly emancipatory.
  • Sun, Y., & Yan, W. (2020). The power of data from the Global South: Environmental civic tech and data activism in China. International Journal of Communication, 14(19), 2144-2162.
    • This article explores how an established environmental nongovernmental organization, the Institute of Public and Environmental Affairs (IPE), engaged in data activism around a civic tech platform in China, expanding the space for public participation. By conducting participatory observation and interviews, along with document analysis, the authors describe three modes of data activism that represent different mechanisms of civic oversight in the environmental sphere.
  • Taylor, L., & Broeders, D. (2015). In the name of Development: Power, profit and the datafication of the global South. Geoforum, 64, 229-237. http://dx.doi.org/10.1016/j.geoforum.2015.07.002*
    • This article identifies two trends in the datafication process underway in low- and middle-income countries (LMICs): first, the empowerment of public–private partnerships around datafication in LMICs and the consequently growing agency of corporations as development actors. Second, the way commercially generated big data is becoming the foundation for country-level ‘data doubles’, i.e. digital representations of social phenomena and/or territories that are created in parallel with, and sometimes in lieu of, national data and statistics. The article explores the resulting shift from legibility to visibility, and the implications of seeing development interventions as a byproduct of larger-scale processes of informational capitalism.
  • West, S. M., Whittaker, M. & Crawford, K. (2019). Discriminating systems: Gender, race, and power in AI. AI Now Institute. https://ainowinstitute.org/discriminatingsystems.html.*
    • This report argues that there is a diversity crisis in the artificial intelligence (AI) industry, and that a profound shift is needed to address this crisis. It puts forward eight recommendations for improving workplace diversity and four recommendations for addressing bias and discrimination in AI systems.
  • Zhang, W., & Neyazi, T. A. (2020). Communication and technology theories from the South: The cases of China and India. Annals of the International Communication Association, 44(1), 34-49.
    • Using China and India as two cases, this paper reviews and compares descriptions of communication technology in the two countries. Through these comparisons, the paper concludes that communication technology studies on China and India provide three theoretical insights: first, the state-society relationship shapes communication technology; second, the increasing pluralization or hybridity of cyberspace shapes how communication technology is used; and lastly, the quest for finding one’s self (or selves) in a Chinese/Indian modernity could provide a reference for other contexts.

Chapter 32. Perspectives and Approaches in AI Ethics: East Asia (Danit Gal)⬆︎

  • BAAI. (2019, May 28). Beijing AI Principles. https://baip.baai.ac.cn/en*
    • This document presents a set of principles proposed as guidelines and initiatives for the research, development, use, governance, and long-term planning of AI in Beijing, China.
  • Carrillo, M. R. (2020). Artificial intelligence: From ethics to law. Telecommunications Policy. https://doi.org/10.1016/j.telpol.2020.101937
    • This paper discusses the main normative and ethical challenges imposed by the advancement of artificial intelligence, in particular the effects on law and ethics created by increasing connectivity and symbiotic interaction among humans and intelligent machines.
  • The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2019). Classical Ethics in A/IS. In Ethically Aligned Design (1st ed., pp. 36-67). https://standards.ieee.org/industry-connections/ec/autonomous-systems.html*
    • This document released by the Institute of Electrical and Electronics Engineers (IEEE) is a crowdsourced global treatise for ethical development in Artificial and Intelligent Systems. The chapter Classical Ethics in A/IS draws from classical ethical principles to outline guidelines and limitations on AI systems.
  • Ema, A. (2018). EADv2 Regional Reports on A/IS Ethics: Japan. The Ethics Committee of the Japanese Society for Artificial Intelligence. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/eadv2_regional_report.pdf *
    • This document, compiled by the Institute of Electrical and Electronics Engineers (IEEE), consists of reports describing regional attitudes and actions in the field of artificial intelligence.
  • Frumer, Y. (2018). Cognition and emotions in Japanese humanoid robotics. History and Technology, 34(2), 157-183.
    • This paper analyses the creation of artificial humanoid robots, the phenomenon of the ‘uncanny valley,’ and current research to overcome the ‘uncanny’ nature of humanoid robots. It argues that the development of humanoid robotics in Japan was driven by concern with human emotion and cognition, and shaped by Japanese roboticists’ own associations with the social and intellectual environments of their time.
  • Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.
    • This paper explores the debate concerning what constitutes “ethical AI” and which ethical requirements, technical standards and best practices are needed for its realization. The authors present their findings that there is a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy). However, there is substantive divergence in relation to how these principles are interpreted; why they are deemed important; what issue, domain or actors they pertain to; and how they should be implemented.
  • Kovacic, M. (2018). The making of national robot history in Japan: Monozukuri, enculturation and cultural lineage of robots. Critical Asian Studies, 50(4), 572-590.
    • This article discusses Japanese corporate and governmental strategies and mechanisms that are shaping a national robot culture through establishing robot “lineages” and a national robot history which can have significant implications for both humans and robots.
  • Otsuki, G. J. (2019). Frame, game, and circuit: Truth and the human in Japanese human-machine interface research. Ethnos. https://doi.org/10.1080/00141844.2019.1686047
    • This essay tracks the ‘human’ emergent in human-centred technologies (HCTs) in Japan, arguing that all HCTs are systems of information and that the right machine can approach humanity closely enough to fulfil even the most human of responsibilities.
  • Park, Y. R., & Shin, S. Y. (2017). Status and direction of healthcare data in Korea for artificial intelligence. Hanyang Medical Reviews, 37(2), 86-92.
    • This paper argues that in the context of medical AI, the general approach that accumulates massive amounts of data based on existing big data concepts cannot provide meaningful results in the healthcare field. Thus, the authors argue that well-curated data is required in order to provide a successful combination of AI and medical care.
  • Peters, D., Vold, K., Robinson, D., & Calvo, R. A. (2020). Responsible AI—two frameworks for ethical design practice. IEEE Transactions on Technology and Society, 1(1), 34-47.
    • This paper presents two complementary frameworks for integrating ethical analysis into engineering practice to address the challenge posed by unintended consequences of artificial intelligence (AI), compounded by the lack of an anticipatory process for attending to ethical impact within professional practice.
  • China Institute for Science and Technology Policy at Tsinghua University. (2018). China AI Development Report 2018. http://www.sppm.tsinghua.edu.cn/eWebEditor/UploadFile/China_AI_development_report_2018.pdf *
    • This document, published by the China Institute for Science and Technology Policy (CISTP) at Tsinghua University in Beijing, China, aims to provide a comprehensive picture of AI development in China and in the world at large, with a view to increasing public awareness, promoting the development of the AI industry, and informing policymaking.
  • Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., & Floridi, L. The Chinese approach to artificial intelligence: An analysis of policy and regulation. SSRN. http://dx.doi.org/10.2139/ssrn.3469784
    • Through a compilation of debates and analyses of Chinese policy documents, this paper investigates the socio-political background and policy debates that are shaping China’s AI strategy. There is a focus on the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use.
  • Robertson, J. (2018). Robo sapiens japanicus: Robots, gender, family, and the Japanese nation. University of California Press.*
    • Through an analysis of press releases and public relations videos, this book offers an academic account of human-robot relations in Japan, ultimately arguing that robots in Japan (humanoids, androids, and animaloids) are “imagineered” in ways that reinforce the conventional sex/gender system and the political-economic status quo.
  • Sethu, S. G. (2019). The inevitability of an international regulatory framework for artificial intelligence. In 2019 International Conference on Automation, Computational and Technology Management (ICACTM) (pp. 367-372). IEEE. https://doi.org/10.1109/ICACTM.2019.8776819
    • This paper highlights issues surrounding the manufacture and functioning of autonomous weapons, specifically Lethal Autonomous Weapons Systems (LAWS), to establish the need for an international regulatory framework for artificial intelligence.
  • Sparrow, R. (2019). Robotics has a race problem. Science, Technology, & Human Values, 45(3), 538-560.
    • This article presents research showing that people are inclined to attribute race to humanoid robots, resulting in an ethical problem that designers of social robots must confront. The author argues that the only way engineers might avoid this dilemma is to design and manufacture robots to which people will struggle to attribute race; however, this would require rethinking the relationship between robots and “the social,” which sits at the heart of the project of social robotics.
  • Intelligent Robots Development and Distribution Promotion Act. (Act No. 9014, Mar. 28, 2008, Amended by Act No. 9161, Dec. 19, 2008). Statutes of the Republic of Korea. http://elaw.klri.re.kr/eng_mobile/viewer.do?hseq=17399&type=sogan&key=13*
    • This statute codifies the South Korean outlook on artificial intelligence and sets guidelines for future development in the field of AI.
  • Weng, Y. H., Hirata, Y., Sakura, O., & Sugahara, Y. (2019). The religious impacts of Taoism on ethically aligned design in HRI. International Journal of Social Robotics, 11(5), 829-839.
    • This paper explores the growing importance of assessing robot application and deployment in countries with different cultural backgrounds, focusing on the intersection of religion and automation. It analyzes the impact that Taoism may have on the use of Ethically Aligned Design in future human–robot interaction.
  • Yoo, J. (2015). Results and outlooks of robot education in Republic of Korea. Procedia-Social and Behavioral Sciences, 176, 251-254. https://doi.org/10.1016/j.sbspro.2015.01.468*
    • This paper explores the consequences of introducing robotics into the South Korean education system from elementary through high school, in contrast to its later, post-secondary introduction in the United States and Japan. The author evaluates the results of this policy in the context of South Korea’s future prospects, arguing that this early introduction gives South Korea a head start in the robotics industry.
  • Zeng, Y., Lu, E., & Huangfu, C. (2018). Linking artificial intelligence principles. arXiv preprint arXiv:1812.04814.*
    • This paper argues that although artificial intelligence principles define social and ethical considerations for developing future AI, multiple versions of these principles exist, covering different perspectives and placing different emphases. The authors therefore propose Linking Artificial Intelligence Principles (LAIP), an effort and platform for linking and analyzing different sets of AI principles.
  • Zhang, B. T. (2016). Humans and machines in the evolution of AI in Korea. AI Magazine, 37(2), 108-112.
    • This article recounts the evolution of AI research in Korea, and describes recent activities in AI, along with governmental funding circumstances and industrial interest.

Chapter 33. Artificial Intelligence and Inequality in the Middle East: The Political Economy of Inclusion (Nagla Rizk)⬆︎

  • Access Partnership. (2018). Artificial intelligence for Africa: An opportunity for growth, development, and democratisation. https://www.accesspartnership.com/artificial-intelligence-for-africa-an-opportunity-for-growth-development-and-democratisation/.*
    • This report argues that the development of artificial intelligence technologies can solve problems that impact Sub-Saharan African countries, providing growth and development in areas such as agriculture, healthcare, and public service.
  • Agarwal, R., & Goswami, P. K. (2019). The role of AI to change the dynamics of entrepreneurial ethics: An estimation of a Third World approach for cyber governance. https://ssrn.com/abstract=3509512
    • This article critically analyzes entrepreneurial ethics, arguing that such ethics can drive developmental solutions through the implementation of artificial intelligence programs in specific third-world countries.
  • AI Now Institute, New York University. (2018). AI Now Report 2018. https://ainowinstitute.org/AI_Now_2018_Report.pdf*
    • The 2018 AI Now Institute report focuses on five key issues. First, the accountability gap in AI, which favours AI producers rather than the people these technologies are used against. Second, how AI is used to increase surveillance, such as the increased use of facial recognition. Third, government use of emerging technology without pre-existing accountability frameworks. Fourth, the lack of regulation of AI experimentation on human subjects. Fifth, the failure of current solutions in addressing fairness, bias, and discrimination.
  • Almeida, P., Santos, C., & Farias, J. S. (2020). Artificial Intelligence regulation: A meta-framework for formulation and governance. In Proceedings of the 53rd Hawaii International Conference on System Sciences. https://doi.org/10.24251/HICSS.2020.647
    • This article provides a meta-framework for the development of AI regulation that incorporates international public policy stages, including formulation and sustainable governance.
  • Arezki, R., Mottaghi, L., Barone, A., Fan, R. Y., Kiendrebeogo, Y., & Lederman, D. (2018). Middle East and North Africa Economic Monitor, Spring 2018: Economic Transformation. The World Bank. https://openknowledge.worldbank.org/bitstream/handle/10986/30436/9781464813672.pdf?sequence=11&isAllowed=y.*
    • This report examines economic transformation in the Middle East and North Africa region, discussing how a digital economy that would create jobs for millions of unemployed young people could be fostered in the coming years. To do this, the MENA region must move away from its focus on manufacturing exports and instead take advantage of the region’s educated youth population, encouraging innovation and entrepreneurship.
  • Brynjolfsson, E., & McAfee, A. (2011). Race against the machine: How the digital revolution is accelerating innovation, driving productivity, and irreversibly transforming employment and the economy. Digital Frontier Press.*
    • In their book, Brynjolfsson and McAfee argue that the average human worker cannot keep up with cutting-edge technologies such as AI, which have the potential to take over their jobs. The implication is that poor employment prospects stem not from a lack of technological advancement but from workers being outpaced by technology.
  • Brynjolfsson, E., Rock, D., & Syverson, C. (2017). Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics. National Bureau of Economic Research. www.nber.org/chapters/c14007.pdf*
    • This article argues that although there have been many advancements in AI technology in recent years, these have not been matched by an increase in productivity. The authors explore four potential explanations for this apparent paradox: false hopes, statistical mismeasurement, redistribution, and lags in implementation.
  • Butcher, J., & Beridze, I. (2019). What is the state of artificial intelligence governance globally? The RUSI Journal, 164(5), 88-96. https://doi.org/10.1080/03071847.2019.1694260
    • Butcher and Beridze summarize current AI governance in both public and private sectors, in research organizations, and at the United Nations. They offer frameworks that can provide guidance to policy makers.
  • Chui, M., Manyika, J., & Miremadi, M. (2017). The countries most (and least) likely to be affected by automation. Harvard Business Review. https://hbr.org/2017/04/the-countries-most-and-least-likely-to-be-affected-by-automation*
    • This article summarizes the research of the authors, which examined the automation potential in 46 countries, accounting for 80% of the global workforce.
  • Cihon, P. (2019). Standards for AI governance: International standards to enable global coordination in AI research & development. Future of Humanity Institute.
    • This report argues that the emergence of AI presents novel problems for policy design and that a coordinated global response is necessary. Current AI standards efforts focus heavily on market efficiency and addressing global concerns, but Cihon worries that they neglect further policy objectives such as creating a culture of responsibility.
  • Cisse, M. (2018). Look to Africa to advance artificial intelligence. Nature, 562(7728), 461-462.
    • Cisse argues that AI technology must be developed in a broader range of locations than just Asia, North America and Europe, in order to promote diversity and combat unintended biases. Particularly, development in Africa should be prioritized, as this would not only solve the problem of lack of diversity, but also would provide Africans with access to technology that could improve the lives of citizens.
  • Daly, A., Hagendorff, T., Li, H., Mann, M., Marda, V., Wagner, B., Wang, W., & Witteborn, S. (2019). Artificial intelligence, governance and ethics: Global perspectives. The Chinese University of Hong Kong Faculty of Law Research Paper, (2019-15). https://dx.doi.org/10.2139/ssrn.3414805
    • This report provides an overview on how actors such as governments and private corporations have approached AI regulation and ethics, including regions such as China, Europe, India, and the United States, and companies such as Microsoft.
  • Gordon, M. (2018). Forecasting instability: The case of the Arab spring and the limitations of socioeconomic data. Wilson Center. https://www.wilsoncenter.org/article/forecasting-instability-the-case-the-arab-spring-and-the-limitations-socioeconomic-data*
    • Gordon analyzes data from the Arab Spring, arguing that these uprisings could be predicted, but not down to the exact date and time of their occurrence. He argues that similar limitations apply to predicting political and social instability.
  • Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. https://doi.org/10.1038/s42256-019-0088-2
    • This study investigates whether or not there is a global consensus on any ethical principles pertaining to AI. The results reveal global convergence around five principles: transparency, justice and fairness, non-maleficence, responsibility, and privacy.
  • Rizk, N. Y. H., & Salem, N. (2018). Open data management plan Middle East and North Africa: A guide. MENA Data Platform.
    • This guide contains three documents developed out of the American University in Cairo. First, a background paper explores open data relating to research and development. Second is a data management plan template, made up of a set of questions that, when answered, will provide an Open Data Management plan. Third is the Solar Data Platform Open Data Management Plan, which mapped solar energy in Egypt, and acts as an example of the implementation of the template.
  • Vernon, D. (2019). Robotics and artificial intelligence in Africa [Regional]. IEEE Robotics & Automation Magazine, 26(4), 131-135. https://doi.org/10.1109/MRA.2019.2946107
    • This article explores how African countries can take advantage of opportunities presented by the rise of artificial intelligence and robots, considering potential solutions to problems that are likely to emerge.
  • Wallach, W., & Marchant, G. E. (2018). An agile ethical/legal model for the international and national governance of AI and robotics. In Control and Responsible Innovation in the Development of AI and Robotics. The Hastings Center.
    • This article examines the pacing gap between emerging technologies such as AI and government regulation, and argues that the creation of governance coordinating committees would provide a solution.
  • World Economic Forum. (2019). Dialogue series on new economic and social frontiers, shaping the new economy in the fourth industrial revolution. http://www3.weforum.org/docs/WEF_Dialogue_Series_on_New_Economic_and_Social_Frontiers.pdf*
    • This paper examines four emerging challenges at the intersection of economics, technology, and society in the age of the Fourth Industrial Revolution. The paper addresses multiple areas of concern, such as rethinking economic value and avenues for creating this value, addressing market concentration, enhancing job creation, and revising social protection.
  • World Economic Forum. (2017). The future of jobs and skills in the Middle East and North Africa: Preparing the region for the fourth industrial revolution. https://www.weforum.org/reports/the-future-of-jobs-and-skills-in-the-middle-east-and-north-africa-preparing-the-region-for-the-fourth-industrial-revolution.*
    • This report asserts that it is vital that the MENA region invest in education to prepare its young population for the contemporary labour market. It presents a call to action to MENA region leaders to ensure that youth are able to fully participate in the global economy.
  • Yamakami, T. (2019). From ivory tower to democratization and industrialization: A landscape view of real-world adaptation of artificial intelligence. In International Conference on Network-Based Information Systems (pp. 200-211). https://doi.org/10.1007/978-3-030-29029-0_19
    • Yamakami examines the concept of the democratization and industrialization of deep learning as a new landscape view for artificial intelligence. He goes on to describe a three-stage model of interaction between a social community and technology.

Chapter 34. Europe’s Struggle to Set Global AI Standards (Andrea Renda)⬆︎

  • Annoni, A., Benczur, P., Bertoldi, P., Delipetrev, P., De Prato, G., Feijoo, C., Fernandez Macias, E., Gomez, E., Iglesias, M., Junklewitz, H., López Cobo, M., Martens, B., Nascimento, S., Nativi, S., Polvora, A., Sanchez, I., Tolan, S., Tuomi, I., & Alujevic, L. V. (2018). Artificial intelligence: A European perspective. Joint Research Centre, European Commission. https://doi.org/10.2760/11251*
    • This extensive report investigates the multitude of practical, technical, legal and ethical issues that the EU must consider when developing laws, policies and regulations regarding AI, data protection and cybersecurity. The researchers propose that the EU must take a unified approach to encourage developments in AI that are socially driven, responsible, ethical and match the core values of civil society.
  • Antonov, A., & Kerikmäe, T. (2020). Trustworthy AI as a future driver for competitiveness and social change in the EU. In D. R. Troitiño, T. Kerikmäe, R. de la Guardia, & G. P. Sánchez (Eds.), The EU in the 21st century (pp. 135-154). Springer. https://doi.org/10.1007/978-3-030-38399-2_9
    • This article examines the ethical and legal effects of AI technologies that have been promoted and encouraged by the EU in recent years. The authors consider key initiatives in AI governance and seek to identify the main challenges that the EU will face in their goal to become a global leader in the development of trustworthy AI technology.
  • Calzada, I. (2019). Technological sovereignty: Protecting citizens’ digital rights in the AI-driven and post-GDPR algorithmic and city-regional European realm. Regions eZine. https://ssrn.com/abstract=3415889
    • This article explains how the state of AI and data protection regulation in the EU affect citizenship. The author takes a comparative approach and argues that in the EU, citizens are considered to be decision-makers rather than data providers (as is the case in the US and China). He argues that Europe is most likely to adopt a form of ‘technological humanism’ by offering strategic visions of regional AI networks in which governments maintain technological sovereignty to protect their citizens’ digital rights.
  • Carriço, G. (2018). The EU and artificial intelligence: A human-centred perspective. European View, 17(1), 29-36. https://doi.org/10.1177/1781685818764821
    • This article considers the costs and benefits of AI implementation in the EU context, and argues in support of developing the EU into a global leader of AI innovation. The author argues for a human-centric focus on AI development and emphasizes the use of AI to solve the world’s most challenging societal problems while minimizing risk. The author provides policy recommendations for EU adoption to realize this goal.
  • Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial intelligence and the ‘good society’: the US, EU, and UK approach. Science and Engineering Ethics, 24(2), 505-528. https://doi.org/10.1007/s11948-017-9901-7*
    • This paper provides a comparative analysis of policy plans proposed by US, UK and EU governments concerning the integration of AI in society. The authors argue in favor of ‘the good AI society’, and they suggest that although short-term ethical solutions are important, state actors in the US, EU and UK must consider long-term visions and strategies that best promote human flourishing and dignity in the AI context.
  • European Commission. (2018). Coordinated plan on artificial intelligence. https://eur-lex.europa.eu/legal-content/en/ALL/?uri=CELEX:52018DC0795*
    • This communication from the European Commission proposes a plan aimed at coordinating the integration, facilitation and development of AI across the EU. The report suggests that in order to become a world leader in the AI industry, the EU must increase investments in AI, prepare for socio-economic change and develop an ethical and legal framework that ensures AI development is human-centric.
  • European Commission & High Level Expert Group on AI. (2019). Ethics guidelines for trustworthy AI. https://ec.europa.eu/futurium/en/ai-alliance-consultation*            
    • This report proposes seven ethical principles of trustworthy AI which aim to promote an accountable, human-centric AI for the EU and global contexts. It defines trustworthy AI as that which operates within the law, adheres to ethical principles and is robust such that no unintentional harms are inflicted on society. The report proposes that policymakers must work to ensure that each of these components are simultaneously met.
  • European Commission & High Level Expert Group on AI. (2019). Policy and investment recommendations for trustworthy AI. https://ec.europa.eu/digital-single-market/en/news/policy-and-investment-recommendations-trustworthy-artificial-intelligence*
    • This report follows and supports the European Commission’s guidelines for trustworthy AI and provides thirty-three recommendations to maximize the sustainability, growth and competitiveness of trustworthy AI in the EU. The report stresses the role of EU institutions and member states as critical to the implementation of sound AI governance that promotes benefits and minimizes harms to the public. Suggestions are forwarded with regards to data protection, skills and education, regulation and funding of AI technologies.
  • European Commission. (2018). Working document on liability for emerging digital technologies. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52018SC0137&from=en*
    • This document considers how opportunities and investments in AI can be stimulated by adapting and implementing clear legal frameworks that benefit AI innovators and consumers. The report focuses on the liability challenges in AI and digital technology contexts. The commission calls for an examination of existing safety and liability rules at the EU and national levels to determine whether they maintain the appropriate legal certainty required for AI innovation to succeed.
  • European Group on Ethics in Science and New Technologies. (2018). Statement on artificial intelligence, robotics and ‘autonomous’ systems. https://doi.org/10.2777/531856*
    • This statement by the European Group on Ethics considers the legal, ethical, moral and societal questions posed by autonomous technologies, and calls for a more collective and inclusive approach among EU member-states. The report proposes a set of ethical imperatives for autonomous systems that is based on the EU treaties and charters of fundamental rights.
  • Floridi, L. (2019). Establishing the rules for building trustworthy AI. Nature Machine Intelligence, 1(6), 261-262. https://doi.org/10.1007/s11023-018-9482-5*
    • This article provides a defense of the ethical guidelines proposed by the European Commission’s report on trustworthy AI on the grounds that the guidelines establish a benchmark for which responsible design and international support of human-centric AI solutions can be evaluated.
  • Floridi, L. (2018). Soft ethics, the governance of the digital and the general data protection regulation. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1007/s13347-018-0303-9
    • This article considers the challenges of digital governance and provides a framework of ‘hard’ and ‘soft’ ethics as they relate to digital legislation in the EU. The author then provides an analysis of how this ethical framework works with the development of new, and the adaptation of old, regulation and legislation to assist in digital governance.
  • Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People white paper: Twenty recommendations for an ethical framework for a good AI society. Minds and Machines, 28, 689–707. https://doi.org/10.1007/s11023-018-9482-5*
    • This article reports the results of the ‘AI4People’ initiative that was designed to formulate an ideal of the ‘good society’ in an AI context. The report analyses the risks and opportunities of societal AI integration and proposes five ethical principles, four of which are drawn from the applied ethics field of bioethics. The report also offers twenty additional recommendations for policy makers which, if adopted, the authors believe would establish a ‘good AI society’.
  • Hacker, P. (2018). Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Review, 55, 1143–1186. https://ssrn.com/abstract=3164973
    • This article considers the discriminatory threat imposed by AI applications against protected groups in the EU legal context and argues that this raises complex questions for labor laws in the EU. As explained, existing anti-discrimination laws are not adapted to AI decision-making and issues of proof in the AI context. The article offers a vision of data protection and anti-discrimination law that enforces fairness in algorithmic decision-making.
  • Humerick, M. (2018). Taking AI personally: How the EU must learn to balance the interests of personal data privacy & artificial intelligence. Santa Clara High Technology Law Journal, 34(4), 393-418. https://digitalcommons.law.scu.edu/chtlj/vol34/iss4/3
    • This article considers the influx of AI technology use and its relation to consumer data privacy and protection. The article observes that the EU maintains the most comprehensive regulation for data protection in the world but argues that such strong regulation could discourage future development and innovation of AI in the EU. Unless these issues are addressed, the author questions how future AI developments will thrive in the EU without infringing the provisions of the GDPR.
  • Kullmann, M. (2018). Platform work, algorithmic decision-making, and EU gender equality law. International Journal of Comparative Labour Law and Industrial Relations, 34(1), 1-21. https://ssrn.com/abstract=3195728
    • This article considers the problems that confront workers in the digital economy and examines the role played by algorithms and their biases in employment and hiring processes. The author observes the existing gender disparity in hiring and salary decisions, and questions whether existing EU equality laws are sufficient for protection of workers when employment-related decisions are made by an algorithm.
  • McMillan, D., & Brown, B. (2019). Against ethical AI. In Proceedings of the Halfway to the Future Symposium 2019 (pp. 1-3).
    • This paper considers the EU guidelines on ethical and trustworthy AI and argues against the focus placed on them and on similar principles, guidelines and manifestos developed for AI. The authors consider how the AI industry and related academia are involved in ‘ethics washing’ and how the development of guidelines may not be as beneficial as previously perceived.
  • Mercer, S. T. (2020). The limitations of European data protection as a model for global privacy regulation. AJIL Unbound, 114, 20-25. https://doi.org/10.1017/aju.2019.83
    • This article pushes back against the prevailing narrative that EU-style data regulations are becoming a global standard. The author argues that as of 2020, it is too early to determine whether the EU is truly the winner in the race to influence global data protection and privacy law. The author points toward the US as a potential competitor and expects the US regime to differ in its regulatory approach.
  • Mitrou, L. (2018). Data protection, artificial intelligence and cognitive services: Is the general data protection regulation (GDPR) artificial intelligence-proof? SSRN. http://dx.doi.org/10.2139/ssrn.3386914
    • This paper provides a detailed overview of the EU’s General Data Protection Regulation provisions in the context of recent AI technologies. The author observes the changes that AI has made to the processing of personal information and questions whether the current regulations are ‘AI-proof’ and whether new protections and rules need to be implemented in the face of advanced AI technology.
  • Renda, A. (2019). Artificial intelligence: Ethics, governance and policy challenges. Centre for European Policy Studies Task Force.*
    • This article summarizes the results of the Centre for European Policy Studies (CEPS) report on AI in 2018. The report finds that the EU is uniquely positioned to lead the globe in its effort to develop and implement responsible and sustainable AI. The report calls upon member states to focus their agendas on leveraging this advantage to foster further development in the field. The article proposes forty-four recommendations to guide future policy and investment decisions related to the design of lawful, responsible and sustainable AI for the future.
  • Renda, A. (2018). Ethics, algorithms and self-driving cars–a CSI of the ‘trolley problem’. CEPS Policy Insight, (2). https://ssrn.com/abstract=3131522*
    • This article re-examines the trolley-problem dilemma and argues against the view that it serves little use as an analogue to the automated driving context. The author engages in an investigation of the problem to reveal a number of neglected policy issues that exist within the dilemma and evade public discussion. The article also argues that current legal frameworks are unable to account for these issues and that these ethical and policy dilemmas must be addressed in order to appropriately overhaul the relevant public policies in the European context.
  • Smuha, N. A. (2019). The EU approach to ethics guidelines for trustworthy artificial intelligence. CRi-Computer Law Review International, 20(4), 97-106. https://ssrn.com/abstract=3443537
    • This article reviews the AI ethics guidelines offered by the High-Level Expert Group on AI (AI HLEG) established by the European Commission. The author explicates the context, aim and purpose of the guidelines, while considering key issues of AI ethics and governance. The author concludes by positioning the guidelines in an international context and suggests future goals.
  • Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751-752. https://doi.org/10.1126/science.aat5991
    • This article elaborates on the benefits that AI can offer from a European perspective. The authors argue that regulation alone is not sufficient for the development of ‘good’ AI and that ethics must play a role in the design of technologies, complementing existing regulations to balance the risks and rewards of AI capabilities. The authors argue for the critical importance of a human-centric AI with a view to solving major societal problems.
  • Treleaven, P., Barnett, J., & Koshiyama, A. (2019). Algorithms: Law and regulation. Computer, 52(2), 32-40. https://doi.org/MC.2018.28
    • This article offers important context for the challenges and problems with the regulation of algorithms through legal frameworks and examines their current legality. The authors focus on a variety of algorithmic applications and investigate the associated ethical, legal and technical problems of each, proposing a variety of solutions and suggestions for regulation where they deem it necessary.
  • Villaronga, E. F., Kieseberg, P., & Li, T. (2018). Humans forget, machines remember: Artificial intelligence and the right to be forgotten. Computer Law & Security Review, 34(2), 304-313. https://doi.org/10.1016/j.clsr.2017.08.007
    • This article explains ‘the right to be forgotten’ and its application to AI, transparency and EU privacy law. The authors consider legal and technical issues of data deletion requirements and regulations to conclude that it may not currently be possible to achieve the legal aims of the ‘right to be forgotten’ in the context of AI applications.
  • Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76-99. https://doi.org/10.1093/idpl/ipx005
    • This article considers the state of AI decision-making in the EU after the implementation of the GDPR which stipulated a legal mandate for a ‘right to explanation’ for all automated decisions. The authors question the existence and feasibility of such a right in current EU laws, and argue that the language in regulation boils down to a ‘right to be informed’. The authors argue that the GDPR lacks the necessary language and explicit rights to protect citizens from problematic automated decision-making.

V. Cases & Applications

Chapter 35. Ethics of Artificial Intelligence in Transport (Bryant Walker Smith)⬆︎

  • Andersen, K. E., Köslich, S., Pedersen, B. K. M. K., Weigelin, B. C., & Jensen, L. C. (2017). Do we blindly trust self-driving cars? In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-robot Interaction (pp. 67-68).
    • This paper reports the findings of a study examining the role of trust in the adoption of artificially intelligent technologies. In a study of simulated autonomous driving scenarios, researchers observed that passengers were often too trusting of AI in cases of emergency where human intervention would have been necessary to prevent harm.
  • Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573-1576. https://doi.org/10.1126/science.aaf2654
    • This study considers the social dilemmas that arise in autonomous driving accident scenarios and observes the effect of pre-programmed accident decisions on passenger choices in automated vehicles. In six studies, participants favored self-sacrificing utilitarian AVs but admitted that they would not ride in them. Participants were also shown to disapprove of any regulation that enforced a utilitarian regime for AV algorithms, leading the researchers to conclude that vehicular fatalities could increase if safer algorithmic options are forgone.
  • Borenstein, J., Herkert, J. R., & Miller, K. W. (2019). Self-driving cars and engineering ethics: The need for a system-level analysis. Science and Engineering Ethics, 25, 383–398. https://doi.org/10.1007/s11948-017-0006-0
    • This paper argues that individual-level analyses are insufficient for determining the impacts of AI on human life and society. The authors argue that current ethical discussions on transportation and automation must be considered alongside a system-level analysis that considers the interaction between other vehicles and existing transportation systems. The authors observe the need for analysis of instantaneous and coordinated decisions by cars, groups of cars and other technologies, and worry that a rush toward AVs without coordinated system-level policy and legal considerations could compromise safety and consumer autonomy.
  • Caro, R. A. (1974). The power broker: Robert Moses and the fall of New York. Alfred A. Knopf Incorporated.*
    • This is a biography of Robert Moses, a prominent public official in the urban planning and development of New York City. Moses’ work as an urban developer significantly shaped the New York metropolitan area and affected many lives. The biography reveals how his planning led to an arid urban landscape full of public housing failures and barriers to humane living, which (among other things) led to his downfall. In spite of these concerns, Moses was able to accomplish his ‘ideal’ urban plan, the remnants of which can still be seen in New York today.
  • Coca-Vila, I. (2018). Self-driving cars in dilemmatic situations: An approach based on the theory of justification in criminal law. Criminal Law and Philosophy12(1), 59-82. https://doi.org/10.1007/s11572-017-9411-3
    • This article considers dilemmatic decisions in the context of automated driving and draws from the logic of criminal law to argue for a deontological approach in algorithmic decision-making. The author argues against the common utilitarian logic on the grounds that the maximization of social utility cannot justify negative interference in a person’s legal sphere under a legal system that recognizes individualistic freedoms, rights and responsibilities.
  • Contissa, G., Lagioia, F., & Sartor, G. (2017). The ethical knob: Ethically-customizable automated vehicles and the law. Artificial Intelligence and Law, 25(3), 365-378. https://doi.org/10.1007/s10506-017-9211-z
    • This article re-considers the notion of pre-programmed AVs by theorizing the ‘ethical knob’, which enables users to customize their vehicle and choose between various moral principles that would be acted upon by the vehicle in accident scenarios. The vehicle would thus be trusted to act on the user’s decision and the manufacturer would be expected to program the vehicle accordingly. The article subsequently addresses the evident issues of ethics, law and liability that would arise from such a proposal.
  • Douma, F. (2004). Using ITS to better serve diverse populations. Minnesota Department of Transportation Research Services. http://www.cts.umn.edu/Research/ProjectDetail.html?id=2003020*
    • This report investigates how intelligent transportation systems (ITS) can serve the needs of populations that are otherwise unaddressed by conventional transportation planning. The report observes that current transport planning centers on the single car and acknowledges that this mode of transport is insufficient for diverse populations for whom cars may be inaccessible. The report presents demographic and survey data on those who would benefit most from ITS applications.
  • Epting, S. (2019). Transportation planning for automated vehicles—or automated vehicles for transportation planning? Essays in Philosophy, 20(2), 189-205. https://doi.org/10.7710/1526-0569.1635
    • This paper considers the trend of transport planning that centers itself around automated vehicles rather than incorporating them into existing mobility goals. The author observes that self-driving technology is often perceived as a solution for all urban mobility problems, but argues that this view often leads to planning that prioritizes AVs rather than planning that uses AVs as a means to achieve broader transit goals. As argued, transport developers should instead focus on planning that is human-centric and aims at sustainability and transportation justice.
  • Ethics Commission on Automated and Connected Driving. (2017). Automated and connected driving. German Federal Ministry of Transport and Digital Infrastructure. https://www.bmvi.de/SharedDocs/EN/publications/report-ethics-commission.html*
    • This publication by the Federal Ministry of Transport in Germany provides a general overview of the ethical and legal problems of automated and connected driving. The report offers twenty guidelines for automated driving, examines the ethical and legal policy decisions that must be made when programming autonomous driving software, and considers how this can be accomplished without displacing the human from the center of AI legal regimes.
  • Faulhaber, A. K., Dittmer, A., Blind, F., Wächter, M. A., Timm, S., Sütfeld, L. R., Stephan, S., Pipa, G., & König, P. (2019). Human decisions in moral dilemmas are largely described by utilitarianism: Virtual car driving study provides guidelines for autonomous driving vehicles. Science and Engineering Ethics, 25(2), 399-418. https://doi.org/10.1007/s11948-018-0020-x
    • This article outlines a study that subjected participants to a variety of trolley dilemmas in simulated driving environments. The study observed that participants generally decided based on a utilitarian principle that minimized overall harm for all parties. The researchers argue that this study and its results can provide a justified basis for mandatory utilitarian regimes in all autonomous vehicles, as opposed to customized ethical settings, which could yield greater harms in accident scenarios.
  • Himmelreich, J. (2018). Never mind the trolley: The ethics of autonomous vehicles in mundane situations. Ethical Theory and Moral Practice, 21, 669–684. https://doi.org/10.1007/s10677-018-9896-4
    • This article considers ethical quandaries that arise in ‘mundane’ driving situations and conditions to suggest that these scenarios are far more important and relevant than ‘trolley cases’ because of their specificity and scale. As argued, mundane situations in human driving are matters of intuitive decision-making whereas mundane driving in AI is a matter of policy, where small differences in algorithms could lead to large (possibly unintended) consequences.
  • Kalra, N., & Groves, D. G. (2017). The enemy of good: Estimating the cost of waiting for nearly perfect automated vehicles. Rand Corporation.*
    • This book focuses on the risks and rewards of autonomous vehicles and asks how safe autonomous vehicles must be before they are deployed for consumer use. The report uses a RAND model of automated vehicle safety to compare vehicular fatalities when self-driving vehicles are cleared for use at various levels of capability relative to human ability. The report concludes that waiting for AI technology to improve is never beneficial and leads to higher fatalities and greater human costs.
  • Millard-Ball, A. (2018). Pedestrians, autonomous vehicles, and cities. Journal of Planning Education and Research, 38(1), 6-12. https://doi.org/10.1177/0739456X16675674
    • This article considers the interactions between autonomous vehicles and pedestrians in crosswalk yield scenarios. The author argues (as suggested by a model) that the risk-averse nature of autonomous vehicles will confer impunity to pedestrians, which may cause a transformation from automobile-oriented urban neighborhoods to pedestrian-oriented ones. The author notes that with the increased desirability of walking as a form of transportation in pedestrian-oriented cities, the advantages of autonomous driving systems could become questionable.
  • Nyholm, S., & Smids, J. (2018). Automated cars meet human drivers: Responsible human-robot coordination and the ethics of mixed traffic. Ethics and Information Technology. https://doi.org/10.1007/s10676-018-9445-9
    • This paper discusses issues of ethics and responsibility that arise from coordination problems in mixed traffic conditions between human and self-driven vehicles. The authors compare human and AI driving patterns to argue that there must be more focus on the ethics of mixed traffic and human-AI interaction.
  • Papa, E., & Ferreira, A. (2018). Sustainable accessibility and the implementation of automated vehicles: Identifying critical decisions. Urban Science, 2(1), 5. https://doi.org/10.3390/urbansci2010005
    • This article argues that there are a variety of ways in which AVs can impose negative effects on everyday life, which must be heavily scrutinized. The authors argue that AVs have the potential to seriously aggravate accessibility issues, and identify critical decisions that must be made in order to capitalize on the possible accessibility benefits (rather than costs) yielded by AI.
  • Rothstein, R. (2017). The color of law: A forgotten history of how our government segregated America. Liveright Publishing. *
    • This book provides an analysis of contemporary racial segregation throughout American neighborhoods and argues that this segregation is the result of deliberate government policy rather than the commonly referenced factors of wealth and societal prejudice. Rothstein argues that these policies have systematically discriminated against black communities, with a direct effect on current wealth and education gaps between black and white Americans.
  • Ryan, M. (2019). The future of transportation: Ethical, legal, social and economic impacts of self-driving vehicles in the year 2025. Science and Engineering Ethics. https://doi.org/10.1007/s11948-019-00130-2
    • This article provides a forward-looking outlook concerning the development of automated vehicles (AV) between 2019 and 2025. The author extrapolates the current trajectory of AV technology and policy development to construct a vision of the likely future in 2025. The paper considers legal, social and economic implications of AV deployment including privacy, liability, data governance and safety. The author intends to show how policymakers’ current actions will affect the development of AV in the future.
  • SAE International. (2016). Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. https://www.sae.org/standards/content/j3016_201806/*
    • This document explains autonomous driving systems that perform ‘dynamic driving tasks’ and provides a full taxonomy of relevant definitions and categories of automated driving ranging from no automation (level 0) to full automation (level 5). The terms provided are intended to be used across the autonomous driving industry to maintain coherence and consistency when referring to driving systems.
  • Smith, B. W. (2017). How governments can promote automated driving. New Mexico Law Review, 47(1), 99-138. http://ssrn.com/abstract=2749375 *
    • This article recognizes the common desire among governments to accelerate the development and deployment of automated driving technologies in their respective jurisdictions, and provides steps that can be taken by governments to encourage this process. The author argues that governments must do more than pass ‘autonomous driving laws’ and should instead take a nuanced approach that recognizes the various technologies, applications and applicable laws that apply to autonomous vehicles.
  • Smith, B. W. (2015). Regulation and the risk of inaction. In M. Maurer, J. Gerdes, B. Lenz & H. Winner (Eds.), Autonomes Fahren (pp. 593-609). Springer. *
    • This article considers how risk is allocated under uncertainty, and who determines this allocation, in the context of autonomous driving. The author focuses on the role that legislatures, administrative agencies and courts play in developing relevant rules, regulations and verdicts, and proposes eight strategies that can serve as a meta-regulation of these processes.
  • Sparrow, R., & Howard, M. (2017). When human beings are like drunk robots: Driverless vehicles, ethics, and the future of transport. Transportation Research Part C: Emerging Technologies, 80, 206-215. https://doi.org/10.1016/j.trc.2017.04.014
    • This article pushes back against the prevailing narrative that autonomous vehicles will save lives, observing that many automated systems depend on human supervision, which produces more dangerous outcomes than anticipated. Once vehicles become fully autonomous, however, the authors argue against the moral permissibility of manual driving.
  • Taeihagh, A., & Lim, H. S. M. (2019). Governing autonomous vehicles: Emerging responses for safety, liability, privacy, cybersecurity, and industry risks. Transport Reviews, 39(1), 103-128. https://doi.org/10.1080/01441647.2018.1494640
    • This article assesses the risks of automated vehicles and the solutions available to governments to address them. The authors conclude that governments have largely avoided stringent and legally binding measures in an effort to encourage future AI development. They provide data and analysis from the US, UK and Germany to observe that while these countries have taken some steps toward legislation, most others have not implemented any specific strategy that acknowledges the issues presented by AI.
  • Uniform Law Commission. (2019). Uniform automated operation of vehicles act. https://www.uniformlaws.org/committees/community-home?CommunityKey=4e70cf8e-a3f4-4c55-9d27-fb3e2ab241d6*
    • This is a proposed legislative document that concerns the regulation and operation of autonomous vehicles. The act covers the deployment and licensing process of automated vehicles on public roads, and attempts to adapt existing US vehicle codes to accommodate for this deployment. The act also stresses the need for a legal entity to address issues of vehicle licensing, ownership, liability and responsibility.
  • United Nations Global Forum for Road Traffic Safety. (2018). Resolution on the deployment of highly and fully automated vehicles in road traffic. https://undocs.org/pdf?symbol=en/ECE/TRANS/WP.1/2018/4/REV.3*
    • This is a UN resolution that is dedicated to road safety and the safe deployment of self-driving technologies on public roads. The resolution is not legally binding but intended to serve as a guide for nations dealing with the implementation of autonomous technologies. It offers recommendations to ensure safe interaction between autonomous and conventional driving technology.
  • United States Department of Transportation. (2018). Preparing for the future of transportation: Automated vehicles 3.0. https://www.transportation.gov/av/3*
    • This is the third iteration of a report developed by the US Department of Transportation (DOT) which is intended to highlight the DOT’s interest in promoting safe, reliable and cost-effective deployment of automated technologies into various modes of surface transportation. The report includes six principles to guide policy and five strategies for implementation based on the principles.
  • Wolkenstein, A. (2018). What has the trolley dilemma ever done for us (and what will it do in the future)? On some recent debates about the ethics of self-driving cars. Ethics and Information Technology, 20(3), 163-173. https://doi.org/10.1007/s10676-018-9456-6
    • This article considers how the trolley problem is often cited in literature and public debates related to autonomous vehicles by claiming to provide practical guidance on AI ethics for self-driving cars. Through an analysis of relevant sources, the author argues that although the philosophical considerations bestowed by the trolley problem may be theoretically worthwhile, the trolley problem is ultimately unhelpful in programming and passing legislation for automated driving technologies.

Chapter 36. The Case for Ethical AI in the Military (Jai Galliott and Jason Scholz)⬆︎

  • Arkin, R. (2009). Governing lethal behavior in autonomous robots. CRC Press.*
    • This book argues in favor of, and presents a framework for, the implementation of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system, such that the system adheres to the Laws of War and Rules of Engagement.
  • Arkin, R. C. (2010). The case for ethical autonomy in unmanned systems. Journal of Military Ethics, 9(4), 332-341.
    • This article appeals to ongoing and foreseen technological advances, and assessments of human abilities as forces of warfare to argue in favor of the ethical autonomy of lethal autonomous unmanned systems. In addition to their capacity for autonomy, the article argues that these systems will potentially be capable of performing more ethically on the battlefield than human soldiers.
  • Asaro, P. (2019). Algorithms of violence: Critical social perspectives on autonomous weapons. Social Research, 86(2), 537-555.
    • This paper takes a critical long-term view toward lethal autonomous weapons systems (LAWS), and investigates how the development and widespread adoption and use of these systems might transform the politics and economics of our societies. The paper argues that if we are to have any hope of reining in the power exerted by algorithms over our political, economic, and social lives, and of shaping a future technology that supports democratic values and human rights, it is essential that algorithms of violence are seen as unacceptable.
  • Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J. F., & Rahwan, I. (2018). The moral machine experiment. Nature, 563(7729), 59-64.*
    • This article addresses concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide these machines. The authors utilize the Moral Machine, an online experimental platform, to gather data, which they analyze to arrive at a recommendation for how machine decision-making should be determined.
  • Bhuta, N., Beck, S., Geiß, R., Liu, H., & Kreß, C. (Eds.). (2016). Autonomous weapons systems: Law, ethics, policy. Cambridge University Press. https://doi.org/10.1017/CBO9781316597873
    • This collection combines contributions from roboticists, legal scholars, philosophers and sociologists of science in order to recast the debate over autonomous weapons systems in a manner that clarifies key areas and articulates questions for future research. The contributors develop insights with direct policy relevance, including who bears responsibility for autonomous weapons systems, whether they would violate fundamental ethical and legal norms, and how to regulate their development.
  • Bloom, P. (2020). Identity, institutions and governance in an AI world: Transhuman relations. Springer Nature.
    • This book analyzes the relationship between humanity and AI, and develops a framework for future relations based on infusing programming with values of social justice, protecting the rights and views of all forms of “consciousness” and creating the structures and practices necessary for encouraging a culture of “mutual intelligent design.”
  • Chandler, K. (2020). Unmanning: How humans, machines and media perform drone warfare. Rutgers University Press.
    • The key contributions that Unmanning makes to the field of critical military studies are to problematize what drones and unmanned aircraft are through an analysis of their history; to demonstrate how the networked actions between human and nonhuman that comprise unmanned aircraft operate through duplicity; and to examine the failures, at once technological, social, and political, that are central to the development, experimental use, and deployment of drones.
  • Enemark, C. (2013). Armed drones and the ethics of war: Military virtue in a post-heroic age. Routledge.
    • This book assesses the ethical implications of using armed unmanned aerial vehicles in contemporary conflicts, by analyzing them in context of ethical principles that are intended to guard against unjust increases in the incidence and lethality of armed conflict. The book weighs evidence that indicates that the use of armed drones is to be welcomed as an ethically superior mode of warfare against the argument that continued and increased use may ultimately do more harm than good.
  • Galliott, J. (2015). Military robots: Mapping the moral landscape. Ashgate Publishing, Ltd.*
    • This book uses the lens of the rise of drone warfare to explore and analyze the moral, political and social questions that have arisen in the contemporary era of warfare. Some examples of these issues are concerns of who may be legitimately targeted in warfare, the collateral effects of military weaponry and the methods of determining and dealing with violations of the laws of war.
  • Galliott, J. (2016). Defending Australia in the digital age: Toward full spectrum defence. Defence Studies, 16(2), 157-175.*
    • This paper argues that Australia’s defense strategy is incomplete or at least inefficient. The author argues this is the consequence of a crippling geographically focused strategic dichotomy, caused by the armed forces historically having been structured to venture afar as a small part of a large coalition force or, alternatively, to combat small regional threats across land, sea, and air.
  • Galliott, J. (2017). The limits of robotic solutions to human challenges in the land domain. Defence Studies, 17(4), 327-345.*
    • This article explores the limits of robotic solutions to military problems, encompassing technical limitations and redundancy issues that point to the need to introduce a framework compatible with the adoption of robotics while preserving existing levels of human staffing.
  • Horowitz, M. C. (2016). The ethics & morality of robotic warfare: Assessing the debate over autonomous weapons. Daedalus, 145(4), 25-36.
    • This essay describes and assesses the ongoing debate over autonomous weapons, focusing on the ethical implications of whether autonomous weapons can operate effectively, whether human accountability and responsibility for autonomous weapon systems are possible, and whether delegating life and death decisions to machines inherently undermines human dignity. The concept of lethal autonomous weapon systems (LAWS) is extremely broad, and this essay considers LAWS in three categories: munitions, platforms, and operational systems.
  • Lara, F., & Deckers, J. (2019). Artificial intelligence as a Socratic assistant for moral enhancement. Neuroethics. https://doi.org/10.1007/s12152-019-09401-y
    • This article first explores the issue of human enhancement to increase morality. It then argues that AI is preferable to biotechnology as a means of achieving this goal.
  • Laukyte, M. (2017). Artificial agents among us: Should we recognize them as agents proper? Ethics and Information Technology, 19(1), 1-17.
    • This article explores the issue of recognizing agency in artificial or nonhuman agents. The author argues that for an artificial agent to be recognized as an agent with a claim to rights, it will need to meet the same basic conditions of agency that recognized group agents satisfy, and concludes that artificial agents do not meet these conditions.
  • Leben, D. (2018). Ethics for robots: How to design a moral algorithm. Routledge.*
    • In this book, Leben describes and defends a framework for designing and evaluating the ethical algorithms that will govern autonomous machines. The book argues that these algorithms should be evaluated by how effectively they solve the problem of cooperation among self-interested organisms, and that they must therefore be tailored to the artificial agents at hand rather than designed to simulate evolved psychological systems.
  • Leveringhaus, A. (2018). What’s so bad about Killer Robots? Journal of Applied Philosophy, 35, 341-358. https://doi.org/10.1111/japp.12200
    • Offering a precise take on the relevant conceptual issues, the article contends that Killer Robots are best seen as executors of targeting decisions made by their human programmers. However, the article asserts that from a normative perspective, the execution of targeting decisions by Killer Robots should worry us. The article argues that what is morally bad about Killer Robots is that they replace human agency in warfare with artificial agency, a development which should be resisted.
  • Lin, P., Abney, K., & Jenkins, R. (Eds.). (2017). Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford University Press.*
    • This book presents a wide and updated range of contemporary ethical issues facing the field of robotics, utilizing new use-cases for robots and their challenges to build a global representation of the contemporary questions in the field.
  • Lin, P., Bekey, G., & Abney, K. (2008). Autonomous military robotics: Risk, ethics, and design. California Polytechnic State University San Luis Obispo.*
    • This paper presents and explores the issues that need to be considered in responsibly introducing advanced technologies into the battlefield and, eventually, into society. It makes a presumptive case for the use of autonomous military robotics and then considers the issues that come with this decision, including the need to address risk and ethics in the field, near- and far-term ethical and social issues, and recommendations for future work.
  • Strawser, B. J. (Ed.). (2013). Killing by remote control: The ethics of an unmanned military. Oxford University Press.
    • This text explores the ethical permissibility of the use of unmanned mediated mechanisms in warfare. It includes discussions of broader issues such as the just war tradition and the ethics of war, as well as more specific issues surrounding the use of drones, such as the practice of “targeted killing” by the United States.
  • Parks, L. & Kaplan, C. (Eds.). (2017). Life in the age of drone warfare. Duke University Press.
    • This book brings together scholars and artists to explore the historical, geopolitical, and cultural dimensions of drone warfare. Contributors explore drones in three critical aspects: first, the juridical dimensions of drone warfare are investigated in relation to systems of governance and “lawfare”; second, drones are considered through the registers of the sensory and the perceptual; and third, the book inquires into the ways in which power works through biopolitical technologies.
  • Righetti, L., Pham, Q., Madhavan, R., & Chatila, R. (2018). Lethal autonomous weapon systems [Ethical, legal, and societal issues]. IEEE Robotics & Automation Magazine, 25(1), 123-126. https://doi.org/10.1109/MRA.2017.2787267
    • This column reviews the main issues raised by the increase of autonomy in weapon systems and the state of the international discussion. It argues that the robotics community has a fundamental role to play in these discussions, to provide the often-missing technical expertise necessary to frame the debate and promote technological development in line with the IEEE Robotics and Automation Society (RAS) objective of advancing technology to benefit humanity.
  • Roff, H. M. (2014). The strategic robot problem: Lethal autonomous weapons in war. Journal of Military Ethics, 13(3), 211-227. https://doi.org/10.1080/15027570.2014.975010
    • This paper argues that we must look to the targeting process if we are to gain a fuller picture of the consequences of creating or fielding lethal autonomous robots. Once we look at how militaries actually create military objectives, and thus identify potential targets, we face an additional problem: the Strategic Robot Problem. The ability to create targeting lists using military doctrine and targeting processes is inherently strategic, and handing this capability over to a machine undermines existing command and control structures and renders the role of humans redundant. The Strategic Robot Problem thus provides prudential and moral reasons for caution in the race for increased autonomy in war.
  • Scholz, J., & Galliott, J. (2018). Artificial intelligence in weapons: The moral imperative for minimally-just autonomy. US Air Force Journal of Indo-Pacific Affairs, 1(2), 57-67.*
    • This article argues that for military power to be lawful and morally just, future autonomous artificial intelligence (AI) systems must not commit humanitarian errors. The authors therefore propose a preventative form of minimally-just autonomy using artificial intelligence (MinAI), which would avert attacks on protected symbols and sites and recognize signals of surrender.
  • Sparrow, R. (2009). Building a better WarBot: Ethical issues in the design of unmanned systems for military applications. Science and Engineering Ethics, 15(2), 169-187.*
    • This article explores how designers of unmanned military systems must consider ethical, as well as operational, requirements and limits when developing such systems. The author presents two groups of such ethical issues: Building Safe Systems and Designing for the Law of Armed Conflict.
  • Sullins, J. P. (2006). When is a robot a moral agent? In M. Anderson & S. L. Anderson (Eds.), Machine Ethics (pp. 151-160). Cambridge University Press.
    • This paper argues that robots can be seen as real moral agents under specific conditions: first, the robot must be significantly autonomous from any programmers or operators of the machine; second, the robot’s behavior must exhibit ‘intention’; and finally, the robot must behave in a way that shows an understanding of responsibility to some other moral agent.
  • Vakkuri, V., & Abrahamsson, P. (2018). The key concepts of ethics of artificial intelligence. In 2018 IEEE International Conference on Engineering, Technology and Innovation (ICE/ITMC) (pp. 1-6). IEEE.
    • This paper presents a philosophical conceptualization as a framework for a practical implementation model for the ethics of AI, mapping keywords used in the field of AI to identify its key issues and to guide future explorations.
  • Winfield, A. F., Michael, K., Pitt, J., & Evers, V. (2019). Machine ethics: The design and governance of ethical AI and autonomous systems. Proceedings of the IEEE, 107(3), 509-517.
    • This paper argues that the rise of robotics, AI, autonomous systems and information technology is not solely an academic concern, but a matter for political as well as public debate. This paper thus collates and presents various perspectives on future governance and engineering of such technology.

Chapter 37. The Ethics of AI in Biomedical Research, Patient Care, and Public Health (Alessandro Blasimme and Effy Vayena)⬆︎

Biomedical Research

  • Blasimme, A., & Vayena, E. (2016). “Tailored-to-You”: Public engagement and the political legitimation of precision medicine. Perspectives in Biology and Medicine, 59(2), 172-188.
    • This article outlines a detailed history of personalized medicine in its sociotechnical and legislative context in the United States, with a particular focus on the 2015 federal Precision Medicine Initiative. The authors emphasize the interplay between scientific and social factors, especially the importance of a “participatory ethos” and public engagement in building political support for innovative biomedical paradigms.  
  • Geneviève, L. D., Martani, A., Shaw, D., Elger, B. S., & Wangmo, T. (2020). Structural racism in precision medicine: Leaving no one behind. BMC Medical Ethics, 21(1), 1-13.
    • This paper examines precision medicine through the lenses of structural racism and equity. The authors examine how systemic racism can shape precision medicine through its impacts on initial data generation, data analysis, and the final implementation of models. They warn against the possibility that machine learning technologies will exacerbate these structural problems and offer a range of potential solutions at each step in the precision medicine process.
  • Hollister, B., & Bonham, V. L. (2018). Should electronic health record-derived social and behavioral data be used in precision medicine research? AMA Journal of Ethics, 20(9), 873-880.
    • This article explores the ethical and practical issues surrounding the inclusion of social and behavioral information from electronic health records in precision medicine research. The authors argue that this data is often inconsistently collected and of low quality, and that its sensitive nature presents a significant risk of patient harm if it is misused.
  • Ienca, M., Ferretti, A., Hurst, S., Puhan, M., Lovis, C., & Vayena, E. (2018). Considerations for ethics review of big data health research: A scoping review. PLoS ONE, 13(10). https://doi.org/10.1371/journal.pone.0204937*
    • The methodological novelty and computational complexity of big data health research raises novel challenges for ethics review. This paper reviews the literature to identify and map the major challenges of health-related big data for Ethics Review Committees. The findings suggest that while big data trends in biomedicine hold the potential for advancing clinical research, improving prevention and optimizing healthcare delivery, several epistemic, scientific and normative challenges need careful consideration.
  • Landry, L. G., Ali, N., Williams, D. R., Rehm, H. L., & Bonham, V. L. (2018). Lack of diversity in genomic databases is a barrier to translating precision medicine research into practice. Health Affairs, 37(5), 780-785.*
    • Precision medicine often uses molecular biomarkers to assess patients’ prognosis and therapeutic response more precisely. This paper examines which populations were included in studies using two public genomic databases and finds significantly fewer studies of African, Latin American, and Asian ancestral populations compared to European populations. While the number of genomic research studies that include non-European populations is improving, the overall numbers are still low, representing a potential for inequities in precision medicine applications.
  • Park, S. H., Kim, Y. H., Lee, J. Y., Yoo, S., & Kim, C. J. (2019). Ethical challenges regarding artificial intelligence in medicine from the perspective of scientific editing and peer review. Science Editing. https://doi.org/10.6087/kcse.164
    • This review article highlights several aspects of research studies on artificial intelligence (AI) in medicine that require additional transparency and explain why additional transparency is needed. Transparency regarding training data, test data and results, interpretation of study results, and the sharing of algorithms and data are major areas for guaranteeing ethical standards in AI research.
  • Vayena, E., & Blasimme, A. (2017). Biomedical big data: New models of control over access, use and governance. Journal of Bioethical Inquiry, 14(4), 501-513.
    • This article challenges the notion that the collection of biomedical big data necessitates a loss of individual control. Rather, it proposes three approaches to empowering the individual: (1) data portability rights, (2) new mechanisms of informed consent, and (3) new schemes of participatory governance.
  • Vayena, E., & Blasimme, A. (2018). Health research with big data: Time for systemic oversight. The Journal of Law, Medicine & Ethics, 46(1), 119-129.*
    • This article proposes a new paradigm for the ethical oversight of biomedical research in alignment with the ubiquity of big data as opposed to suggesting updates and fixes for existing models. This paradigm, systemic oversight, is based on six core features: (1) adaptivity, (2) flexibility, (3) monitoring, (4) responsiveness, (5) reflexivity, and (6) inclusiveness.
  • Vollmer, S., Mateen, B. A., Bohner, G., Király, F. J., Ghani, R., Jonsson, P., Cumbers, S., Jonas, A., McAllister, K. S. L., Myles, P., Grainger, D., Birse, M., Branson, R., Moons, K. G. M., Collins, G. S., Ioannidis, J. P. A., Holmes, C., & Hemingway, H. (2020). Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness. BMJ, 368. https://doi.org/10.1136/bmj.l6927
    • Structured around a series of twenty “critical questions” to be asked during the development process, this article explores issues of transparency, replicability, ethics, and effectiveness in the implementation of AI in clinical medicine. The authors emphasize the complex socio-technical context into which these algorithms are implemented and discuss necessary requirements for AI to be rigorously considered effective in clinical practice.
  • Wiens, J., Saria, S., Sendak, M., Ghassemi, M., Liu, V. X., Doshi-Velez, F., Jung, K., Heller, K., Kale, D., Saeed, M., Ossorio, P. N., Thadaney-Israni, S., & Goldenberg, A. (2019). Do no harm: A roadmap for responsible machine learning for health care. Nature Medicine, 25(9), 1337–1340. https://doi.org/10.1038/s41591-019-0548-6
    • This article engages with the issue of responsible machine learning in healthcare from the perspective of interdisciplinary model development and deployment teams. On the development side, the authors outline concerns related to selecting the right problems, developing clinically useful solutions, considering the proximal and distal ethical implications of such solutions, and evaluating the resulting models in rigorous and consistent ways. On the implementation side, they outline issues related to deployment, marketing, and results-reporting for these models.

Clinical Medicine

  • Blasimme, A., & Vayena, E. (2016). Becoming partners, retaining autonomy: Ethical considerations on the development of precision medicine. BMC Medical Ethics, 17(1), 67.
    • This article explores the challenge of engaging patients and their perspectives in the precision medicine clinical research process. The authors explore the normative construction of research participation and partnership, as well as tensions between individual and collective interests. They advocate for the concept of “respect for autonomous agents” (as opposed to autonomous action or choice) as a potential mechanism for resolving these ethical tensions.
  • Blasimme, A., Vayena, E., & Van Hoyweghen, I. (2019). Big data, precision medicine and private insurance: A delicate balancing act. Big Data & Society, 6(1). https://doi.org/10.1177/2053951719830111
    • Using national precision medicine initiatives as a case study, this article explores the tension between private insurers leveraging repositories of genetic and phenotypic data for economic gain and the utility of these databases as a public, scientific resource. Although the authors admit that information asymmetry between insurance companies and their policy-holders still leads to risks in reduced research participation, adverse selection, and discrimination, they argue that a governance model underpinned by trustworthiness, openness, and evidence can balance these competing interests.
  • Canadian Association of Radiologists (CAR) Artificial Intelligence Working Group. (2019). Canadian Association of Radiologists white paper on ethical and legal issues related to artificial intelligence in radiology. Canadian Association of Radiologists’ Journal, 70(2), 107-118.
    • Radiology is positioned to lead the development and implementation of AI algorithms. This white paper from the Canadian Association of Radiologists provides a framework for the study of the legal and ethical issues related to AI in medical imaging, including patient data (privacy, confidentiality, ownership, and sharing), algorithms (levels of autonomy, liability, and jurisprudence), practice (best practices and current legal framework), and, finally, opportunities in AI from the perspective of a universal health care system.
  • Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T., & Tsaneva-Atanasova, K. (2019). Artificial intelligence, bias and clinical safety. BMJ Quality & Safety, 28(3), 231-237.*
    • This paper outlines a set of short-term and medium-term clinical safety issues raised by machine learning enabled decision-making software. This framework is supported by a set of quality control questions that are designed to help clinical safety professionals and those involved in developing ML systems to identify areas of concern. The authors encourage rigorous testing of new ML systems through randomized control testing, and by comparing to existing practices.
  • Char, D. S., Shah, N. H., & Magnus, D. (2018). Implementing machine learning in health care—addressing ethical challenges. The New England Journal of Medicine, 378(11), 981-983.
    • This article discusses ethical challenges in the clinical implementation of machine learning systems. In addition to more “straightforward” ethical challenges such as bias and discrimination, the authors discuss “less obvious” risks, such as algorithms being incentivized toward high-profit care, providing excessive legitimacy to medically uncertain decisions, or undermining the clinical experience of physicians. They outline a call for reshaping both medical education and codes of medical ethics in light of these concerns.
  • Chen, I. Y., Joshi, S., & Ghassemi, M. (2020). Treating health disparities with artificial intelligence. Nature Medicine, 26(1), 16-17.
    • This article argues that while substantial concerns exist about algorithms amplifying bias in medicine, algorithms may also play an important role in identifying and correcting disparities. The authors advocate for understandings of the ethics of AI in healthcare to extend beyond the question of algorithmic fairness, and toward better consideration of the systemic and socioeconomic context of health disparity.
  • Chin-Yee, B., & Upshur, R. (2019). Three problems with big data and artificial intelligence in medicine. Perspectives in Biology and Medicine, 62(2), 237-256.
    • This paper engages with three important philosophical challenges facing “big data” and artificial intelligence in medicine. The authors outline an epistemological-ontological challenge related to the theory-ladenness of big data and measurement, an epistemological-logical challenge related to the inherent limits of algorithms, and a phenomenological challenge related to the irreducibility of human experience to quantitative data. They argue that it is important for the artificial intelligence in medicine movement to engage with its philosophical foundations.
  • Di Nucci, E. (2019). Should we be afraid of medical AI? Journal of Medical Ethics, 45(8), 556-558.
    • This paper argues against ideas that AI represents a threat to patient autonomy. The paper states these ideas often conflate machine learning with AI, miss machine learning’s potential for personalized medicine through big data, and fail to distinguish between evidence-based advice and decision-making within healthcare. Which tasks machine learning performs within healthcare is a crucial question, but care must be taken in distinguishing between the different systems and different delegated tasks.
  • Evans, E. L., & Whicher, D. (2018). What should oversight of clinical decision support systems look like? AMA Journal of Ethics, 20(9), 857-863.
    • This article engages with the use of clinical decision support systems in medicine, arguing that such systems should be subject to ethical and regulatory oversight above and beyond that of normal clinical practice. The authors outline a framework for the development and use of these systems with an emphasis on articulating proper conditions for use, including processes for monitoring data quality and algorithm performance, and protecting patient data.
  • Ferretti, A., Schneider, M., & Blasimme, A. (2018). Machine learning in medicine: Opening the new data protection black box. European Data Protection Law Review, 4(3), 320-332. https://doi.org/10.21552/edpl/2018/3/10
    • Certain approaches to artificial intelligence, notably deep learning, have drawn criticism for their relative inscrutability to human understanding (the “black box” metaphor). This article categorizes the black-box opacity of machine learning systems in medicine into three forms: (1) lack of disclosure as to whether automated decision-making is taking place, (2) epistemic opacity as to how an AI system arrives at a specific outcome, and (3) explanatory opacity as to why an AI system provides a specific outcome. Moreover, the article takes a solution-driven approach, discussing how each of the identified types of opacity can be addressed through the General Data Protection Regulation.
  • Ficuciello, F., Tamburrini, G., Arezzo, A., Villani, L., & Siciliano, B. (2019). Autonomy in surgical robots and its meaningful human control. Paladyn, Journal of Behavioral Robotics, 10(1), 30-43.
    • Focusing on the lens of “Meaningful Human Control” (a term extended from autonomous weapons literature), this paper engages with ethical issues arising from increasing levels of autonomy in surgical robots. The authors review the potential for robotic assistance in minimally invasive surgery and microsurgery and discuss a theoretical framework for levels of surgical robot autonomy based around several levels of “Meaningful Human Control”, each with different burdens of human responsibility and oversight.
  • Fiske, A., Henningsen, P., & Buyx, A. (2019). Your robot therapist will see you now: Ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. Journal of Medical Internet Research, 21(5). https://doi.org/10.2196/13216
    • This paper assesses the ethical and social implications of translating AI applications into mental health care across the fields of psychiatry, psychology, and psychotherapy. Based on a literature search, the paper finds that AI is a promising approach across the field of mental health; however, further research is needed to address the broader ethical and societal concerns of these technologies and to negotiate best research and medical practices in innovative mental health care.
  • Gerke, S., Yeung, S., & Cohen, I. G. (2020). Ethical and legal aspects of ambient intelligence in hospitals. JAMA, 323(7), 601-602.
    • Ambient intelligence in hospitals is an emerging form of technology characterized by a constant awareness of activity in designated physical spaces (e.g. video capture to monitor for hand hygiene, patient movements, etc.), and of the use of that information to assist healthcare workers in delivering quality care. This commentary discusses potential issues these practices raise around patient privacy and reidentification risk, consent, and liability.
  • He, J., Baxter, S. L., Xu, J., Xu, J., Zhou, X., & Zhang, K. (2019). The practical implementation of artificial intelligence technologies in medicine. Nature Medicine, 25(1), 30-36.
    • This article explores practical issues that exist regarding the implementation of AI in clinical workflows, including data sharing difficulties, privacy issues, transparency problems, and concerns for patient safety. The authors argue that these practical issues are global in scope, and engage in a comprehensive comparative discussion of the medical AI regulatory environments in the United States, Europe, and China.
  • Ho, C. W. L., Soon, D., Caals, K., & Kapur, J. (2019). Governance of automated image analysis and artificial intelligence analytics in healthcare. Clinical Radiology, 74(5), 329-337.
    • This paper discusses the nature of AI governance in biomedicine along with its limitations. It argues that radiologists must assume a more active role in propelling medicine into the digital age, including inquiring into the clinical and social value of AI, alleviating deficiencies in their technical knowledge to facilitate ethical evaluation, supporting the recognition and removal of biases, engaging the “black box” obstacle, and brokering a new social contract on informational use and security.
  • Lamanna, C., & Byrne, L. (2018). Should artificial intelligence augment medical decision-making? The case for an autonomy algorithm. AMA Journal of Ethics, 20(9), 902-910.
    • The authors of this article put forward the concept of an “autonomy algorithm”, which might be used to integrate data from social media and electronic health records in order to estimate the likelihood that an incapable patient would have consented to a particular course of treatment. They explore ethical and practical issues in the construction and implementation of such an algorithm, and ultimately argue that it would likely be more reliable and less liable to bias than existing substitute decision-making methods.
  • Luxton, D. D. (2014). Recommendations for the ethical use and design of artificial intelligent care providers. Artificial Intelligence in Medicine, 62(1), 1-10.
    • This paper identifies and reviews ethical issues associated with artificial intelligent care providers in mental health care and other helping professions. It finds that existing ethics codes and practice guidelines do not presently consider the current or the future use of interactive artificial intelligent agents to assist and to potentially replace mental health care professionals. Specific recommendations are made for the development of ethical codes, guidelines, and the design of these systems.
  • Martinez-Martin, N., Dunn, L. B., & Roberts, L. W. (2018). Is it ethical to use prognostic estimates from machine learning to treat psychosis? AMA Journal of Ethics, 20(9), 804-811.
    • Building on the case study of a recent machine learning model for predicting prognosis for patients with psychosis, this article engages with the ethics of AI in psychiatry specifically, as well as the ethics of implementing innovation in clinical medicine more broadly. In particular, the authors examine the burdens that are placed upon physicians in understanding and engaging with novel technologies, and the challenges with communicating risks sufficiently to enable informed consent.
  • McDougall, R. J. (2019). Computer knows best? The need for value-flexibility in medical AI. Journal of Medical Ethics, 45(3), 156–160. https://doi.org/10.1136/medethics-2018-105118
    • Focusing on the case study of IBM’s “Watson for Oncology”, this paper engages with issues related to shared decision-making in medical AI. The author argues that the use of fixed and covert value judgments underlying AI systems risks excluding patient perspectives and increasing medical paternalism. Conversely, she argues that AI systems can be “value-flexible” if developed to explicitly incorporate patient values and perspectives, and in doing so may remedy existing challenges in shared decision-making. 
  • Nebeker, C., Torous, J., & Ellis, R. J. B. (2019). Building the case for actionable ethics in digital health research supported by artificial intelligence. BMC Medicine, 17(1), 137. https://doi.org/10.1186/s12916-019-1377-7
    • Placing a particular focus on direct-to-consumer digital therapeutics, this article examines the current ethical and regulatory environment for digital health. The authors describe the current situation as a “wild west” with little regulation and identify gaps and opportunities in terms of building interdisciplinary collaboration, improving digital literacy, and developing ethical standards. They conclude by summarizing several initiatives already underway to address these gaps.
  • Nundy, S., Montgomery, T., & Wachter, R. M. (2019). Promoting trust between patients and physicians in the era of artificial intelligence. JAMA, 322(6), 497-498.
    • This paper discusses how AI will affect trust between physicians and patients. It defines the three components of trust as competency, motive, and transparency, and explores how AI-enabled health applications may impact each of these domains. The paper concludes that by reaffirming the foundational importance of trust to health outcomes and engaging in deliberate system transformation, the benefits of AI can be realized while strengthening patient-physician relationships.
  • Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.
    • This paper engages in a quantitative analysis and discussion of racial bias in a commercial algorithm for stratifying the risk of patients with chronic disease. The authors show that the algorithm unfairly classifies black patients as requiring less care than white patients of equivalent acuity, and they trace this disparity to the use of cost of care as a surrogate for health needs, which fails to account for structural disparities in access to care (see the sketch below). They offer a discussion of measures that can be taken to avoid similar problems.
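    • Illustrative note: the following is a minimal Python sketch of the label-choice bias Obermeyer et al. document; it is our illustration rather than the authors’ method, and all names and numbers are invented. A risk score built on observed cost ranks two equally sick patients differently when their access to care differs, while a need-based proxy does not:

        from dataclasses import dataclass

        @dataclass
        class Patient:
            chronic_conditions: int   # crude stand-in for true health need
            annual_cost: float        # observed spending, shaped by access to care

        def risk_by_cost(p: Patient) -> float:
            # Surrogate objective criticized by the study: need proxied by historical cost.
            return p.annual_cost

        def risk_by_need(p: Patient) -> float:
            # Alternative objective: need proxied by health status.
            return float(p.chronic_conditions)

        # Equal need (five chronic conditions), unequal spending due to unequal access.
        a = Patient(chronic_conditions=5, annual_cost=12_000.0)  # well-resourced care
        b = Patient(chronic_conditions=5, annual_cost=7_000.0)   # barriers to care

        print(risk_by_cost(a) > risk_by_cost(b))    # True: the cost proxy ranks a as needier
        print(risk_by_need(a) == risk_by_need(b))   # True: the need proxy treats them equally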
  • O’Sullivan, S., Nevejans, N., Allen, C., Blyth, A., Leonard, S., Pagallo, U., & Ashrafian, H. (2019). Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. The International Journal of Medical Robotics and Computer Assisted Surgery, 15(1). https://doi.org/10.1002/rcs.1968
    • This paper discusses autonomous robotic surgery with a particular focus on ethics, regulation, and legal aspects (such as civil law, international law, tort law, liability, medical malpractice, privacy, and product/device legislation). It explores responsibility for AI and autonomous surgical robots using the categories of accountability, liability, and culpability, and finds culpability to be the category with the least legal clarity.
  • Price, W. N. (2015). Black-box medicine. Harvard Journal of Law & Technology, 28(2), 419-468.
    • Written from a primarily legal and regulatory perspective, this article engages with the issue of “black box” technologies in precision medicine that are unable to provide a satisfactory explanation of the decisions that are outputted. The author discusses contemporary “Big Data” technology in medicine from practical and theoretical perspectives. He outlines several hurdles to development of this technology and a range of policy challenges including issues of incentives, privacy, regulation, and commercialization.
  • Price, W. N., Gerke, S., & Cohen, I. G. (2019). Potential liability for physicians using artificial intelligence. JAMA, 322(18), 1765-1766.
    • As AI applications enter clinical practice, physicians must grapple with issues of liability when determining how and when to follow (or not follow) the recommendations of these applications. In this article, legal scholars draw upon principles of tort law to discuss when a physician could be held liable for malpractice. The core argument of this paper, the need to analyze whether an AI recommendation is accurate and follows standard-of-care, has been synthesized by the authors in a tabular format.
  • Rampton, V., Mittelman, M., & Goldhahn, J. (2020). Implications of artificial intelligence for medical education. The Lancet Digital Health, 2(3), 111-112. https://doi.org/10.1016/S2589-7500(20)30023-6
    • As AI applications advance in medicine, there is a need to educate health professionals about these applications and their ethical implications. However, the path forward to do so remains unclear. In this article, the authors demonstrate how a popular educational framework for physicians, the Canadian Medical Education Directives for Specialists, can be modified to reflect the impact AI is having and will continue to have in medical practice and in healthcare more broadly.
  • Reddy, S., Allan, S., Coghlan, S., & Cooper, P. (2020). A governance model for the application of AI in health care. Journal of the American Medical Informatics Association, 27(3), 491-497.
    • Concern has been expressed about the ethical and regulatory aspects of the application of AI in health care. While there has been extensive discussion about the ethics of AI in health care, there has been little dialogue as to how to practically address these concerns. This article proposes a governance model that addresses the ethical and regulatory issues arising from the application of AI in health care.
  • Schiff, D., & Borenstein, J. (2019). How should clinicians communicate with patients about the roles of artificially intelligent team members? AMA Journal of Ethics, 21(2), 138-145.
    • This article uses a hypothetical patient scenario to illustrate the difficulties faced when articulating the use of AI in patient care. The authors focus on: (1) informed consent, (2) patient perceptions of AI, and (3) liability when responsibility is distributed among “many hands.” For readers new to the area of medical decision-making, the authors’ case-based approach will be an engaging introduction to the most common pedagogy of medical education.
  • Smallman, M. (2019). Policies designed for drugs won’t work for AI. Nature, 567(7746), 7. https://doi.org/10.1038/d41586-019-00737-2*
    • This paper comments on the UK government’s 2019 code of conduct for artificial intelligence systems in health care. The principles, laid out by the Department of Health and Social Care, aim to protect patient data and ensure safe data-driven technologies. The author argues, however, that the code fails to appreciate AI’s potential to introduce and worsen inequities, and stresses the importance of developing a framework that considers and anticipates the social consequences of AI.
  • Tene, O., & Polonetsky, J. (2011). Privacy in the age of big data: A time for big decisions. Stanford Law Review Online, 64, 63-69.*
    • Big Data creates enormous value for the global economy, driving innovation, productivity, efficiency, and growth. This paper discusses privacy concerns related to big data applications, and suggests that in order to balance beneficial uses of data and the protection of individual privacy, policymakers must address some of the most fundamental concepts of privacy law, including the definition of “personally identifiable information,” the role of consent, and the principles of purpose limitation and data minimization.
  • Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), 44-56.*
    • This review article provides an overview of the impact of AI in medicine at the levels of clinicians, health systems, and patients. It also reviews the current limitations, including bias, privacy and security, and lack of transparency, along with the future directions of these applications. The results reveal that over time, marked improvements in accuracy, productivity, and workflow will likely be actualized, but the potential impact on the patient–doctor relationship remains unknown.
  • Vayena, E., Blasimme, A., & Cohen, I. G. (2018). Machine learning in medicine: Addressing ethical challenges. PLoS Medicine, 15(11). https://doi.org/10.1371/journal.pmed.1002689*
    • In this perspective, the authors outline a four-stage approach to promoting patient trust and provider adoption: (1) alignment with data protection requirements, (2) minimizing the effects of bias, (3) effective regulation, and (4) achieving transparency. Their approach is grounded by referencing the disparate views held on artificial intelligence in healthcare by the general adult population, medical students, and healthcare decision-makers ascertained through recently conducted surveys.
  • Vellido, A. (2019). Societal issues concerning the application of artificial intelligence in medicine. Kidney Diseases, 5(1), 11-17.
    • This paper reflects on a number of specific issues affecting the use of AI and ML in medicine, such as fairness, privacy and anonymity, and explainability and interpretability, as well as broader societal issues, such as ethics and legislation. It additionally argues that AI models must be designed from a human-centered perspective, incorporating human-relevant requirements and constraints.
  • Verghese, A., Shah, N. H., & Harrington, R. A. (2018). What this computer needs is a physician: Humanism and artificial intelligence. JAMA, 319(1), 19-20.
    • This commentary highlights that while AI in medicine will lead to improved accuracy and efficiency, there is concern that the introduction of new tools may adversely impact physicians and lead to burnout, as electronic medical records have. The authors state that we must aim for partnerships in which machines predict and perform tasks such as documentation while physicians explain to patients and decide on action, bringing in the societal, clinical, and personal context. AI can thus enable physicians to spend more time caring for patients, improving both the physician’s quality of work and the patient-physician relationship.
  • Wachter, R. M., & Cassel, C. K. (2020). Sharing health care data with digital giants: Overcoming obstacles and reaping benefits while protecting patients. JAMA, 323(6), 507-508.
    • In response to the steady stream of news updates around the entry and involvement of the major technology companies (e.g. Google, Apple, Amazon) into healthcare, this commentary proposes ideals for a collaborative path forward. It emphasizes transparency (especially around financial disclosures and conflicts of interest), direct consultation with patients/patient advocacy groups, and data security.
  • Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76-99.*
    • The ‘right to explanation’ in the EU’s General Data Protection Regulation (GDPR) is seen as a mechanism to enhance the accountability and transparency of AI-enabled decision-making. However, this paper shows that ambiguity and imprecise language in these regulations do not create well-defined rights and safeguards against automated decision-making. The paper proposes a number of legislative and policy steps to improve the transparency and accountability of automated decision-making.
  • van Wynsberghe, A. (2013). Designing robots for care: Care Centered Value-Sensitive Design. Science and Engineering Ethics, 19(2), 407–433. https://doi.org/10.1007/s11948-011-9343-6
    • This article discusses a value-sensitive design approach as applied to the creation of care robots created to fill a role analogous to that of a human nurse. After outlining foundational theoretical understandings of values, care ethics, and care practices, the author synthesizes a context-specific framework for considering these issues in robot design. She grounds this framework in the case study of already-implemented autonomous robots for lifting patients in the care home environment.
  • Yu, K. H., Beam, A. L., & Kohane, I. S. (2018). Artificial intelligence in healthcare. Nature Biomedical Engineering, 2(10), 719-731.*
    • With recent progress in digitized data acquisition, machine learning and computing infrastructure, AI applications are expanding into areas that were previously thought to be only the domain of human experts. This review article outlines recent breakthroughs in AI technologies and their biomedical applications, identifies the challenges for further progress in medical AI, and summarizes the economic, legal and social implications of AI in healthcare.

Global & Public Health

  • Davies, S. E. (2019). Artificial intelligence in global health. Ethics & International Affairs, 33(2), 181-192.
    • Focusing largely on the topic of infectious disease, this paper explores the potential and limitations of artificial intelligence in the context of global health. The author contends that while AI may be effective in guiding responses to outbreak events, it presents substantial ethical risks related to exacerbating healthcare quality disparities, diverting funding from otherwise-necessary structural improvements, and enabling human rights abuses under the guise of containment. 
  • Ienca, M., & Vayena, E. (2020). On the responsible use of digital data to tackle the COVID‑19 pandemic. Nature Medicine. https://doi.org/10.1038/s41591-020-0832-5
    • This article argues that as vast amounts of digital data are being used to combat the COVID-19 pandemic, the uptake and maintenance of responsible data-collection and data-processing standards at a global scale is also vital. As data from mobile phones and internet-connected devices is being fed into pandemic prediction and surveillance efforts, the authors emphasize not only the duty to protect the public’s right to life, but also their rights to privacy and confidentiality. If governments and data trustees fail to do so, public mistrust could jeopardize the efficacy of even the most well-intentioned measures to reduce disease burden.
  • Kostkova, P. (2018). Disease surveillance data sharing for public health: The next ethical frontiers. Life Sciences, Society and Policy, 14(1). https://doi.org/10.1186/s40504-018-0078-x
    • This article identifies three core ethical challenges with the use of digital data in public health: (1) data sharing across risk assessment tools, (2) the use of population-level data without compromising privacy, and (3) regulating how technology companies manipulate user data. The article places special emphasis on legislation and regulatory frameworks from the European Union.
  • Luxton, D. D. (2020). Ethical implications of conversational agents in global public health. Bulletin of the World Health Organization, 98(4), 285-287.
    • Conversational agents, colloquially known as “chatbots”, could help address disparities in access to mental health services or health services more generally in times of emergency (e.g. a natural disaster, pandemic, etc.). This article outlines core ethical issues of conversational agents to be cognizant of: risk of bias, risk of harm, privacy, and inequitable access. It concludes by alluding to the World Health Organization’s potential role in this space through the creation of a “cooperative international working group” to make recommendations on the design and deployment of conversational agents and other artificially intelligent tools.
  • Mittelstadt, B., Benzler, J., Engelmann, L., Prainsack, B., & Vayena, E. (2018). Is there a duty to participate in digital epidemiology? Life Sciences, Society and Policy, 14, 9. https://doi.org/10.1186/s40504-018-0074-1
    • This article explores the notion of a duty to participate in digital epidemiology, acknowledging that there are different risks to participants present than in traditional biomedical research. The authors outline eight justificatory conditions for participation in digital epidemiology that should be reflected upon “on a case-by-case basis with due consideration of local interests and risks”. Notably, the authors demonstrate how these justificatory conditions can be used in-practice in three case studies involving infectious disease surveillance, HIV screening, and detecting notifiable diseases in livestock.
  • Paul, A. K., & Schaefer, M. (2020). Safeguards for the use of artificial intelligence and machine learning in global health. Bulletin of the World Health Organization, 98(4), 282-284.
    • This article outlines challenges that low- and middle-income countries (LMICs) must overcome to develop and deploy artificial intelligence and machine learning innovations. It emphasizes that investments in these innovations by LMICs must be grounded in the realities of their health systems to enable success. The challenges outlined in this piece include: (1) improving the quality and use of data collected, (2) ensuring representation in these processes by marginalized groups, (3) establishing safeguards against bias, and (4) only investing in areas where health systems can operationalize innovations and deliver results.
  • Salathé, M. (2018). Digital epidemiology: What is it, and where is it going? Life Sciences, Society and Policy, 14(1). https://doi.org/10.1186/s40504-017-0065-7
    • This seminal article provides a definition for the field of “digital epidemiology” and an outlook of how the field is poised to evolve in the coming years. For those new to the area, this article can serve as a succinct introduction before a more focused exploration into digital epidemiology’s unique ethical considerations.
  • Samerski, S. (2018). Individuals on alert: Digital epidemiology and the individualization of surveillance. Life Sciences, Society and Policy, 14(1). https://doi.org/10.1186/s40504-018-0076-z
    • This article provides a critical analysis of how digital epidemiology and the broader “eHealth” movement fundamentally change the notion of health into a constant state of surveillance. It argues that as predictive analytics dominates the discourse around population and individual‑level health, we are at risk of entering a state of “modus irrealis” or helpless paralysis due to events that may or may not transpire. The views expressed in this article stand in sharp contrast to digital health proponents such as Dr. Eric Topol, who argue that these advances promote autonomy and self-efficacy.
  • Samuel, G., & Derrick, G. (2020). Defining ethical standards for the application of digital tools to population health research. Bulletin of the World Health Organization, 98(4), 239-244.
    • This article provides a process for ethics governance to be used at higher educational institutions during ex-post reviews of population health AI research. The governance model proposed consists of two levels: (1) the mandated entry of research products into an open-science repository and (2) a sector-specific validation of the research processes and algorithms. Through this ex-post review, the authors believe that the potential for AI-systems to cause harm will be reduced before they are disseminated.
  • Smith, M. J., Axler, R., Bean, S., Rudzicz, F., & Shaw, J. (2020). Four equity considerations for the use of artificial intelligence in public health. Bulletin of the World Health Organization, 98(4), 290-292.
    • Equity, the absence of avoidable or remediable differences among groups, is a foundational concept in global and public health. In this article, the authors outline four equity considerations for designing and deploying artificial intelligence in public health contexts: (1) the digital divide, (2) algorithmic bias and values, (3) plurality of values across systems, and (4) fair decision-making procedures.
  • Vayena, E., & Madoff, L. (2019). Navigating the ethics of big data in public health. In A. C. Mastroianni, J. P. Kahn, & N. P. Kass (Eds.), The Oxford Handbook of Public Health Ethics (pp. 354-367). Oxford University Press.
    • This article provides an overview of the key ethical challenges for the use of big data in public health. The authors discuss issues such as: (1) privacy, (2) data control and sharing, (3) nonstate actors, (4) harm mitigation, (5) fair distribution of benefits, (6) civic empowerment, and (7) accountability. The article will serve as a useful introduction for those new to the field of public health, as the authors ground their discussion in key areas of public health such as health promotion, surveillance, emergency preparedness and response, and comparative effectiveness research.
  • Wahl, B., Cossy-Gantner, A., Germann, S., & Schwalbe, N. R. (2018). Artificial intelligence (AI) and global health: How can AI contribute to health in resource-poor settings? BMJ Global Health, 3(4). http://dx.doi.org/10.1136/bmjgh-2018-000798
    • Much of the discourse around AI in medicine has focused on high-resource settings, which risks further propagating the digital divide between high- and low/middle-income countries. This review is one of the first to shift this discourse and do so in a solution-focused manner. The authors draw attention to several important enablers to AI in low-resource settings such as mobile health, open-source electronic medical record systems, and cloud computing.

Chapter 38. Ethics of AI in Law: Basic Questions (Harry Surden)⬆︎

  • Agrawal, A., Gans, J. S., & Goldfarb, A. (2019). Exploring the impact of artificial intelligence: Prediction versus judgment. Information Economics and Policy, 47, 1-6.
    • This article argues that because prediction allows riskier decisions to be taken, it has an impact on observed productivity, although it could also increase the variance of outcomes. The authors also demonstrate that better prediction may lead to different judgments depending on the context, and therefore not all human judgment will be a complement to AI. Nonetheless, they argue that humans will delegate some decisions to machines even when the decision would be superior with human input.
  • Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction machines: The simple economics of artificial intelligence. Harvard Business Review Press.*
    • In this book, the authors show how the predictive power of AI can be used in the face of uncertainty, to increase productivity, and to develop strategies. The authors employ an economic framework to explain the impacts of this adoption of AI.
  • Angwin, J., & Larson, J. (2016). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing*
    • In this article, the authors cite anecdotes and sentencing patterns to argue that algorithms tasked with predicting a particular person’s potential for future criminal activity are biased along racial lines.
  • Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671-732. https://doi.org/10.15779/Z38BG31*
    • This article examines concerns that flawed or biased data can interfere with the supposed ability of algorithmic methods to eliminate human biases from the decision-making process, through the lens of American anti-discrimination law—more particularly, through Title VII’s prohibition of discrimination in employment. The authors argue that finding a solution to this issue will require more than mitigation of prejudice and bias; it will require a wholesale reexamination of the meanings of “discrimination” and “fairness”.
  • Calo, R. (2018). Artificial intelligence policy: A primer and roadmap. University of Bologna Law Review, 3(2), 180-218.*
    • The essay helps policymakers, investors, scholars, and students understand the contemporary policy environment around artificial intelligence and the key challenges it presents, providing a basic roadmap of the issues that surround the implementation of AI in the current environment.
  • Citron, D. K. (2008). Technological due process. Washington University Law Review, 85, 1249-1313.*
    • This article aims to demonstrate how a carefully structured inquisitorial model of quality control can partially replace aspects of adversarial justice that automation renders ineffectual. It also provides a framework for technological due process to ensure that it preserves transparency, accountability, and accuracy of rules in automated decision-making systems.
  • Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017). Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 797-806).
    • This paper argues that the objective of algorithmic fairness should be reframed as maximizing public safety subject to formal fairness constraints designed to reduce racial disparities (a schematic formulation follows below).
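    • Illustrative note: the following is a schematic rendering, in our own notation rather than the paper’s, of the constrained-optimization framing, with decision rule d, features X, and group attribute G. The constraint shown (statistical parity) is just one example of the formal fairness constraints the paper analyzes:

        \begin{align*}
          \max_{d}\quad & \mathbb{E}\big[\,\text{public-safety benefit of } d\,\big] \\
          \text{subject to}\quad & \Pr\big(d(X)=1 \mid G=g\big) = \Pr\big(d(X)=1 \mid G=g'\big)
          \quad \text{for all groups } g, g'.
        \end{align*}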
  • Kaminski, M. E. (2019). The right to explanation, explained. Berkeley Technology Law Journal, 34(1), 189-218. https://doi.org/10.15779/Z38TD9N83H*
    • This article explores how the EU’s General Data Protection Regulation (GDPR) establishes algorithmic accountability: laws governing decision-making by complex algorithms or AI. It argues that the GDPR provisions on algorithmic accountability, in addition to including a right to explanation (a right to information about individual decisions made by algorithms), could be broader, stronger, and deeper than the preceding requirements of the Data Protection Directive.
  • Kleinberg, J. (2018). Inherent trade-offs in algorithmic fairness. In Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems (p. 40). https://doi.org/10.1145/3219617.3219634*
    • This article explores the way classifications made by algorithms create tension between competing notions of what it means for such a classification to be fair to different groups. The author then presents several of the key fairness conditions and the inherent trade-offs between these conditions.
  • Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807.
    • The article explores how algorithmic classification involves tension between competing notions of what it means for a probabilistic classification to be fair to different groups. After formalizing three fairness conditions that lie at the heart of these debates (paraphrased below), the authors show that, except in highly constrained special cases, no method can satisfy all three conditions simultaneously. The article thus argues that key notions of fairness are incompatible with each other and seeks to provide a framework for thinking about the trade-offs between them.
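    • Illustrative note: the following is a compact paraphrase, in our own notation rather than the paper’s, of the three fairness conditions, where s(X) is the risk score, Y the true binary outcome, and G the group attribute. The paper shows, roughly, that all three can hold together only under perfect prediction or equal base rates:

        % 1. Calibration within groups: among members of group g assigned score s,
        %    a fraction s actually have Y = 1.
        \Pr\big(Y=1 \mid s(X)=s,\ G=g\big) = s \quad \text{for all } s, g
        % 2. Balance for the positive class: equal average scores across groups
        %    among true positives.
        \mathbb{E}\big[s(X) \mid Y=1,\ G=g\big] = \mathbb{E}\big[s(X) \mid Y=1,\ G=g'\big]
        % 3. Balance for the negative class: likewise among true negatives.
        \mathbb{E}\big[s(X) \mid Y=0,\ G=g\big] = \mathbb{E}\big[s(X) \mid Y=0,\ G=g'\big]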
  • Kroll, J. A., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2016). Accountable algorithms. University of Pennsylvania Law Review, 165(3), 633-706.*
    • This article argues that transparency will not solve the problems of automated decision systems such as returning potentially incorrect, unjustified, or unfair results. Disclosure of source code is often neither necessary (because of alternative techniques from computer science) nor sufficient (because of the issues analyzing code) to demonstrate the fairness of a process.
  • Martin, K. (2019). Ethical implications and accountability of algorithms. Journal of Business Ethics, 160(4), 835-850.
    • This article argues that developers retain responsibility for their algorithms later in use, and that firms should be responsible not only for the value-ladenness of an algorithm but also for designing who does what within the algorithmic decision. Thus, firms developing algorithms are accountable for designing how large a role individuals will be permitted to take in the subsequent algorithmic decision.
  • Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38.*
    • This paper argues that researchers and practitioners who seek to make their algorithms more understandable should utilize research done in the fields of philosophy, psychology, and cognitive science to understand how people define, generate, select, evaluate, and present explanations, and account for how people employ certain cognitive biases and social expectations towards the explanation process.
  • Mulligan, D., & Bamberger, K. (2018). Saving governance-by-design. California Law Review, 106(3), 697-784.*
    • This article argues that “governance-by-design”—the purposeful effort to use technology to embed values—is quickly becoming a significant influence on policy making, and that the existing regulatory system is fundamentally ill-equipped to prevent technology-based governance from subverting public governance.
  • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
    • This book analyzes the results generated by Google search algorithms and argues that search algorithms reflect racist biases because they embed the biases and values of the people who created them.
  • Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.*
    • In this book, Pasquale explores the power of ‘hidden algorithms’. He argues that such algorithms permit self-serving and reckless behavior and shows how powerful interests abuse their secrecy for profit. Thus, transparency must be demanded of firms, such that they accept as much accountability as they impose on others.
  • Richards, N. M. (2012). The dangers of surveillance. Harvard Law Review, 126(7), 1934-1965.*
    • This article aims to explain and highlight the harms of government surveillance. The author draws on multiple disciplines, such as law, history, and literature, as well as the work of scholars in the emerging interdisciplinary field of “surveillance studies,” to define what those harms are and why they matter.
  • Selbst, A. D., & Barocas, S. (2018). The intuitive appeal of explainable machines. Fordham Law Review, 87(3), 1085-1139.*
    • In this article, the authors aim to show what makes decisions made by algorithms seem inexplicable by examining what sets machine learning apart from other ways of developing rules for decision-making and the problems these properties pose for explanation.
  • Speicher, T., Heidari, H., Grgic-Hlaca, N., Gummadi, K. P., Singla, A., Weller, A., & Zafar, M. B. (2018, July). A unified approach to quantifying algorithmic unfairness: Measuring individual & group unfairness via inequality indices. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 2239-2248).
    • This paper explores how to determine what makes one algorithm more unfair than another. The authors use existing inequality indices from economics to measure how unequally the outcomes of an algorithm benefit different individuals or groups in a population.
  • Surden, H. (2019). Artificial intelligence and law: An overview. Georgia State University Law Review, 35(4), 1305-1337.*
    • This paper provides a concrete survey of the current applications and uses of AI within the context of the law, without straying into discussions about AI and law that are futurist in nature. It aims to highlight a realistic view that is rooted in the actual capabilities of AI technology as it currently stands.
  • Susskind, R. E., & Susskind, D. (2015). The future of the professions: How technology will transform the work of human experts. Oxford University Press.
    • The authors argue that our current professions are antiquated, opaque, and no longer affordable, and that the expertise of their best practitioners is enjoyed only by a few. On this basis they present an exploration of the ethical issues that arise when machines can out-perform human beings at most tasks, examining how technological change will affect prospects for employment, who should own and control online expertise, and what tasks should be reserved exclusively for people.
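
Because the impossibility result of Kleinberg, Mullainathan, and Raghavan (2016) is cited repeatedly in this and the following chapter, a brief formal sketch may be useful. The notation below is ours, not the authors': Y is the true binary outcome, G the group, and S the risk score.

```latex
% Sketch of the three fairness conditions formalized in Kleinberg,
% Mullainathan, & Raghavan (2016); the notation is ours.
% Y = true outcome, G = group, S = risk score.

% 1. Calibration within groups: of those in group g given score s,
%    a fraction s actually have Y = 1.
\[ \Pr[\,Y = 1 \mid S = s,\ G = g\,] = s \quad \text{for all } s,\, g \]

% 2. Balance for the positive class: true positives receive the same
%    average score in every group.
\[ \mathbb{E}[\,S \mid Y = 1,\ G = g\,] = \mathbb{E}[\,S \mid Y = 1\,] \quad \text{for all } g \]

% 3. Balance for the negative class: true negatives receive the same
%    average score in every group.
\[ \mathbb{E}[\,S \mid Y = 0,\ G = g\,] = \mathbb{E}[\,S \mid Y = 0\,] \quad \text{for all } g \]
```

The paper's central result is that, outside two degenerate cases (perfect prediction, or identical base rates across groups), no scoring method can satisfy all three conditions at once.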

Chapter 39. Beyond Bias: “Ethical AI” in Criminal Law (Chelsea Barabas)⬆︎

  • Benjamin, R. (2016). Catching our breath: Critical race STS and the carceral imagination. Engaging Science, Technology, and Society, 2, 145-156.*
    • This article uses science and technology studies along with critical race theory to examine the proliferation and intensification of carceral approaches to governing human life. The author argues in favor of an expanded understanding of “the carceral” that extends beyond the domain of policing to include forms of containment that make innovation possible in the contexts of health and medicine, education and employment, border policies, and virtual realities.
  • Brown, M., & Schept, J. (2017). New abolition, criminology and a critical carceral studies. Punishment & Society, 19(4), 440-462.*
    • This article argues that criminology has been slow to open up a conversation about decarceration and abolition. In this article, the authors advocate for and discuss the contours of critical carceral studies, a growing interdisciplinary movement for engaged scholarly and activist production against the carceral state.
  • Bosworth, M. (2019). Affect and authority in immigration detention. Punishment & Society, 21(5), 542-559.
    • This article considers the relationship between authority and affect by drawing on a long-term research project across a number of British Immigration Removal Centers (IRCs). This article argues that staff authority rests on an abrogation of their self rather than engagement with the other. This is in contrast to much criminological literature on the prison, which advances a liberal political account in which power is constantly negotiated and based on mutual recognition.
  • Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017). Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 797-806).*
    • The article reformulates algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. The authors show that for several past definitions of fairness, the resulting optimal algorithms require detaining defendants above race-specific risk thresholds (a minimal sketch of this thresholding appears after this chapter's list).
  • Elliott, D. S. (1995). Lies, damn lies, and arrest statistics. Center for the Study and Prevention of Violence.*
    • This paper argues that most research on the parameters of a criminal career that uses arrest data to estimate the underlying behavioral dynamics of criminal activity is flawed. Generalizing findings from analyses of arrest records to the underlying patterns and dynamics of criminal behavior, and to the characteristics of offenders in the general population, is likely to lead to incorrect conclusions and ineffective policies and practices, and ultimately to undermine efforts to understand, prevent, and control criminal behavior.
  • Ferguson, A. G. (2016). Policing predictive policing. Washington University Law Review, 94(5), 1109-1189.*
    • This article examines predictive policing’s evolution and provides a practical and theoretical critique of this new policing strategy, which promises to prevent crime before it happens. Building on insights from scholars who have addressed the rise of risk assessment throughout the criminal justice system, the article offers an analytical framework for policing new predictive technologies.
  • Harcourt, B. E. (2008). Against prediction: Profiling, policing, and punishing in an actuarial age. University of Chicago Press.*
    • In this book, the author argues that prediction tools can increase the overall amount of crime in society, depending on the relative responsiveness of the profiled populations to heightened security. Against prediction, the author proposes a turn to randomization in punishment and policing.
  • Huq, A. Z. (2018). Racial equity in algorithmic criminal justice. Duke Law Journal, 68(6), 1043-1134.
    • This article considers the interaction of algorithmic tools for predicting violence and criminality that are increasingly deployed in policing, bail, and sentencing, with the enduring racial dimensions of the criminal justice system. The author then argues that a criminal justice algorithm should be evaluated in terms of its long-term, dynamic effects on racial stratification.
  • Jefferson, B. J. (2017). Digitize and punish: Computerized crime mapping and racialized carceral power in Chicago. Environment and Planning D: Society and Space, 35(5), 775-796.
    • This article puts critical geographic information systems theory into conversation with critical ethnic studies to argue that CLEARmap, the Chicago police’s digital mapping application, does not passively “read” urban space, but provides ostensibly scientific ways of reading and policing negatively racialized fractions of surplus labor in ways that reproduce, and in some instances extend, the reach of carceral power.
  • Kleinberg, J., Ludwig, J., Mullainathan, S., & Rambachan, A. (2018). Algorithmic fairness. AEA Papers and Proceedings, 108, 22-27.*
    • This paper proposes that the concerns about algorithms discriminating against certain groups, which have led to numerous efforts to ‘blind’ the algorithm to race, are misleading, and that such blinding may do harm. The authors argue that equity preferences can change how the estimated prediction function is used (e.g., different thresholds for different groups), but that the function itself should not change.
  • Kleinberg, J., Ludwig, J., Mullainathan, S., & Sunstein, C. R. (2018). Discrimination in the age of algorithms. Journal of Legal Analysis. https://doi.org/10.3386/w25548
    • This paper argues that the use of algorithms will make it possible to more easily examine and interrogate the entire legal process, thereby making it far easier to identify whether anyone has actually discriminated, an action forbidden by law.
  • Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores. https://arxiv.org/abs/1609.05807
    • The article explores how algorithmic classification involves tension between competing notions of what it means for a probabilistic classification to be fair to different groups. After formalizing three fairness conditions that lie at the heart of these debates, the authors show that, except in highly constrained special cases, no method can satisfy all three conditions simultaneously. Thus, the article argues that key notions of fairness are incompatible with each other, and seeks to provide a framework for thinking about the trade-offs between them.
  • Lyon, D. (2014). Surveillance, Snowden, and big data: Capacities, consequences, critique. Big Data & Society, 1(2). https://doi.org/10.1177%2F2053951714541861
    • This article explores the extent to which the Snowden disclosures indicate that Big Data practices are becoming increasingly important to surveillance and, if Big Data is gaining ground in this area, how this signals changes in the politics and practices of surveillance. The author analyzes the capacities of Big Data and their social-political consequences and then comments on the kinds of critique that may be appropriate for assessing and responding to these developments.
  • Mayson, S. G. (2018). Bias in, bias out. Yale Law Journal, 128(8), 2218-2300.
    • This paper argues that strategies currently put in place to mitigate algorithmic discrimination are at best superficial and at worst counterproductive, because the source of racial inequality in risk assessment lies neither in the input data, nor in a particular algorithm, nor in algorithmic methodology per se. The problem is the nature of prediction itself, since all prediction looks to the past to make guesses about future events. In a racially stratified world, any method of prediction will project the inequalities of the past into the future.
  • Muhammad, K. G. (2008). The condemnation of blackness. Harvard University Press.*
    • This book reveals the influence that ideas such as deeply embedded notions of black people as a dangerous race of criminals (framed in explicit contrast to working-class whites and European immigrants), the idea of black criminality, and African Americans’ own ideas about race and crime have had on urban development and social policies.
  • Pleiss, G., Raghavan, M., Wu, F., Kleinberg, J., & Weinberger, K. Q. (2017). On fairness and calibration. Advances in Neural Information Processing Systems, 30, 5680-5689.
    • This article investigates the tension between minimizing error disparity across different population groups and maintaining calibrated probability estimates. The article argues that calibration is compatible only with a single error constraint (i.e., equal false-negative rates across groups), and shows that any algorithm satisfying this relaxation is no better than randomizing a percentage of predictions of an existing classifier.
  • Rudolph, S., Sriprakash, A., & Gerrard, J. (2018). Knowledge and racial violence: The shine and shadow of ‘powerful knowledge’. Ethics and Education, 13(1), 22-38.
    • This paper argues that ‘powerful knowledge’ seems to focus on the progressive impulse of modernity while overlooking the ruination of colonial racism. Powerful knowledge is disciplinary knowledge produced and refined through a process of ‘specialization’ that usually occurs in universities. Thus, the authors argue curriculum knowledge must more fully address the hegemonic relations of disciplinary specialization and its historical connections to colonial-modernity.
  • Selbst, A. D. (2017). Disparate impact in big data policing. Georgia Law Review, 52(1), 109-195.
    • This paper argues that the degree to which predictive policing systems produce discriminatory results is unclear to the public and to the police themselves, largely because there is no incentive for a department focused solely on “crime control” to spend resources asking the question. Thus, the author proposes a new regulatory approach centered on “algorithmic impact statements” to mitigate the issues created by predictive systems.
  • Sriprakash, A., Tikly, L., & Walker, S. (2019). The erasures of racism in education and international development: Re-reading the ‘global learning crisis’. Compare: A Journal of Comparative and International Education. https://doi.org/10.1080/03057925.2018.1559040
    • This paper argues the field of education and international development continues to fail to substantively engage with the production and effects of racial domination across its domains of research, policy and practice. The authors present a re-reading of the ‘global learning crisis’ to demonstrate how the framing of the ‘crisis’ and the responses it engenders and legitimizes operate as a ‘racial project’.
  • Stevenson, M. (2018). Assessing risk assessment in action. Minnesota Law Review, 103(1), 303-384.*
    • This article documents the impacts of risk assessment in practice, and argues that risk assessment had no effect on racial disparities in pretrial detention once differing regional trends were accounted for. This is shown using data from more than one million criminal cases, highlighting that a 2011 law making risk assessment a mandatory part of the bail decision led to a significant change in bail setting practice, but only a small increase in pretrial release.
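
Several entries above (Corbett-Davies et al. 2017; Kleinberg et al. 2018) analyze fairness constraints as group-specific thresholds applied to risk scores. The following minimal Python sketch illustrates that framing; the data, group names, and parameters are hypothetical, and this is our illustration rather than any of these authors' code.

```python
"""Illustrative sketch of the constrained-optimization framing of fairness:
decisions are made by thresholding risk scores, and a constraint such as
equal detention rates across groups induces group-specific thresholds.
All data and parameters below are hypothetical."""
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical risk-score distributions for two groups.
scores = {
    "group_a": rng.beta(2, 5, size=10_000),  # lower-risk distribution
    "group_b": rng.beta(3, 4, size=10_000),  # higher-risk distribution
}

def uniform_threshold_rates(scores, threshold=0.5):
    """Unconstrained policy: one threshold for everyone.
    Returns each group's resulting detention rate."""
    return {g: float(np.mean(s >= threshold)) for g, s in scores.items()}

def equal_rate_thresholds(scores, target_rate=0.3):
    """Constrained policy: equalize detention rates across groups,
    which forces a different (group-specific) threshold per group."""
    return {g: float(np.quantile(s, 1 - target_rate)) for g, s in scores.items()}

print("Detention rates under one uniform threshold:",
      uniform_threshold_rates(scores))
print("Group-specific thresholds enforcing equal rates:",
      equal_rate_thresholds(scores))
```

Under a single uniform threshold the two groups are detained at different rates because their score distributions differ; enforcing equal rates moves each group's threshold, which is the trade-off the Corbett-Davies et al. entry describes.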

Chapter 40. “Fair Notice” in the Age of AI (Kiel Brennan-Marquez)⬆︎

  • Brennan-Marquez, K. (2017). Plausible cause: Explanatory standards in the age of powerful machines. Vanderbilt Law Review, 70, 1249.*
    • This article argues that statistical accuracy, though important, is not the crux of explanatory standards; the value of human judges lies in their practiced wisdom rather than their analytic power. The author replies to a common argument against replacing judges, namely that intelligent machines are not (yet) intelligent enough to take up the mantle, by highlighting that powerful intelligent algorithms already exist and that, in any case, judging is not about intelligence but about prudence.
  • Brennan-Marquez, K. (2019). Extremely broad laws. Arizona Law Review, 61, 641.*
    • This article argues that extremely broad laws offend due process because they afford state officials practically boundless justification to interfere with private life. Thus, the article explores how courts might tackle the breadth problem in practice—and ultimately suggests that judges should be empowered to hold statutes “void-for-breadth.”
  • Citron, D. K. (2007). Technological due process. Washington University Law Review, 85, 1249.*
    • This article aims to demonstrate how a carefully structured inquisitorial model of quality control can partially replace aspects of adversarial justice that automation renders ineffectual. It also provides a framework for technological due process to ensure that it preserves transparency, accountability, and accuracy of rules in automated decision-making systems.
  • Citron, D. K., & Pasquale, F. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89, 1.*
    • This article argues that though automated scoring may be pervasive and consequential, it is also opaque and lacking oversight. Thus, automated scoring must be implemented alongside protections, such as testing scoring systems to ensure their fairness and accuracy, otherwise systems could launder biased and arbitrary data into powerfully stigmatizing scores.
  • Cohen, J. E. (2012). Configuring the networked self: Law, code, and the play of everyday practice. Yale University Press.
    • This book argues that legal and technical rules governing flows of information are out of balance, as flows of cultural and technical information are overly restricted, while flows of personal information often are not restricted at all.
  • Crawford, K., & Schultz, J. (2014). Big data and due process: Toward a framework to redress predictive privacy harms. Boston College Law Review, 55, 93.*
    • This article highlights how Big Data has vastly increased the scope of personally identifiable information and how poor execution of Big Data methodology may create additional harms by rendering inaccurate profiles that nonetheless impact an individual’s life and livelihood. Thus, the article argues for a mitigation of predictive privacy harms through a right to procedural data due process.
  • Delacroix, S. (2018). Computer systems fit for the legal profession? Legal Ethics, 21(2), 119-135.
    • This article argues against the view that wholesale automation is both legitimate and desirable provided it improves the quality and accessibility of legal services, claiming that such automation comes at the cost of moral equality. In response, the author proposes designing systems that better enable legal professionals to live up to their specific responsibilities by ensuring that the systems are profession-specific, in contrast to generalized automation.
  • Ferguson, A. G. (2019). The rise of big data policing: Surveillance, race, and the future of law enforcement. NYU Press.
    • This book discusses the consequences of big data and algorithm-driven policing and its impact on law enforcement. It then explores how technology will change law enforcement and its potential threat to the security, privacy, and constitutional rights of citizens.
  • Froomkin, A. M., Kerr, I., & Pineau, J. (2019). When AIs outperform doctors: Confronting the challenges of a tort-induced over-reliance on machine learning. Arizona Law Review, 61, 33.
    • This article argues that, while a combination of human and machine may currently be more effective than either alone in medical diagnosis, machines will in time improve and become more effective on their own, creating overwhelming legal and ethical pressure to delegate the diagnostic process to the machine. Existing medical malpractice law will then require superior ML-generated medical diagnostics as the standard of care in clinical settings.
  • Grimmelmann, J., & Westreich, D. (2017). Incomprehensible discrimination. California Law Review Online, 7.*
    • This article explores and replies to Solon Barocas and Andrew Selbst’s argument in Big Data’s Disparate Impact concerning the use of algorithmically derived models that are both predictive of a legitimate goal and have a disparate impact on some individuals. The authors agree that these models have a potential impact on antidiscrimination law, but argue for a more optimistic stance: that the law already has the doctrinal tools it needs to deal appropriately with cases of this sort.
  • Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14-29.
    • This paper explores and expands upon current thinking about algorithms and considers how best to research them in practice. Concepts such as the importance of algorithms in shaping social and economic life, how they are embedded in wider socio-technical assemblages, and challenges that arise when researching algorithms are explored.
  • Manes, J. (2017). Secret law. Georgetown Law Journal, 106, 803.*
    • This article aims to unpack the underlying normative principles that both militate against secret law and motivate its widespread use. By investigating the tradeoff between democratic accountability, individual liberty, separation of powers, and pragmatic national security purposes created by secret law, this article proposes a systematic rubric for evaluating particular instances of secret law.
  • Manes, J. (2019). Secrecy & evasion in police surveillance technology. Berkeley Technology Law Journal, 34, 503.
    • This article examines the anti-circumvention argument for secrecy which claims that disclosure of police technologies would allow criminals to evade the law. This article then argues that this argument permits far more secrecy than it can justify, and finally proposes specific reforms to circumscribe laws that currently authorize excessive secrecy in the name of preventing evasion.
  • Markovic, M. (2019). Rise of the robot lawyers. Arizona Law Review, 61, 325.
    • This article argues against the claim that lawyers will be displaced by artificial intelligence, on both empirical and normative grounds. The argument is developed on the following grounds: first, artificial intelligence cannot handle the abstract nature of legal tasks; second, the legal profession has grown with and benefited from technology, rather than being challenged by it; finally, even if large-scale automation of legal work were possible, core societal values would counsel against it.
  • Metcalf, J., & Crawford, K. (2016). Where are human subjects in big data research? The emerging ethics divide. Big Data & Society, 3(1), 2053951716650211.
    • Against the background of a proposal for major revisions to the Common Rule—the primary regulation governing human-subjects research in the USA—being under consideration for the first time in decades, this article argues that data science should be understood as continuous with social sciences in regard to the stringency of the ethical regulations that govern it since the potential harms of data science research are unpredictable.
  • Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.*
    • In this book, Pasquale explores the power of ‘hidden algorithms’. He argues that such algorithms permit self-serving and reckless behavior and shows how powerful interests abuse their secrecy for profit. Thus, transparency must be demanded of firms, such that they accept as much accountability as they impose on others.
  • Pasquale, F. (2019). A rule of persons, not machines: The limits of legal automation. George Washington Law Review, 87, 1.*
    • This article argues that legal automation cannot replace human legal practice because it can elude or exclude important human values, necessary improvisations, and irreducibly deliberative governance; in particular, software cannot replicate narratively intelligible communication from persons and for persons. Thus, in order to preserve accountability and a humane legal order, persons, not machines, are required in the legal profession.
  • Re, R. M., & Solow-Niederman, A. (2019). Developing artificially intelligent justice. Stanford Technology Law Review, Forthcoming.
    • This article argues that AI adjudication will profoundly affect the adjudicatory values held by legal actors as well as the public at large, particularly in areas where “equitable justice,” or discretionary moral judgment is most significantly exercised. In contrast, AI adjudication would promote “codified justice” which promotes standardization above discretion.
  • Selbst, A. D., & Barocas, S. (2018). The intuitive appeal of explainable machines. Fordham Law Review, 87, 1085.*
    • In this article, the authors aim to show what makes decisions made by algorithms seem inexplicable by examining what sets machine learning apart from other ways of developing rules for decision-making and the problems these properties pose for explanation.
  • Solove, D. J. (2011). Nothing to hide: The false tradeoff between privacy and security. Yale University Press.
    • In this book, Solove argues against the claim that society has a duty to sacrifice privacy for security by exposing the fallacies and flaws of these claims, then arguing that protecting privacy isn’t fatal to security measures; it merely involves adequate oversight and regulation.

Chapter 41. AI and Migration Management (Petra Molnar)⬆︎

  • Austin, L. (2018, July 9). We must not treat data like a natural resource. The Globe and Mail. https://www.theglobeandmail.com/opinion/article-we-must-not-treat-data-like-a-natural-resource/
    • In this opinion piece, Austin argues that framing data transformation as a balance between economic innovation and privacy provides a narrow framework for understanding what is at stake. Not only are these values not necessarily in tension, but the focus on privacy and ownership language fails to capture implications for the public sphere, human rights, and social interests. Austin proposes a better framing – one that goes beyond data as an extractable resource and recognizes data as a new informational dimension to individual and community life.
  • Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671-732.*
    • This essay examines data bias concerns through the lens of American discrimination law. In light of algorithms frequently inheriting the prejudices of prior decision makers, and of the difficulties of identifying the source of bias or explaining the bias to a court, the authors look to the disparate impact doctrine in workplace discrimination law to identify potential remedies for the victims of data mining. The authors underscore that finding a solution to Big Data’s disparate impact requires re-examining the meanings of “discrimination” and “fairness,” in addition to efforts to eliminate prejudice and bias.
  • Benvenisti, E. (2018). Upholding democracy amid the challenges of new technology: What role for the law of global governance? European Journal of International Law, 29(1), 9-82.
    • This article describes how law has evolved with the growing need for accountability of global governance bodies and analyzes why legal tools are ill-equipped to address new modalities of governance based on new information and communication technologies and automated decision making using raw data. Benvenisti argues that the law of global governance extends beyond ensuring accountability of global governance bodies and serves to protect human dignity and the viability of the democratic state.
  • Chambers, S. N., Boyce, G. A., Launius, S., & Dinsmore, A. (2019). Mortality, surveillance and the tertiary “funnel effect” on the US-Mexico border: A geospatial modeling of the geography of deterrence. Journal of Borderlands Studies. https://doi.org/10.1080/08865655.2019.1570861
    • This study applies a geospatial analysis of landscape and human variables within a highly trafficked corridor of the Arizona/Sonora border to analyze the impacts of the U.S. Border Patrol’s SBInet surveillance system on migrant routes. The study provides geographic analysis of the connection between death locations and the deployment of border surveillance infrastructure, and the authors situate these findings in the context of ongoing U.S. efforts to expand and concentrate border surveillance and enforcement infrastructure for deterrence purposes.
  • Carens, J. (2013). The ethics of immigration. Oxford University Press.
    • This book explores how contemporary immigration issues present practical problems for western democracies while challenging the ways in which concepts of citizenship and belonging, rights and responsibilities, and freedom and equality are understood. The author uses the moral framework of liberal democracies to propose that a commitment to open borders is necessary to uphold values of freedom and equality. 
  • Crisp, J. (2018). Beware the notion that better data lead to better outcomes for refugees and migrants. Chatham House.
    • This article explores the implications of the high level of interest in refugee and migration data collection, analysis, and dissemination among national governments and international organizations and challenges the notion that more data leads to better government migration management policies. The author stresses that while the new emphasis on data may produce insights into migrant needs and movement patterns, socio-economic conditions, and employability, important challenges arise in the form of confidentiality and security issues and abusive data use. The author warns against the adoption of technocratic and apolitical approaches to humanitarian aid in which data collection supersedes the imperative to interact with and improve the lives of refugees and migrants. 
  • Csernatoni, R. (2018). Constructing the EU’s high-tech borders: FRONTEX and dual-use drones for border management. European Security, 27(2), 175-200.
    • This article examines the EU’s strategy to develop technologies such as aerial surveillance drones for border management and security. The author contends that the normalization of drone use at the border-zone embodies a host of ethical and legal implications and falls within a broader European securitized approach to migration. The article explores how this “dronisation” is presented as a technical panacea for the consequences of failed irregular migration management policies and creates further opportunities for exploitation of vulnerable migrants.
  • Farraj, A. (2010). Refugees and the biometric future: The impact of biometrics on refugees and asylum seekers. Columbia Human Rights Law Review, 42(3), 891-941.
    • This paper explores the impacts of biometric technologies on refugees and asylum seekers. The paper surveys the various ways in which biometrics are used and explores privacy implications, comparing the standards and protections laid out by U.S. and EU law. The author underscores the importance of utilizing biometrics to protect refugees and asylum seekers, arguing that their well-being is furthered by the collection, storage, and utilization of their biometric information.
  • Helbing, D., Frey, B. S., Gigerenzer, G., Hafen, E., Hagner, M., Hofstetter, Y., van den Hoven, J., Zicari, R.V., & Zwitter, A. (2019). Will democracy survive big data and artificial intelligence? In D. Helbing (Ed.), Towards digital enlightenment (pp. 73-98). Springer.*
    • This article examines how the “data revolution” and widespread automation of data analysis threaten to undermine core democratic values if basic rights of citizens are not protected. The authors argue that Big Data, automation, and nudging should not be used to incapacitate citizens or control behaviors, and propose various fundamental principles derived from democratic societies that should guide the use of Big Data and AI.
  • Johns, F. (2017). Data, detection, and the redistribution of the sensible in international law. American Journal of International Law, 111(1), 57-103.
    • This article explores how technology changes and mediates the jurisdiction of international law and international institutions such as the UNHCR. The author surveys changes in international legal and institutional work to highlight the distributive implications of automation in shaping allocations of power, competence, and capital. The author claims that technologically advanced modes of data gathering and analysis and the introduction of machine learning results in new configurations of inequality and international institutional work that fall outside the scope of existing international legal thought, doctrine, and practice.
  • Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31(4), 611-627.
    • This article provides an overview of available technical solutions to enhance fairness, accountability, and transparency in algorithmic decision-making. The authors underscore the crucial and urgent need to engage multi-disciplinary teams of researchers, policymakers, practitioners, and citizens to co-develop and evaluate algorithmic decision-making processes designed to maximize fairness and transparency in support of democracy and development.
  • Liu, H. Y., & Zawieska, K. (2017). A new human rights regime to address robotics and artificial intelligence. In 2017 Proceedings of the 20th International Legal Informatics Symposium (pp. 179-184). Oesterreichische Computer Gesellschaft.*
    • This paper examines how a declining human ability to control technology suggests a declining power differential and possibility of inverse power relations between humans and AI. The authors explore how this potential inversion of power impacts the protection of fundamental human rights, and propose that the opacity of potentially harmful AI systems risks eroding rights-based responsibility and accountability mechanisms.
  • Magnet, S. (2011). When biometrics fail: Gender, race, and the technology of identity. Duke University Press.
    • This book analyzes the state use of biometrics to control and classify vulnerable marginalized populations and track individuals beyond national territorial boundaries. The author explores cases of failed biometrics to demonstrate how these technologies work differently, and fail more often, on women, racialized populations, and people with disabilities, and stresses that these failures result from biometric technologies falsely assuming that human bodies are universal and unchanging over time.
  • McGregor, L., Murray, D., & Ng, V. (2019). International human rights as a framework for algorithmic accountability. International and Comparative Law Quarterly, 68(2), 309-343.
    • This article explores the potential human rights harms caused by the use of algorithms in decision-making. The authors analyze how international human rights law provides a framework for shared understanding and a means of assessing harm, one that accommodates multiple actors and forms of responsibility and applies across the full algorithmic life cycle, from conception to deployment.
  • Molnar, P., & Gill, L. (2018). Bots at the gate: A human rights analysis of automated decision-making in Canada’s immigration and refugee system. University of Toronto’s International Human Rights Program (IHRP) at the Faculty of Law and the Citizen Lab at the Munk School of Global Affairs and Public Policy, with support from the IT3 Lab at the University of Toronto. https://it3.utoronto.ca/wp-content/uploads/2018/10/20180926-IHRP-Automated-Systems-Report-Web.pdf
    • This report highlights the human rights implications of using algorithmic and automated technologies for administrative decision-making in Canada’s immigration and refugee system. Molnar and Gill survey current and proposed uses of automated decision-making, illustrate how decisions may be affected by new technologies, and develop a human rights analysis from domestic and international perspectives. The report outlines several policy challenges related to the adoption of these technologies and presents a series of policy recommendations for the federal government. 
  • Maas, M. M. (2019). International law does not compute: Artificial intelligence and the development, displacement or destruction of the global legal order. Melbourne Journal of International Law, 20, 29-57.*
    • This paper draws upon techno-historical scholarship to assess the relationship between new technologies and international law. The paper demonstrates how new technologies change legal situations both directly, by creating new entities and enabling new behavior, and indirectly, by shifting incentives or values. The author proposes that the technically and politically disruptive features of AI threaten key areas of international law, suggesting a risk of obsolescence for distinct international legal regimes.
  • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
    • Noble’s book challenges the notion that search engines like Google are value-neutral. Noble reveals how the combination of private interests in promoting certain sites and the monopoly status of a handful of Internet search engines leads to biased search algorithms embedded with “data discrimination” in ways that privilege whiteness while marginalizing people of color.
  • Raymond, N., Al Achkar, Z., Verhulst, S., Berens, J., Barajas, L., & Easton, M. (2016). Building data responsibility into humanitarian action. OCHA Policy and Studies Series. https://ssrn.com/abstract=3141479
    • This paper explores the risks and challenges for collecting, analyzing, aggregating, sharing, and using data for humanitarian projects including handling sensitive data and bias and discrimination. By drawing on case studies of data-driven initiatives across the globe, the authors identify the critical issues humanitarians face as they use data in operations, and propose an initial framework for data responsibility.
  • Staton, B. (2016). Eye spy: Biometric aid system trials in Jordan. The New Humanitarian. https://www.thenewhumanitarian.org/analysis/2016/05/18/eye-spy-biometric-aid-system-trials-jordan
    • Staton’s article explores the use of biometric iris scanners in Syrian refugee camps in Azraq, Jordan. Through interviews with the technology’s developers, users, and advocacy groups, Staton outlines the proposed practical and security benefits of the technology as well as refugees’ concerns surrounding privacy, possibility of abuses and data error, and effects on health and wellbeing. Staton’s article acknowledges the rapidly growing adoption of technology in humanitarian aid and places biometric iris scanning technology in broader debates surrounding responsible data use and protecting vulnerable populations from potential harm.
  • Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.
    • Zuboff’s book explains and details how the phenomenon of surveillance capitalism threatens to modify human behavior for profit by producing new forms of economic oppression where wealth and power are accumulated in behavioral futures markets and behavioral predictions are bought and sold. Zuboff stresses that the ubiquity of digital architecture creates a controlled hive of total connection that promises certainty and economic gain at the expense of democracy and freedom.

Chapter 42. Robot Teaching, Pedagogy, and Policy (Elana Zeide)⬆︎

  • Bradbury, A., & Roberts-Holmes, G. (2017). The datafication of primary and early years education: Playing with numbers. Routledge.
    • This book analyzes the trend of increased use of data in early childhood education. Using case studies and sociological and post-foundational frameworks, Bradbury and Roberts-Holmes argue that this datafication creates new teacher and student subjectivities.
  • Bradbury, A. (2019). Datafied at four: The role of data in the ‘schoolification’ of early childhood education in England. Learning, Media and Technology, 44(1), 7-21. https://doi.org/10.1080/17439884.2018.1511577
    • This article looks at the impact of datafication on children from birth to age five in England, arguing that nurseries and schools are subjected to demands from data, creating new subjectivities.
  • Edwards, R. (2015). Software and the hidden curriculum in digital education. Pedagogy, Culture & Society, 23(2), 265-279. https://doi.org/10.1080/14681366.2014.977809*
    • This article challenges the positioning of emerging technologies as mere tools to enhance teaching and learning, by highlighting the ways in which these technologies shape curriculum and limit modes of interaction between teachers and students.
  • Fenwick, T., & Edwards, R. (2016). Exploring the impact of digital technologies on professional responsibilities and education. European Educational Research Journal, 15(1), 117-131. https://doi.org/10.1177%2F1474904115608387
    • This article examines how new digital technologies are reshaping the relationship between professionals and their clients, users, and students, and argues that new forms of accountability and responsibility have emerged as a result.
  • Gulson, K. N., & Sellar, S. (2019). Emerging data infrastructures and the new topologies of education policy. Environment and Planning D: Society and Space, 37(2), 350-366. https://doi.org/10.1177%2F0263775818813144
    • This article argues that datafication in educational policy is creating new topologies, changing the relations of power in educational environments.
  • Hartong, S., & Förschler, A. (2019). Opening the black box of data-based school monitoring: Data infrastructures, flows and practices in state education agencies. Big Data & Society, 6(1). https://doi.org/10.1177%2F2053951719853311
    • This article examines digital data infrastructures in state education agencies, considering the role of school monitoring. The authors argue that the rise of digital technologies creates new capabilities and powers, and suggest that teachers should be given more information about these tools.
  • Herold, B. & Molnar, M. (2018, November 6). Are companies overselling personalized learning? Education Week. https://www.edweek.org/ew/articles/2018/11/07/arecompanies-overselling-personalized-learning.html.*
    • This article critiques the use of the term “personalized learning” as it has no set definition and can refer to a variety of pedagogical strategies. Rather, the term has been used as a marketing tool for companies looking to sell their products to educators.
  • Herold, B. (2018, November 7). What does personalized learning mean? Whatever people want it to. Education Week. https://www.edweek.org/ew/articles/2018/11/07/what-does-personalized-learning-mean-whatever-people.html.*
    • This article critiques the variety of definitions applied to the term personalized learning, arguing that loose definitions can result in incoherent policy and ineffective educational outcomes.
  • Landri, P. (2018). Digital governance of education: Technology, standards and Europeanization of education. Bloomsbury Publishing.
    • This book explores how datafication impacts the experience of education. Landri argues that this datafication is related to the trend of standardized education.
  • Lindh, M., & Nolin, J. (2016). Information we collect: Surveillance and privacy in the implementation of Google apps for education. European Educational Research Journal, 15(6), 644-663. https://doi.org/10.1177%2F1474904116654917
    • This study argues that Google’s business model for online marketing is embedded in its educational tools, Google Apps for Education (GAFE).
  • Murphy, R. F. (2019). Artificial intelligence applications to support K-12 teachers and teaching: A review of promising applications, challenges, and risks. RAND Corporation. https://www.rand.org/pubs/perspectives/PE315.html*
    • The author explores how AI can be used to support K-12 teachers by assisting them with tasks rather than outright replacing them. Examined systems include intelligent tutoring, automated essay grading, and early warning protocols. Technical challenges are discussed.
  • Office of Education Technology, U.S. Department of Education. (2017, January 18). What is personalized learning? Personalizing the learning experience: insights from future ready schools. Medium. https://medium.com/personalizing-the-learning-experience-insights/what-is-personalized-learning-bc874799b6f*
    • This article presents the argument that the lack of a detailed definition for the term “personalized learning” has created problems for understanding the concept, and for implementing personalized learning curriculum. 
  • Pearson & EdSurge. (2016). Decoding adaptive. https://d3e7x39d4i7wbe.cloudfront.net/static_assets/PearsonDecodingAdaptiveWeb2.pdf*
    • This report investigates three questions. First, what is adaptive learning? Second, what is inside the “black box” of adaptive learning? Third, how do adaptive learning tools on the market differ? It argues that answering these questions is vital if these technologies are to improve teaching and learning.
  • Selwyn, N. (2016). Is technology good for education? John Wiley & Sons.*
    • This book challenges the notion that rapid digitalization of education is a net positive thing, arguing that we should question who stands to gain from this digitalization, and what is lost when educators convert to these methods.
  • Watters, A. (2017, June 9). The histories of personalized elearning. Hackeducation. http://hackeducation.com/2017/06/09/personalization*
    • This article challenges the notion that emerging technology in education represents a wholly new phenomenon by providing a history of personalized learning that spans decades.
  • Williamson, B. (2018). The hidden architecture of higher education: Building a big data infrastructure for the ‘smarter university.’ International Journal of Educational Technology in Higher Education, 15(1). https://doi.org/10.1186/s41239-018-0094-1*
    • This article examines a major data infrastructure program in higher education in the United Kingdom, examining how the program imagines an ideal of the smart university while reform occurs through marketization.
  • Williamson, B. (2016). Digital education governance: Data visualization, predictive analytics, and ‘real-time’ policy instruments. Journal of Education Policy, 31(2), 123-141. https://doi.org/10.1080/02680939.2015.1035758
    • This article maps digital policy implementation in education. It provides two case studies on new digital data systems: The Learning Curve from Pearson Education, and learning analytics platforms that track student performance using their digital data to predict outcomes.
  • Williamson, B. (2016). Digital education governance: An introduction. European Educational Research Journal, 15(1), 3-13. https://doi.org/10.1177%2F1474904115616630
    • This article provides an introduction to issues relating to digital education governance, including the trend of governing through data, the globalization of educational policy, accountability, global comparison and benchmarking, and emerging local, national, and international goals.
  • Wilson, A., Watson, C., Thompson, T. L., Drew, V., & Doyle, S. (2017). Learning analytics: Challenges and limitations. Teaching in Higher Education, 22(8), 991-1007. https://doi.org/10.1080/13562517.2017.1332026
    • This article raises concerns about the increased use of learning analytics in higher education for adults, laying out potential problems. The authors posit their own analytic framework based in a sociomaterial account of pedagogy.
  • Zeide, E. (2017). The structural consequences of big data-driven education. Big Data, 5(2), 164-172. https://doi.org/10.1089/big.2016.0061*
    • This article examines how data-driven tools change how schools make pedagogical decisions, fundamentally altering aspects of the education enterprise in the United States.

Chapter 43. Algorithms and the Social Organization of Work (Ifeoma Ajunwa and Rachel Schlund)⬆︎

  • Ajunwa, I., Crawford, K., & Ford, J. S. (2016). Health and big data: An ethical framework for health information collection by corporate wellness programs. The Journal of Law, Medicine & Ethics, 44(3), 474-480. https://doi.org/10.1177%2F1073110516667943
    • This essay discusses the manner in which data collection is “being utilized in wellness programs and the potential negative impact on the worker in regards to privacy and employment discrimination.” It is argued that ethical issues can be addressed “by committing to the well-settled ethical principles of informed consent, accountability, and fair use of personal health information data.” Furthermore, innovative approaches to wellness are offered that might allow for healthcare cost reduction.
  • Ajunwa, I. (2018). Algorithms at work: Productivity monitoring applications and wearable technology as the new data-centric research agenda for employment and labor law. Saint Louis University Law Journal, 63(1), 21-54.*
    • This article argues that the emergence of productivity monitoring applications and wearable technologies will lead to new legal issues for employment and labor law. These issues include concerns over privacy, unlawful employment discrimination, worker safety, and workers’ compensation. It is argued that the emergence of productivity monitoring applications will result in a conflict between the employer’s pecuniary interests and the privacy interests of the employees. The article ends by discussing future research for privacy law scholars in dealing with employee privacy and the collection and use of employee data.
  • Ajunwa, I. (2019). Age discrimination by platforms. Berkeley Journal of Employment and Labor Law, 40(1), 1-28.*
    • This article examines the manner in which platforms in the workplace might enable, facilitate, or contribute to age discrimination in employment. It discusses the legal difficulties in dealing with such practices, namely, meeting the burden of proof and assigning liability in cases where the platform acts as an intermediary. The article proceeds by offering a three-part proposal to combat the age discrimination that accompanies platform authoritarianism.
  • Ajunwa, I. (2020 Forthcoming). The paradox of automation as anti-bias intervention. Cardozo Law Review, 41.*
    • This article rejects the mistaken understanding of algorithmic bias as a technical issue. Instead, it is argued that the introduction of bias in the hiring process derives largely in part from an American legal tradition of deference to employers. The article discusses novel approaches that might be used to make employers and designers of algorithmic hiring systems liable for employment discrimination. In particular, the doctrine of discrimination per se is offered, which interprets an employer’s failure to audit and correct automated hiring platforms for disparate impact as prima facie evidence of discriminatory intent.
  • Ajunwa, I., & Greene, D. (2019). Platforms at work: Automated hiring platforms and other new intermediaries in the organization of work. Research in the Sociology of Work, 33(1), 61-91.*
    • This chapter discusses the manner in which tools provided by the sociology of work might be used to study work platforms, such as automated hiring platforms. The authors highlight five core affordances that work platforms offer employers and discuss how they combine to create a managerial frame in which workers are viewed as fungible human capital. Focus is given to the coercive nature of work platforms and the asymmetrical flow of information that favors the interests of employers.
  • Boulding, W., Staelin, R., Ehret, M., & Johnston, W. J. (2005). A customer relationship management roadmap: What is known, potential pitfalls, and where to go. Journal of Marketing, 69(4), 155-166. https://doi.org/10.1509%2Fjmkg.2005.69.4.155*
    • This article asserts that customer relationship management (CRM) is the result of the “continuing evolution and integration of marketing ideas and newly available data, technologies, and organizational forms…” It is predicted that CRM will continue to evolve as new ideas and technologies are incorporated into CRM activities. The article discusses what is known about CRM, the potential pitfalls and unknowns faced by its implementation, and offers recommendations for further research.
  • Brown, E. A. (2016). The fitbit fault line: Two proposals to protect health and fitness data at work. Yale Journal of Health Policy, Law and Ethics, 16(1), 1-50.
    • This article argues that federal law does not adequately protect employees’ health and fitness data from potential misuse; moreover, employers are incentivized to use such data when making significant decisions, such as hiring and promotions. The article offers two remedies for the improper use of health and fitness data. First, the enactment and enforcement by the Federal Trade Commission of a mandatory privacy labelling law for health-related devices and apps would improve employee control over their health data. Second, the Health Insurance Portability and Accountability Act of 1996 can extend its protections to the health-related data that employers may acquire about their employees.
  • Chen, L., Ma, R., Hannák, A., & Wilson, C. (2018). Investigating the impact of gender on rank in resume search engines. In Proceedings of the 2018 CHI conference on human factors in computing systems (pp. 1-14).
    • This work examines gender-based inequalities in the context of resume search engines, understood as “tools that allow recruiters to proactively search for candidates based on keywords and filters.” It focuses on the ranking algorithms used by three major hiring websites, namely, Indeed, Monster, and CareerBuilder. The examination concludes that “the ranking algorithms used by all three hiring sites do not use candidates’ inferred gender as a feature,” but there was “significant and consistent group unfairness against feminine candidates in roughly 1/3 of the job titles” examined.
  • Chung, C. F., Gorm, N., Shklovski, I. A., & Munson, S. (2017). Finding the right fit: Understanding health tracking in workplace wellness programs. In Proceedings of the 2017 CHI conference on human factors in computing systems (pp. 4875-4886).
    • This paper uses empirical data to gain an understanding of “employee experiences and attitudes towards health tracking in workplace health and wellness programs.” It is found that employees are concerned predominantly with program fit rather than privacy. The paper also highlights a gap between a holistic understanding of health and the easily measurable features with which workplace programs are concerned.
  • Citron, D., & Pasquale, F. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89(1), 1-34.
    • Predictive algorithms use data to rank and rate individuals. This article argues that overseeing such systems should be a critical aim of the legal system. Certain protections need to be implemented, such as allowing regulators to test scoring systems to ensure fairness and accuracy and providing individuals an opportunity to challenge decisions based on scores that mischaracterize them. It is argued that absent such protections, the adoption of predictive algorithms risks producing stigmatizing scores on the basis of biased data.
  • Danna, A., & Gandy, O. H. (2002). All that glitters is not gold: Digging beneath the surface of data mining. Journal of Business Ethics, 40(4), 373-386. https://doi.org/10.1023/A:1020845814009
    • This article examines the manner in which data mining technologies are applied in the market and the social concerns that arise in response to the application of such technologies in the public and private sectors. It is argued that, “at the very least, consumers should be informed of the ways in which information about them will be used to determine the opportunities, prices, and levels of service they can expect to enjoy in their future relations with a firm.” The Kantian principle of “universal acceptability” and the Rawlsian principles of special regard for those who are least advantaged are offered to guide the development of data mining and consumer profiles.
  • Fort, T. L., Raymond, A. H., & Shackelford, S. J. (2016). The angel on your shoulder: Prompting employees to do the right thing through the use of wearables. Northwestern Journal of Technology and Intellectual Property, 14(2), 139-170.
    • This article examines the use of wearables as personal information gathering devices that feed into larger data sets. It is argued that cybersecurity and privacy guidelines, such as those offered by the European Data Protection Supervisor and the 2014 National Institute of Standards and Technology Cybersecurity Framework, should be implemented from the bottom up in order to regulate the use of personal data.
  • Greenbaum, J. M. (2004). Windows on the workplace: Technology, jobs and the organization of office work (2nd ed.). Monthly Review Press.*
    • This book discusses the changes that occurred from the 1950s to the present in management policies, work organization, and the design of office information systems. Focusing on the experiences of office workers, the book highlights the manner in which technologies have been used by employers to increase profits and gain control over workers.
  • Greenbaum, D. (2016). Ethical, legal and social concerns relating to exoskeletons. ACM SIGCAS Computers and Society, 45(3), 234-239.
    • This paper provides an overview of the issues surrounding the emergence of exoskeletons. The paper aims to “provide anticipatory expert opinion that can provide regulatory and legal support for this technology, and perhaps even course-correction if necessary, before the technology becomes ingrained in society.”
  • Hull, G., & Pasquale, F. (2018). Toward a critical theory of corporate wellness. BioSocieties, 13(1), 190-212. https://doi.org/10.1057/s41292-017-0064-1
    • Employee wellness programs aim to incentivize and supervise healthy employee behaviors; however, there is little evidence that such programs increase productivity or profit. This article analyzes employee wellness programs as “providing an opportunity for employers to exercise increasing control over their employees.” The article concludes by arguing that a renewed commitment to public health programs occluded by the private sector’s focus on wellness programs would constitute a better investment of resources.
  • Kim, P., & Scott, S. (2019). Discrimination in online employment recruiting. St. Louis University Law Journal, 63(1), 93-118.
    • This article examines the question of when employers should be liable for discrimination based on their online recruiting strategies. It discusses the extent to which existing law can address concerns over discriminatory advertising, and it notes the often-overlooked provisions forbidding discriminatory advertising practices found in Title VII of the Civil Rights Act of 1964 and the Age Discrimination in Employment Act. The article concludes that existing doctrine is suited to address highly problematic advertising practices; however, the extent to which current law can address all practices with discriminatory effects remains uncertain.
  • Nissenbaum, H., & Patterson, H. (2016). Biosensing in context: Health privacy in a connected world. In D. Nafus (Ed.), Quantified: Biosensing technologies in everyday life (pp. 79-100). MIT Press.*
    • The novel information flows that accompany new health self-tracking practices create vulnerabilities for individual users and for society. This chapter argues that such vulnerabilities implicate privacy. Consequently, the authors contend that these information flows “are best evaluated according to the ends, purposes, and values of the contexts in which they are embedded.”
  • Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.*
    • This book discusses the manner in which corporations use large swaths of data to pursue profits. The use of such data is surrounded by secrecy, making it difficult to discern whether the interests of individuals are being protected. It is argued that the decisions firms make using data should be fair, non-discriminatory, and open to criticism; this requires eliminating the secrecy surrounding current practices and increasing the accountability of those who use such data to make important decisions.
  • Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). Mitigating bias in algorithmic hiring: Evaluating claims and practices. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 469-481.
    • This work conducts an in-depth analysis of the bias-related practices of vendors of algorithmic pre-employment assessments by examining the vendors’ publicly available statements. The analysis finds that technical systems cannot be evaluated apart from the context surrounding their use and deployment. The work concludes by offering several policy recommendations intended to reduce the risk of bias in the systems under consideration.
  • Srnicek, N. (2017). Platform capitalism. John Wiley & Sons.*
    • This book critically examines the emergence of platform capitalism, understood as the rise of businesses organized around digital platforms. The book situates the growth of platform capitalism within the broader history of capitalism’s development. It highlights the manner in which a small number of platform-based businesses are transforming the contemporary economy and how such businesses will need to adapt in the future in order to remain sustainable.
  • Zuboff, S. (1988). In the age of the smart machine: The future of work and power. Basic Books.*
    • This book discusses the computerization of the workplace and the manner in which it affects the work experience of labor and management. One of the concepts the book introduces is “informating,” understood as a process unique to information technology through which digitalization translates activities, objects, and events into information.
  • Williams, J. D., Lopez, D., Shafto, P., & Lee, K. (2019). Technological workforce and its impact on algorithmic justice in politics. Customer Needs and Solutions, 6(3), 84-91. https://doi.org/10.1007/s40547-019-00103-3
    • This paper argues that diversifying the workforce in the tech industry and incorporating interdisciplinary education, such as principles of ethical coding, can help remedy the negative consequences of algorithmic bias. Allowing the diverse perspectives of tech employees to influence the development of algorithms will result in systems that incorporate a broad range of worldviews, and such systems are less likely to overlook the experiences of those belonging to groups that have been historically underrepresented.

Chapter 44. Smart City Ethics: How “Smart” Challenges Democratic Governance (Ellen P. Goodman)⬆︎

  • Brauneis, R., & Goodman, E. P. (2018). Algorithmic transparency for the smart city. Yale Journal of Law and Technology, 20, 103.*
    • This article examines the limits of transparency around governmental deployment of big data analytics. The authors critique the opacity of governmental predictive algorithms and analyze predictive algorithm programs in local and state governments to test how impenetrable the resulting black boxes are and to assess whether open records processes would enable citizens to discover the policy judgments embodied by algorithms. The authors propose a framework for sufficient algorithmic transparency for governments and public agencies.
  • Brooks, B. A., & Schrubbe, A. (2016). The need for a digitally inclusive smart city governance framework. University of Missouri-Kansas City Law Review, 85, 943.
    • This article examines how smart cities in urban and rural areas effectively create and deploy open data platforms for citizens, and analyzes the considerations and differing governance mechanisms for rural cities compared to urban cities. The authors examine several cases of municipal smart technology adoption to explore policy options to distribute resources that address citizen needs in those areas.
  • Cardullo, P., Kitchin, R., & Di Feliciantonio, C. (2018). Living labs and vacancy in the neoliberal city. Cities, 73, 44-50.
    • This paper evaluates the role of living labs (LLs) – technologies that foster local digital innovation to “solve” local issues – in the context of smart cities. The authors outline various approaches to LLs and argue that they are actively used to bolster smart city discourse.
  • Edwards, L. (2016). Privacy, security and data protection in smart cities: A critical EU law perspective. European Data Protection Law Review, 2(1), 28-58.
    • This paper argues that smart cities combine the three greatest threats to personal privacy: the Internet of Things, Big Data, and the Cloud. Edwards notes that current regulatory frameworks fail to effectively address these threats and discusses whether and how EU data protection law can control them.
  • Goodspeed, R. (2015). Smart cities: Moving beyond urban cybernetics to tackle wicked problems. Cambridge Journal of Regions, Economy and Society, 8(1), 79-92.
    • This paper aims to describe institutions for municipal innovation and IT-enabled collaborative planning to address “wicked”, or inherently political, problems. The author proposes that smart cities, which use IT to pursue efficient systems through real-time monitoring and control, are equivalent to the idea of urban cybernetics debated in the 1970s. Drawing on Rio de Janeiro’s Operations Center, the author argues that wicked urban problems require solutions that involve local innovation and stakeholder participation.
  • Halpern, O., LeCavalier, J., Calvillo, N., & Pietsch, W. (2013). Test-bed urbanism. Public Culture, 25(2), 272-306.
    • This essay by Halpern et al. interrogates how ubiquitous computing infrastructures produce new forms of experimentation with urban territory. These protocols of “test-bed urbanism” are new methods for spatial development that are changing the form, function, economy, and administration of urban life.
  • Karvonen, A., Cugurullo, F., & Caprotti, F. (Eds.). (2018). Inside smart cities: Place, politics and urban innovation. Routledge.*
    • One contribution to this edited volume explores the tensions within second-generation smart city experiments such as Barcelona. It maps the shift from first-generation to second-generation policies developed by Barcelona’s liberal government and explores how concepts of technological sovereignty emerged. The authors reflect on the central tenets, potentialities, and limits of Barcelona’s Digital Plan and examine how the city’s new digital paradigm can address pressing urban challenges.
  • Kitchin, R. (2014). The real-time city? Big data and smart urbanism. GeoJournal, 79(1), 1-14.
    • Kitchin’s article draws on various examples of pervasive and ubiquitous computing in smart cities to detail how urban spaces are being instrumented with Big Data-producing digital devices and infrastructure. While smart city advocates argue that Big Data can provide material for envisioning and enacting more efficient, sustainable, productive, and transparent cities, Kitchin aims to critically reflect on the implications of big data and smart urbanism by analyzing five emerging concerns: the politics of big urban data, technocratic governance and city development, corporatization of city governance, hackable cities, and the panoptic city.
  • Kitchin, R., Cardullo, P., & Di Feliciantonio, C. (2018). Citizenship, justice and the right to the smart city. In P. Cardullo, C. Di Feliciantonio, & R. Kitchin (Eds.), The right to the smart city (pp. 1-24). Emerald Publishing Limited.*
    • This chapter situates the smart city within practical, political, and normative questions relating to citizenship, social justice, and the public good. The authors detail some troubling ethical issues associated with smart city technologies and examine how citizens have been conceived and operationalized in the smart city, proposing that the “right to the smart city” should be a fundamental principle of smart city endeavors.
  • Kitchin, R., & Dodge, M. (2019). The (in)security of smart cities: Vulnerabilities, risks, mitigation, and prevention. Journal of Urban Technology, 26(2), 47-65.*
    • This article examines how smart city technologies that are designed to produce urban resilience and reduce risk paradoxically create new vulnerabilities in city infrastructure and threaten to open up extended forms of criminal activity. By identifying forms of smart city vulnerabilities and detailing several examples of urban cyberattacks, the authors analyze existing smart city risk mitigation strategies and propose a set of systemic interventions that extends beyond technical solutions.
  • Marvin, S., Luque-Ayala, A., & McFarlane, C. (Eds.). (2015). Smart urbanism: Utopian vision or false dawn? Routledge.
    • This book critically assesses “smart urbanism” – the rebuilding of cities through the integration of digital technologies with neighborhoods, infrastructures, and people – as a panacea for contemporary urban challenges. The authors explore what new capabilities are created by smart urbanism, by whom, and with what exclusions, as well as the material and social consequences of technological development and application. The book aims to identify and convene researchers, commentators, software developers, and users within and outside mainstream smart urbanism discourses to assess which urban problems can be addressed by smart technology.
  • McFarlane, C., & Söderström, O. (2017). On alternative smart cities: From a technology-intensive to a knowledge-intensive smart urbanism. City, 21(3-4), 312-328.*
    • This article explores the influence of corporate-led urban development in the smart urbanism agenda. Drawing on critical urban scholarship and initiatives across the Global North and South, the authors examine steps towards an alternative smart urbanism in which urban priorities and justice drive the use, or lack of use, of technology.
  • Morozov, E., & Bria, F. (2018). Rethinking the smart city. Rosa Luxemburg Stiftung.
    • This article provides a political-economic analysis of smart city development to critique the promises of cheap and effective smart city solutions to social and political problems. The authors propose that the smart city can only be understood within the context of neoliberalism as public city infrastructure and services are managed by private companies, thereby de-centralizing and de-personalizing the political sphere. In response, the authors offer alternative smart city models that rely on democratic data ownership regimes, grassroots innovation, and cooperative service provision models.
  • O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
    • This book aims to reveal how mathematical models used today are opaque, unregulated, and uncontestable, and how they reinforce discrimination. The author shows how black box models shape individual and collective futures and undermine democracy by exacerbating existing inequalities, and calls on engineers and policymakers to develop and regulate the use of algorithms more responsibly.
  • Shelton, T., Zook, M., & Wiig, A. (2015). The ‘actually existing smart city’. Cambridge Journal of Regions, Economy and Society, 8(1), 13-25.*
    • This paper aims to ground critiques of the smart city in a historical and geographic context. The authors focus closely on smart city policies in Louisville and Philadelphia (examples of “actually existing” smart cities rather than exceptional, paradigmatic centers such as Songdo or Masdar) to analyze how these policies arose and their unequal impact on the urban landscape. The authors argue that an uncritical, ahistorical, and aspatial understanding of data presents a problematic approach to data-driven governance and the smart city imaginary.
  • Söderström, O., Paasche, T., & Klauser, F. (2014). Smart cities as corporate storytelling. City, 18(3), 307-320.*
    • This article examines corporate visibility and legitimacy in the smart city market. Drawing on actor-network theory and critical planning theory, the paper analyzes how IBM’s Smarter Cities campaign tells a story aimed at making the company an obligatory passage point in the implementation of urban technologies, and it calls for the creation of alternative smart city stories.
  • Townsend, A. M. (2013). Smart cities: Big data, civic hackers, and the quest for a new utopia. WW Norton & Company.
    • This book explores the history of urban information technologies to trace how cities have used and continue to use evolving technology to address increasingly complex policy challenges. The author analyzes the mass interconnected networks of contemporary metropolitan centers, drawing from examples of smart technology applications in cities around the world to document and examine emerging techno-urban landscapes. The author illuminates the motivations, aspirations, and shortcomings of various smart city stakeholders, including entrepreneurs, municipal government officials, and software developers, and investigates how these actors shape urban futures.
  • Vanolo, A. (2014). Smartmentality: The smart city as disciplinary strategy. Urban Studies, 51(5), 883-898.*
    • This article analyzes the power and knowledge implications of smart city policies that support new ways of imagining, organizing, and managing the city while impressing a new moral order to distinguish between the “good” and “bad” city. The author uses smart city politics in Italy as a case study to examine how smart city discourse has produced new visions of the “good city” and the role of private actors and citizens in urban management development.
  • Wiig, A. (2018). Secure the city, revitalize the zone: Smart urbanization in Camden, New Jersey. Environment and Planning C: Politics and Space, 36(3), 403-422.*
    • This paper analyzes the impacts of smart city agendas aligning with neoliberal urban revitalization efforts by examining redevelopment efforts in Camden, New Jersey. The author analyzes how Camden’s citywide multi-instrument surveillance network contributed to policing strategies that controlled the circulation of residents and prioritized the flow of capital into spatially bounded zones. The author underscores the crucial role of this surveillance-driven policing strategy in shifting the narrative of Camden from disenfranchised to economically and politically viable.
  • Yeung, K. (2017). ‘Hypernudge’: Big Data as a mode of regulation by design. Information, Communication & Society, 20(1), 118-136.
    • This article argues that the analytic phenomenon of “Big Data” can be understood as a mode of “design-based” regulation. Yeung draws on regulatory governance scholarship to argue that algorithmic decision-guidance techniques rely upon “hypernudging” to powerfully and continuously alter people’s behavior in predictable ways without forbidding any options or changing economic incentives. Yeung contends that this form of design-based control has worrying implications for democracy, since the concerns it raises are not satisfactorily resolved by individual notice and consent.

User’s Note⬆︎

  • An asterisk (*) after a reference indicates that it is included among the Further Readings listed at the end of the Handbook chapter.
  • This annotated bibliography is the result of an ongoing collaboration among faculty and students affiliated with the Ethics of AI Lab, Centre for Ethics, University of Toronto. Contributing editors include:
    • 2019-20: Tyler Biswurm, Stacy Chen, Amelia Eaton (lead editor), Stephanie Fielding, Suzanne van Geuns, Vinyas Harish, Chris Hill, Tobias Hobbins, Chris Longley, Liam McCoy, Nishila Mehta, Unnati Patel, Faye Shamshuddin, and Chelsea Tao (special thanks to Julius Dubber)