Chapter 1. The Artificial Intelligence of the Ethics of Artificial Intelligence: An Introductory Overview for Law and Regulation (Joanna J. Bryson)
https://www.doi.org/10.1093/oxfordhb/9780190067397.013.1
- Adams-Prassl, J. (2022). Regulating algorithms at work: Lessons for a ‘European approach to artificial intelligence’. European Labour Law Journal, 13(1), 30–50.
- This paper discusses legal frameworks for regulating the use of automation and AI in workplace settings and highlights how the AI Act proposed in 2021 suffers from some of the same weaknesses as existing regulatory frameworks. The author raises three prominent areas of concern: data collection; forms of data processing that can lead to discrimination; and algorithmic control that lacks transparency. The author shows that the 2019 Platform to Business Regulation (P2B), the 2016 General Data Protection Regulation (GDPR), and the 2019 Directive on Transparent and Predictable Working Conditions (TPWC) all address these concerns to some degree, and that the shortcomings of these instruments can inform the development of new legislation such as the AI Act.
- Askell, A., et al. (2019). The role of cooperation in responsible AI development. arXiv:1907.04534
- The authors argue that the present AI industry ecosystem does not provide companies with either market or regulatory incentives for responsible AI development. They propose solutions that treat responsible AI development as a collective action problem and aim to overcome the negative effects of the competitive AI market by facilitating greater cooperation and oversight among development companies.
- Avin, S., et al. (2021). Filling gaps in trustworthy development of AI. arXiv:2112.07773
- This article offers a combination of industry standards, government interventions, and policy mechanisms which would improve the trustworthiness of AI systems and mitigate their potential for harm. The authors argue that measures such as improving transparency, involving third-party monitors, and protecting user privacy would regulate AI development in a way that earns greater trust from the public.
- Boden, M., et al. (2010).* Principles of robotics. Engineering and Physical Sciences Research Council (EPSRC). https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/
- This document proposes a set of five ethical rules to guide the designers, builders, and users of robots. The rules were formulated with the purpose of introducing robots in a manner that inspires public trust and confidence, maximizes potential benefits, and avoids unintended consequences. The authors assert that human designers and users—and not robots themselves—are the appropriate subjects of robotics regulation because robots are tools which are not ultimately responsible for their actions.
- Brundage, M., et al. (2018).* The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. University of Oxford, Future of Humanity Institute, University of Cambridge, Centre for the Study of Existential Risk, Center for a New American Security, Electronic Frontier Foundation, OpenAI. https://arxiv.org/abs/1802.07228
- This report surveys the landscape of potential digital, physical, and political security threats from malicious uses of AI and proposes ways to better forecast, prevent, and mitigate these threats. The authors focus on identifying the sorts of attacks that are likely to emerge if adequate defenses are not developed and recommend a broad spectrum of effective approaches to face them.
- Brundage, M., et al. (2020). Toward trustworthy AI development: Mechanisms for supporting verifiable claims. arXiv:2004.07213
- This policy paper, authored by representatives of stakeholders from across academia, industry, and civil society, proposes a range of institutional and technical mechanisms to enable the verification of claims concerning AI development. These mechanisms and the accompanying policy recommendations aim to provide more reliable methods for ensuring safety, security, fairness, and privacy protection in AI systems.
- Bryson, J. J. (2019).* The past decade and future of AI’s impact on society. In Towards a new enlightenment?: A transcendent decade (pp. 127–159). Openmind BBVA/Turner. https://www.bbvaopenmind.com/en/articles/the-past-decade-and-future-of-ais-impact-on-society/
- This article reflects on the AI-induced social and economic changes that happened in the decade after smartphones were introduced in 2007, extrapolates from this analysis to predict imminent political, economic, and personal challenges, and submits corresponding policy recommendations. The author argues that AI is less novel than is often assumed, and that its familiar challenges can be managed with appropriate regulation.
- Bryson, J. J., et al. (2017).* Of, for, and by the people: The legal lacuna of synthetic persons. Artificial Intelligence and Law, 25(3), 273–291. https://doi.org/10.1007/s10506-017-9214-9
- This article argues that conferring legal personhood on synthetic entities is morally unnecessary and legally problematic. The authors highlight the adverse consequences of certain noteworthy precedents and conclude that while giving AI legal personhood may have emotional or economic appeal, the difficulties of holding rights-violating synthetic entities to account outweigh these dubious considerations.
- Cadwalladr, C. (2018, March 18).* ‘I made Steve Bannon’s psychological warfare tool’: Meet the data war whistleblower. The Guardian. https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump
- This news article presents a profile of Christopher Wylie, the former research director of Cambridge Analytica who blew the whistle on the company’s illicit data-collection practices and influence campaign in the 2016 US presidential election.
- Cath, C., et al. (2017). Artificial intelligence and the ‘good society’: The US, EU, and UK approach. Science and Engineering Ethics. https://doi.org/10.1007/s11948-017-9901-7
- This article provides a comparative assessment of reports issued by the White House, the European Parliament, and the UK House of Commons to outline their respective visions on how to prepare society for the widespread use of artificial intelligence. After critiquing gaps in these visions, the authors propose two supplementary measures: first, the creation of an international advisory council, and second, a commitment to grounding visions of a “good AI society” in human dignity.
- Clark, J., & Hadfield, G. K. (2019). Regulatory markets for AI safety. arXiv:2001.00078
- This paper proposes a model of AI regulation whereby governments require companies building or deploying AI to purchase the services of private regulators, whom governments would directly license. The goal is to create a market for regulatory services in which private regulators compete for the business of companies building and deploying AI.
- Claxton, G. (2015).* Intelligence in the flesh: Why your mind needs your body much more than it thinks. Yale University Press.
- This book argues—based on work in neuroscience, psychology, and philosophy—that human intelligence emanates from the body instead of the mind. With reference to examples like the endocrine system, the author asserts that the body performs intelligent computations that people either overlook or falsely attribute to the brain. The author contends that the mind’s undeserved esteem has led to perverse social outcomes like the preference for white-collar over blue-collar labor.
- Cohen, J. E. (2013).* What privacy is for. Harvard Law Review, 126(7), 1904–1933. JSTOR. www.jstor.org/stable/23415061
- This article argues that privacy—contrary to its image as an outdated, anti-progressive, and hence inessential ideal—is an essential precondition for people to be self-determining. The author asserts that competing imperatives like national security, efficiency, and entrepreneurship have been permitted to trample over privacy because it is perceived as an optional benefit to the inherent self-determining capacity of liberal agents. By contrast, the author asserts that the self is socially constructed, and that privacy is therefore an essential personal shield against the perverse influence tactics of commercial and government actors.
- Dennett, D. C. (1978).* Why you can’t make a computer that feels pain. In Brainstorms: Philosophical essays on mind and psychology (1st ed., pp. 190–229). Bradford Books.
- This essay argues that the ordinary concept of pain is incoherent and thus that any candidate for a pain-capable robot would be rejected by human judges because its experience would contradict at least one of our various intuitions about pain. The author accepts that it is possible in principle for a robot to experience pain, and for humans to accept that it does, if a better physiological theory of pain is developed.
- Dwivedi, Y. K., et al. (2021). Artificial intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57, 101994. https://doi.org/10.1016/j.ijinfomgt.2019.08.002
- This article collects contributions from an interdisciplinary group of researchers on the social, economic, and intellectual impact of AI on various domains, including industry, government, science, the humanities, and law. Each perspective discusses the challenges, opportunities, and research agenda for AI in the relevant domain.
- Guihot, M., et al. (2017). Nudging robots: Innovative solutions to regulate artificial intelligence. Vanderbilt Journal of Entertainment & Technology Law, 20(2), 385–456.
- This article argues that public regulators can overcome the obstacles to their control of artificial intelligence (e.g. scarce public resources and the power of technology companies) and remedy the technology’s dangerous under-regulation by adopting a predictive two-step process: first, they would signal expectations to influence or “nudge” AI designers; and second, they would participate in and interact with relevant industries. These steps would permit regulators to gain expertise, competently assess risks, and develop appropriate regulatory priorities.
- Gunkel, D. J. (2018).* Robot rights. MIT Press.
- This book explores whether and to what extent robots can and should have rights. The author evaluates, analyzes, and ultimately rejects four key positions on this question before offering an alternative way of conceptualizing the social situation of robots and the implications they have for existing moral and legal systems.
- Hervey, M., & Lavy, M. (2021). The law of artificial intelligence. Sweet & Maxwell.
- This book examines how existing civil and criminal law will apply to AI and explores the role of emerging laws designed specifically for regulating and harnessing AI. The topics covered in this book include liability arising in connection with the use of AI, the impact of AI on intellectual property, data protection, smart contracts, and the deployment of AI in legal services and the justice system.
- Hüttermann, M. (2012).* DevOps for developers. Apress/Springer.
- This book presents a practical introduction to “DevOps,” which is a set of practices that aim to streamline the software delivery process by fostering collaboration between software development and IT operations.
- Katyal, S. K. (2019). Private accountability in the age of artificial intelligence. UCLA Law Review, 66(1), 54–141.
- This article argues that artificial intelligence raises novel civil rights concerns whose resolution requires augmenting public regulation with private industry standards. The author contends that private industry standards, including codes of conduct, impact statements, and whistleblower protection, represent a new generation of accountability measures which have the potential to outperform ordinary regulation in civil rights enforcement.
- Kearns, M., & Roth, A. (2019). The ethical algorithm: The science of socially aware algorithm design. Oxford University Press.
- This book seeks to illustrate how human values can be integrated into algorithmic systems. The authors discuss data privacy, algorithmic fairness, feedback loops, data-driven scientific research, and additional ethical issues, including transparency and accountability. The authors highlight important trade-offs facing the development and governance of algorithmic systems and offer policy proposals in response.
- Kroll, J. A., et al. (2017).* Accountable algorithms. University of Pennsylvania Law Review, 165(3), 633–705. https://scholarship.law.upenn.edu/penn_law_review/vol165/iss3/3
- The authors argue that foundational computer science techniques provide an optimal way of holding automated decision systems accountable, including in scenarios where outdated accountability mechanisms and legal standards fail to do so. According to the authors, these techniques avoid the limitations of more popular proposals like source code and input transparency. The article suggests that using them may even improve the governance of decision-making in general.
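One example of the kind of technique the article surveys is a cryptographic commitment, which lets a decision-maker prove after the fact that a secret decision policy was fixed in advance. The sketch below is a minimal illustration of ours, not code from the paper; the policy string is hypothetical.

```python
# Minimal sketch of a cryptographic commitment (illustrative only; the
# policy string is hypothetical, and this is not code from Kroll et al.).
# A decision-maker publishes hash(nonce + policy) before making decisions;
# revealing (nonce, policy) later proves the policy was not changed.
import hashlib
import secrets

def commit(policy: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, nonce); publish the commitment, keep the rest."""
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + policy).digest(), nonce

def verify(commitment: bytes, nonce: bytes, policy: bytes) -> bool:
    """Check a revealed policy against the earlier public commitment."""
    return hashlib.sha256(nonce + policy).digest() == commitment

c, n = commit(b"approve if score > 0.7")
assert verify(c, n, b"approve if score > 0.7")      # honest reveal passes
assert not verify(c, n, b"approve if score > 0.5")  # altered policy fails
```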
- List, C., & Pettit, P. (2011).* Group agency: The possibility, design, and status of corporate agents. Oxford University Press.
- The authors argue that group agents like companies, churches, and states are irreducible to the individual agents that constitute them, and that any legitimate approach to the social sciences, law, morality, and politics must take account of this fact. The book is grounded in ideas from social choice theory, economics, and philosophy.
- Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007
- This article aims to situate the emerging field of explainable AI and algorithmic transparency within the broader context of social science research on human explanation. The author argues that explainable AI should build on this existing research, particularly insights from philosophy, social psychology, and cognitive science.
- Nemitz, P. (2018). Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0089
- The author argues that the principles of democracy, rule of law, and human rights must be incorporated into AI by design and proposes a practical framework to guide this practice. According to the author, this practice is necessary to maintain the strength of constitutional democracy because (a) AI will eventually govern core functions of society and (b) the decoupling of technology from constitutional principles has already precipitated illegal and undemocratic behavior. The author considers which of AI’s challenges can be regulated by ethical norms and which demand the force of law.
- OECD. (2019).* Recommendation of the council on artificial intelligence, OECD/LEGAL/0449 (No. 0449; OECD Legal Instruments). Organisation for Economic Co-operation and Development. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
- This document presents the first intergovernmental standard on artificial intelligence. It aims to foster innovation and trust in AI by promoting responsible stewardship as well as human rights and democratic values. The document presents policy recommendations which include the “OECD Principles on AI”: (1) inclusive growth, sustainable development, and well-being; (2) human-centered values and fairness; (3) transparency and explainability; (4) robustness, security, and safety; and (5) accountability.
- O’Neill, O. (2002).* A question of trust: The BBC Reith lectures 2002. Cambridge University Press.
- This series of lectures explores whether modern democratic society’s debilitating “crisis of trust” can be solved by making people and institutions more accountable. Among other subjects, the lectures investigate whether the complex systems behind customary approaches to accountability improve or actually damage trust.
- O’Reilly, T. (2017).* WTF?: What’s the future and why it’s up to us. Harper Business.
- This book argues that humans, and not machines, control the ultimate outcomes of technological progress. According to the author, current concerns about AI are misplaced because they focus on futuristic hypotheticals instead of the currently pressing—and crucially, familiar—problems that perverse market incentives drive tech companies to instigate. For instance, the author contemplates how markets incentivize corporations to use technology for cost-cutting efficiency instead of meaningful innovation.
- Palmieri, S. (2021). Inevitable influences: AI-based medical devices at the intersection of medical devices regulation and the proposal for AI regulation. European Journal of Health Law, 28(4), 341–358.
- This paper discusses the European Commission’s regulation of artificial intelligence in the medical field. It explains that the legal apparatus governing AI-based medical devices derives from both the Medical Devices Regulation framework and the AI regulation proposed in April 2021. The Medical Devices Regulation protects users by classifying devices into risk classes and ensuring that devices on the market adhere to established standards. The author argues that the proposed AI regulation takes a similar approach and makes an important contribution to the regulation of AI in the medical field.
- Pasquale, F. (2020). New laws of robotics: Defending human expertise in the age of AI. Harvard University Press.
- Recalling Isaac Asimov’s Three Laws of Robotics, this book proposes four new laws for governing AI. First, AI should complement professionals, not replace them. Second, AI should not counterfeit humanity. Third, AI should not intensify zero-sum arms races. Fourth, AI must always indicate the identity of their creator(s), controller(s), and owner(s). The author presents examples and case studies in healthcare, education, media, and other domains to support these new laws for the governance of AI.
- Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x
- The author contends that the current trend of attempting to explain the behavior and decisions of black box machine learning models is deeply flawed and potentially harmful. The author supports this contention by drawing on examples from healthcare, criminal justice, and computer vision, and proceeds to offer an alternative approach: building models that are not opaque, but inherently interpretable.
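To make the author’s contrast concrete, the following minimal sketch (ours, not the article’s; the toy data and feature names are hypothetical) fits both an opaque model, which would need a post-hoc explanation, and a small model whose decision logic is itself the explanation.

```python
# Sketch (not from Rudin's paper): the same toy decision modeled two ways.
# Feature names and labels are hypothetical.
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[0, 1], [1, 0], [1, 1], [0, 0]]  # e.g., [prior_events, employed]
y = [0, 1, 1, 0]                      # e.g., a binary risk label

# Black-box route: an opaque ensemble whose behavior must be approximated
# after the fact by a separate (and possibly unfaithful) explanation model.
black_box = RandomForestClassifier(n_estimators=100).fit(X, y)

# Interpretable route: a shallow tree whose printed rules *are* the
# explanation, so no post-hoc surrogate is needed.
interpretable = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(interpretable, feature_names=["prior_events", "employed"]))
```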
- Santoni de Sio, F., & van den Hoven, J. (2018).* Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI, 5(15). https://doi.org/10.3389/frobt.2018.00015
- The authors argue that there are two necessary conditions to implement “meaningful human control” over an autonomous system: (1) a “tracking” condition, under which the system must be responsive to the moral reasoning of its human designers and deployers and to morally relevant facts in its environment, and (2) a “tracing” condition, under which the system’s actions must always be attributable to at least one of its human designers or operators. The authors note that the principle of meaningful human control (and human moral responsibility) has gained traction as a solution to the “responsibility gap” created by autonomous systems.
- Shanahan, M. (2015).* The technological singularity. MIT Press.
- This book explores the idea and implications of the “singularity”: the hypothetical event in which humans will be overtaken by artificial intelligence or enhanced biological intelligence. The author imagines and interrogates a range of possible scenarios for the event, including the possibility of superintelligent machines which challenge the ordinary concepts of personhood, responsibility, rights, and identity.
- Sipser, M. (2006).* Introduction to the theory of computation (2nd ed.). Thomson Course Technology.
- This textbook provides a comprehensive and approachable introduction to topics in the theory of computation. The author conveys the fundamental mathematical properties of computer hardware, software, and applications with a blend of practical, philosophical, and mathematical discussion.
- Smuha, N. A. (2021). From a ‘race to AI’ to a ‘race to AI regulation’: regulatory competition for artificial intelligence. Law, Innovation and Technology, 13(1), 57–84.
- This paper discusses how the race to develop AI has spurred a parallel race to develop regulations for trustworthy AI. The author proposes the idea of “regulatory competition,” where competition in developing regulation between different entities leads to the emergence of the best measures. The author argues that this competition will spur the creation of regulatory frameworks, since countries that develop such frameworks first will gain an advantage, and since each country wants to set the international standard in regulation. The author concludes that although such competition has drawbacks, it could be beneficial for the creation of these frameworks.
- Wachter, S., et al. (2017). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005
- This article argues that the European General Data Protection Regulation (GDPR) does not—contrary to popular interpretation—afford a “right to explanation” of automated decision-making, and that its regulatory force is therefore diminished. According to the authors, the defect is attributable to the legislation’s imprecise language and lack of well-defined rights and safeguards. The article recommends a series of specific legislative steps to improve the GDPR’s adequacy in this area.
- Wachter, S., et al. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology (Harvard JOLT), 31(2), 841–888.
- This article argues that many of the significant limitations of algorithmic interpretability and accountability can be overcome by pursuing explanations which help data subjects act on, instead of understanding, automated decisions. The authors propose three aims for explanations which serve this purpose: (1) to convey the rationale of the decision, (2) to provide grounds to contest the decision, and (3) to suggest viable steps to achieving a more favorable future decision. The authors assert that counterfactuals are an ideal means of explaining automated decisions because they satisfy these aims.
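As a minimal illustration of the idea (ours, not the authors’; the scoring rule and feature names are hypothetical), a counterfactual explanation can be produced by searching for the smallest input change that flips an automated decision, e.g., “had your income been X, the loan would have been approved.”

```python
# Sketch (ours, not the authors' method): find the smallest income increase
# that flips a hypothetical automated credit decision, yielding a
# counterfactual explanation of the decision.
def approve(income: float, debt: float) -> bool:
    """Hypothetical scoring rule: approve when the score is positive."""
    return 2.0 * income - 1.5 * debt - 50.0 > 0

def counterfactual_income(income: float, debt: float, step: float = 0.5):
    """Smallest income (searched upward) at which the decision flips."""
    if approve(income, debt):
        return None  # already approved; no counterfactual needed
    candidate = income
    while candidate <= 10 * max(income, 1.0):  # bounded search
        candidate += step
        if approve(candidate, debt):
            return candidate
    return None

print(counterfactual_income(income=20.0, debt=10.0))  # -> 33.0
```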
- Wong, A. (2020). The laws and regulation of AI and autonomous systems. In L. Strous, R. Johnson, D. Grier, & D. Swade (Eds.), Unimagined futures – ICT opportunities and challenges (pp. 38–54). Springer. https://doi.org/10.1007/978-3-030-64246-4_4
- This paper briefly explores prominent legal questions surrounding the regulation of AI. The author discusses automation in the workplace and its impact on labor and employment, intellectual property and AI, data ownership, big data, and personhood for AI. The author also highlights regulatory frameworks that address each of these challenges, such as the General Data Protection Regulation (GDPR) and other European legislation.
Chapter 2. The Ethics of the Ethics of AI (Thomas M. Powers and Jean-Gabriel Ganascia)
https://www.doi.org/10.1093/oxfordhb/9780190067397.013.2
- Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine ethics. Cambridge University Press. https://doi.org/10.1017/CBO9780511978036
- This edited volume presents essays which consider, among other subjects: why it is necessary to implement ethical capacities in autonomous machines, what is required for their implementation, potential approaches to their implementation, as well as philosophical and practical challenges to the study of machine ethics.
- Ananny, M. (2016). Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Science, Technology, & Human Values, 41(1), 93–117.
- In this article, Ananny outlines an agenda for a sociotechnical ethics of algorithms, which approaches algorithms as “assemblages of institutionally situated code, practices, and norms.” He outlines three dimensions for scrutinizing algorithms’ ethics: (a) the ability to convene people by inferring associations from computational data, (b) the power to judge similarity and suggest probable actions, and (c) the capacity to organize time and influence when action happens. Ananny argues that these offer starting points for holding algorithmic assemblages accountable.
- Arkin, R. C. (2009).* Governing lethal behavior in autonomous robots. Chapman & Hall/CRC Press.
- This book considers how to develop autonomous robots which use lethal force ethically. Contemplating the possibility of robots being more humane than humans on the battlefield, the author examines the philosophical basis, motivation, and theory of ethical control systems in robots, and presents related design recommendations.
- Awad, E., et al. (2018).* The Moral Machine experiment. Nature, 563(7729), 59–64. https://doi.org/10.1038/s41586-018-0637-6
- This article describes the results of deploying an online experimental platform, the Moral Machine, to generate a large global dataset aggregating real human responses to the moral dilemmas faced by autonomous vehicles. It presents findings on global and regional moral preferences, as well as findings on demographic and culture-dependent variations in moral preferences. The authors discuss how these findings can contribute to developing global, socially acceptable principles for machine ethics.
- Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
- This book argues that if machines surpass humans in general intelligence, then superintelligent machines could replace humans as the dominant lifeform on Earth. The book imagines various paths along which this event transpires and considers how humans could anticipate and manage the existential threat it poses.
- Brey, P. A. E. (2012).* Anticipatory ethics for emerging technologies. NanoEthics, 6(1), 1–13. https://doi.org/10.1007/s11569-012-0141-7
- This article presents an original approach, “anticipatory technology ethics” (ATE), to the ethics of emerging technology. It evaluates alternative approaches and formulates ATE in their context. The author argues that uncertainty is a central obstacle to the ethical analysis of emerging technology, and therefore that forecasting- and prediction-oriented approaches are necessary to reach useful ethical conclusions about emerging technology.
- Brundage, M. (2014). Limitations and risks of machine ethics. Journal of Experimental & Theoretical Artificial Intelligence, 26(3), 355–372. https://doi.org/10.1080/0952813X.2014.895108
- This article argues that “machine ethics” remains inadequate in terms of achieving its intended social outcomes. The author discusses several inherent limitations that prohibit machine ethics from making any guarantee of ethical machine behavior. These limitations involve the computational limits of AI and the nature of ethical decisions, accountability, and consequences as situated within complex real-world settings. The article contends that even if the technical challenges of machine ethics were resolved, the machine ethics concept would likely still retain a degree of inadequacy.
- Cave, S., et al. (2019). Motivations and risks of machine ethics. Proceedings of the IEEE, 107(3), 562–574. https://doi.org/10.1109/JPROC.2018.2865996
- This article surveys reasons for and against pursuing machine ethics, here understood as research aiming to build “ethical machines.” Clarifying some of the philosophical issues surrounding the field and its goals, the authors ask several foundational questions about the opportunities and risks that machine ethics presents. For example, under what conditions is a given moral reasoning system likely to enhance the ethical alignment of machines and, more importantly, under what conditions are such systems likely to fail? How might machines deal adequately with value pluralism, especially in cases for which a single, definite answer is inappropriate? If conditions exist that would justify granting a form of moral status (e.g., moral patiency) to (suitably advanced) machines, how might automated ethical reasoning threaten to undermine human moral responsibility?
- Crawford, K. (2021). Atlas of AI. Yale University Press.
- Informed by perspectives from Science and Technology Studies (STS), law, and political philosophy, this book characterizes AI as a technology of extraction, from the energy and minerals needed to build and sustain its infrastructure, to the exploited workers behind “automated” services, to the data AI collects from its human users. The author assesses AI in terms of a web of power relations that are dynamically reshaping how and why particular questions raised by AI ethics are prioritized.
- Dehaene, S., et al. (2017).* What is consciousness, and could machines have it? Science, 358(6362), 486–492. https://doi.org/10.1126/science.aan8871
- This article argues that despite recent advances in artificial intelligence, current machines predominantly perform computations that reflect basic unconscious processing (“C0”) in the human brain. The article contends that the standard for synthetic consciousness must be the human brain, and that since machines do not perform computations which are comparable to conscious human processing (“C1” and “C2”), they cannot be called conscious.
- Dennett, D. C. (1987).* The intentional stance. MIT Press.
- This book argues that entities understand and anticipate one another’s behavior by adopting a predictive strategy of interpretation—the “intentional stance”—that treats the entity under examination as if it were a rational agent which makes choices based on its beliefs and desires. According to this argument, entities which adopt the intentional stance reason deductively from hypotheses about their subject’s beliefs and desires to conclude what they ought to decide in a given situation and from there predict what they will actually do in that situation.
- Etzioni, A., & Etzioni, O. (2017).* Incorporating ethics into artificial intelligence. The Journal of Ethics, 21(4), 403–418. https://doi.org/10.1007/s10892-017-9252-2
- This article argues that it is unnecessary to confer moral autonomy on artificially intelligent machines because we can readily guarantee ethical behavior from them by programming them with the existing instructions of law and their owners’ individual moral preferences. According to the article, many of the moral decisions facing AI machines are not discretionary and therefore easily automated because they are dictated by law. In cases where the decisions are discretionary, the article proposes that AI machines “read” and adhere to their owner’s moral preferences.
- Greene, D., et al. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. Hawaii International Conference on System Sciences. https://doi.org/10.24251/HICSS.2019.258
- Using frame analysis to examine recent high-profile value statements endorsing ethical design for artificial intelligence and machine learning, this conference paper draws two broad conclusions. The first conclusion is that these value statements assume a deterministic vision of Artificial Intelligence/Machine Learning (AI/ML), the ethics of which are best addressed through technical and design expertise. Therefore, there is no meaningful indication in these statements that AI/ML can be limited or constrained. Secondly, while the ethical design parameters suggested by these statements echo processual elements and contextual framing of critical methodologies in science and technology studies (STS), these statements lack this critical scholarship’s explicit focus upon normative ends devoted to social justice or equitable human flourishing. Rather, the “moral background” of these statements appears to be closer to a version of conventional business ethics than to more radical traditions of social and political justice active today.
- Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics. MIT Press.
- This book examines what selected decision criteria, categories, and framings reveal about the embedded intentions of moral status ascription in AI. Arguments that consider machines as potential moral agents perform acts of exclusion or inclusion in attributing or denying moral agency or moral patiency. Such actions, the author states, involve assumptions of anthropocentricity that effectively designate the machine as an instrument of human use. This book presents a philosophical, Levinas-infused approach toward the construction of machine ethics and its associated categories.
- Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8
- This paper points to the ongoing separation between the abstract values of ethics and the technical discourses of actual implementation. This disciplinary gap, the author observes, characterizes a field that ultimately lacks strong reinforcement mechanisms, as AI ethics in some settings is pursued as an aspect of a marketing strategy rather than as a fundamental design principle. The author states that it is necessary to build tangible bridges between abstract values and technical implementations. However, a focus on technological phenomena should not dilute or preclude a focus on genuinely social aspects and on practitioner self-responsibility.
- Hoffmann, C. H., & Hahn, B. (2020). Decentered ethics in the machine era and guidance for AI regulation. AI & Society, 35(3), 635–644. https://doi.org/10.1007/s00146-019-00920-z
- This paper asserts that while a large number of AI ethics guidelines appear to propose concrete ideas, few proposals are philosophically sound. Guidelines drafted by government bodies, non-profit communications teams, or marketing-focused departments might not have fully considered their philosophical or practical implications. Viewing this situation critically so as to ground policy steps that are prepared to take questions of AI ethics and AI moral status into account, the authors pursue a number of broad questions, including: What are ethical AI systems? What is the moral status of AI? To what extent are machine ethics necessary? What are the implications for AI regulation? Drawing from selected specific cases, this paper is meant to serve as a point of departure for the development of philosophically informed policies.
- Horty, J. F. (2001).* Agency and deontic logic. Oxford University Press.
- This book develops deontic logic—that is, the logic of ethical concepts like obligation and permission—against the background of a formal theory of agency. It rejects the common assumption that what an agent ought to do is the same as what it ought to be that the agent does. By drawing on elements of decision theory, the book presents an alternative and novel account of what agents and groups of agents ought to do under various conditions and over extended periods of time.
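The rejected assumption can be stated in the “stit” (“sees to it that”) notation on which the book builds; the following formulation is a rough gloss of ours, not a quotation.

```latex
% Rough gloss (ours) of the assumption Horty rejects, where \bigcirc is the
% "ought to be" operator and [\alpha\ \mathsf{stit}\colon A] reads
% "agent alpha sees to it that A":
%   "what alpha ought to do" = "what it ought to be that alpha does"
\mathrm{Ought}(\alpha, A) \;\equiv\; \bigcirc\,[\alpha\ \mathsf{stit}\colon A]
```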
- Jobin, A., et al. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399. https://doi.org/10.1038/s42256-019-0088-2
- Through a review of grey literature and soft-law documents, the authors map and analyze a corpus of principles and guidelines on ethical AI. Global convergence emerges around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility, and privacy). However, the authors state there is substantive divergence regarding how these principles are interpreted, why they are deemed important, which issues, domains or actors they pertain to, and how they should be implemented. This paper’s findings highlight the importance of integrating guideline-development efforts with both substantive ethical analysis and adequate implementation strategies.
- Kurzweil, R. (2006).* The singularity is near: When humans transcend biology. Penguin Books.
- This book envisions a “singularity” event in which humans merge with machines. It speculates that, by overcoming biological limitations, the combination of human and machine abilities will solve exigent problems like the inevitability of death, environmental degradation, and world hunger. The book goes further to consider the broader social and philosophical consequences of this paradigm shift.
- Lin, P., et al. (Eds.). (2017).* Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford University Press.
- This edited volume, aimed at academic audiences, policymakers, and the broader public, presents a global and interdisciplinary collection of essays that focuses on emerging issues in the field of “robot ethics,” which studies the effects of robotics on ethics, law, and policy. The volume is organized into four parts. The first concerns moral and legal responsibility and questions that arise in programming under moral uncertainty. The second addresses anthropomorphizing design and related issues of trust and deception within human-robot interactions. The third concerns applications ranging from love to war. The fourth speculates upon the possible implications and dangers of artificial beings that exhibit superhuman mental capacities.
- Lokhorst, G.-J. C. (2011). Computational meta-ethics: Towards the meta-ethical robot. Minds & Machines, 21, 261–274. https://doi.org/10.1007/s11023-011-9229-z
- Drawing on the same tools that have been used in computational metaphysics, this paper presents a proof of concept for a meta-ethical robot. This proof of concept serves as an opening to other pathways and themes that are mentioned as potential next steps in building a robot with the capacity to reason about its own reasoning. A robot with an extensive set of rules at its disposal can operate with great success within complex pattern-matching tasks but will not exhibit the meta-ethical capacities that the author argues could define the next phase of ethical robots.
- Mabaso, B. A. (2020). Computationally rational agents can be moral agents. Ethics and Information Technology. https://doi.org/10.1007/s10676-020-09527-1
- This article advances an argument and model for artificial moral agency based on a framework of computational rationality. Asserting that the capacities required for artificial moral agency, as well as the aspects of functional consciousness that underpin them, are computable, the author proposes a conceptual model for a bounded-optimal, computationally rational artificial moral agent. The author states that computational rationality can serve as an integrative element that combines the scientific and philosophical elements of artificial moral agency in a logically consistent way.
- Metcalf, J., et al. (2019). Owning ethics: Corporate logics, Silicon Valley, and the institutionalization of ethics. Social Research: An International Quarterly, 86(2), 449–476.
- This article presents the results of a qualitative study examining corporate efforts to advance ‘tech ethics’. It suggests that the endeavour is bureaucratically challenging, variously falling under the domains of legal review, human resources, engineering practices, and business models and strategy. “Ethics owners” are those tasked with “domain-jumping”, resolving debates about human values and the core logics of the tech industry, while also being fully embedded within them. The authors observed that ethics owners’ efforts are often hampered by ‘technological solutionism’, ‘beliefs in meritocracy’, and ‘market fundamentalism’.
- Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507. https://doi.org/10.1038/s42256-019-0114-4
- This article argues that the principled approach upon which AI ethics has converged is unlikely to succeed like its close analogue in medical ethics. This is because, compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. The article cautions against validating any newly emerging consensus around principles of AI ethics.
- Powers, T. M. (2006).* Prospects for a Kantian machine. IEEE Intelligent Systems, 21(4), 46–51. https://doi.org/10.1109/MIS.2006.77
- Discussing the potential of creating ethical machines based on rule-based ethical theories like Kantian ethics, this article suggests three accounts of a deontological ethical machine. The author states that many view rule-based ethical theories as promising for machine ethics because their judgements exhibit a computational structure. The author identifies challenges with each account proposed, including issues of triviality, asymmetry, excessive specificity, lack of semidecidability, and lack of priority for maxims. While difficult to surmount in the computational environment, the author points out that similar difficulties emerge in human attempts to engage in practical reasoning.
- Rességuier, A., & Rodrigues, R. (2020). AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720942541
- In this short commentary, the authors address the claim that AI ethics in its current instantiation is largely ineffective, and that ethical principles are especially prone to manipulation by the tech industry which leverages them to evade regulation. They suggest that AI ethics is being asked to do what it cannot do (i.e., regulation), and that it is better suited to “seeing the new”, or attending to how the world changes, including those changes brought about by artificial intelligence.
- Segun, S. T. (2021). From machine ethics to computational ethics. AI & Society, 36, 263–276. https://doi.org/10.1007/s00146-020-01010-1
- This paper argues that the appellation ‘machine ethics’ does not sufficiently capture the project of embedding ethics into the computational environment. The author analyzes the thematic distinction between robot ethics, machine ethics, and computational ethics, and offers a four-pronged justification as to why computational ethics presents a prima facie description of the project of embedding ethics into artificial intelligence systems. In making this case, the author categorizes attempts to program ethics into AI systems, attempts to build an artificial moral agent, and endeavors to simulate consciousness in machines as belonging under the purview of computational ethics.
- Tonkens, R. (2009). A challenge for machine ethics. Minds and Machines, 19(3), 421–438. https://doi.org/10.1007/s11023-009-9159-1
- Arguing that the development of Kantian artificial moral machines is itself anti-Kantian, this paper asserts that machine ethicists must look elsewhere for an ethic to implement into their machines. In making his case, the author approaches three main ideas of Kantian ethics: the foundations of moral agency, the role of the categorical imperative in moral decision making, and the concept of duty. The author explains how Kantian machines would not possess free will and would arguably not be viewed as ends in themselves; this creates a volitional inconsistency. In order to be treated as an end in itself, a Kantian machine would need to possess dignity, be deserving of respect by all humans (i.e., all other moral agents), and be valued as an equal member in the moral community. These conditions not being met, the proposed Kantian ethic is problematic in the machine ethics case.
- Wallach, W., & Allen, C. (2009).* Moral machines: Teaching robots right from wrong. Oxford University Press.
- This book argues that as robots are given more and more responsibility, there is a corresponding imperative to make them capable of morally aware decision-making. Approaching both distinctions and integrations of top-down and bottom-up design approaches, the authors acknowledge that the context involved in real-time moral decisions, as well as the complex intuitions people have about right and wrong, make the prospect of reducing ethics to a logically consistent principle or set of programmable laws at best suspect, and at worst irrelevant. The book goes further to assert that while achieving full moral agency for machines is a distant goal, the imperative is already urgent enough to require measures which introduce basic moral considerations into robotic decision-making.
- Wallach, W., et al. (2008).* Machine morality: Bottom-up and top-down approaches for modelling human moral faculties. AI & Society, 22(4), 565–582. https://doi.org/10.1007/s00146-007-0099-0
- This article outlines the values and limitations of bottom-up and top-down approaches to constructing morally intelligent artificial agents. According to the article, bottom-up approaches are characterized by the combination of subsystems into a complex assemblage which models behavior that is consistent with ethical principles. By contrast, the article explains that top-down approaches involve the direct computerization of ethical principles as prescriptive rules.
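As a rough sketch of the contrast (ours, not the article’s; the rules, features, and labels are hypothetical), a top-down agent checks actions against explicitly coded prescriptive rules, while a bottom-up agent learns permissibility judgments from labeled examples.

```python
# Sketch (ours, not the article's) contrasting the two design approaches.
# Rules, features, and training examples are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Top-down: ethical principles computerized directly as prescriptive rules.
FORBIDDEN_ACTIONS = {"deceive_user", "withhold_consent_info"}

def top_down_permissible(action: str) -> bool:
    return action not in FORBIDDEN_ACTIONS

# Bottom-up: behavior consistent with ethical principles emerges from
# training on labeled examples rather than from stated rules.
X = [[1, 0], [0, 1], [1, 1], [0, 0]]  # e.g., [causes_harm, has_consent]
y = [0, 1, 0, 1]                      # 1 = judged permissible
bottom_up = DecisionTreeClassifier(max_depth=2).fit(X, y)

print(top_down_permissible("deceive_user"))  # False
print(bottom_up.predict([[0, 1]]))           # [1], i.e., permissible
```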
- Weinberger, D. (2011).* Too big to know: Rethinking knowledge now that the facts aren’t the facts, experts are everywhere, and the smartest person in the room is the room. Basic Books.
- This book argues that Internet Era shifts in the production, exchange, and storage of knowledge—far from signalling a systemic collapse—present a fundamental epistemic breakthrough. The book contends that although the authority of ordinary facts, books, and experts has depreciated in the transition, “networked knowledge” permits knowledge-seekers to attain better understanding and make more informed decisions.
Chapter 3. Ethical Issues in Our Relationship with Artificial Entities (Judith Donath)
https://www.doi.org/10.1093/oxfordhb/9780190067397.013.3
- Bankins, S., & Formosa, P. (2019).* When AI meets PC: Exploring the implications of workplace social robots and a human-robot psychological contract. European Journal of Work and Organizational Psychology, 29(2), 215–229. https://doi.org/10.1080/1359432x.2019.1620328
- The authors foresee the rise of social robots, also known as humanoid robots, as psychological contract partners for human employees. Through a thought experiment, the authors examine the potential impacts of psychological contracts between employees and AI technologies, including unequal receipt-benefit exchanges and the ‘deskilling’ of human employees.
- Bisconti Lucidi, P., & Nardi, D. (2018). Companion robots: The hallucinatory danger of human-robot interactions. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 17–22). Association for Computing Machinery. https://doi.org/10.1145/3278721.3278741
- Focusing mainly on robots caring for the elderly, this paper analyzes ethical concerns raised by the rise of companion robots to distinguish which concerns are directly ascribable to robotics, and which are instead pre-existent. The authors argue that one concern, the “deception objection,” namely the ethical unacceptability of deceiving the user about the simulated nature of the robot’s behavior, is inconsistently formulated. The authors’ central argument is that the main concern about companion robots is the simulation of a human-like interaction in the absence of an autonomous robotic horizon of meaning.
- Bourne, C. (2019). AI cheerleaders: Public relations, neoliberalism and artificial intelligence. Public Relations Inquiry, 8(2), 109–125.
- The article combines public relations (PR) theory, communications theory, and political economy to consider the changing shape of neoliberal capitalism as AI becomes naturalized as “common sense” and as a “public good.” The author explores how PR supports AI discourses, including promoting AI in national competitiveness and promoting “friendly” AI to consumers, while promoting Internet inequalities.
- Briggs, G., et al. (2022). Why and how robots should say ‘no.’ International Journal of Social Robotics, 14(2), 323–339. https://doi.org/10.1007/s12369-021-00780-y
- This paper argues that for AI systems to be able to consider and act upon ethical choices, they must be able to reject commands from humans. Thus, a human instructing a robot could encounter a robot’s refusal to comply in cases when human commands are seen as immoral. The authors report on findings that show that human impressions of the AI system are improved when the harshness of the phrasing of the AI’s rejection is proportional to the severity of the moral offense implied by the command.
- Broom, D. M. (2014).* Sentience and animal welfare. Centre for Agriculture and Biosciences International.
- This book focuses on sentience—the ability to feel, perceive, and experience—in order to answer questions raised by the animal welfare debate, such as whether animals experience suffering in life and death. The author defines aspects of sentience such as consciousness, memory, and emotions, and discusses brain complexity in detail. Looking at sentience from a developmental perspective, they analyze when during an individual’s growth sentience can be said to appear using evidence from a range of studies investigating embryos, fetuses, and young animals to form an overview of the subject.
- Calo, R. (2015).* Robotics and the lessons of cyberlaw. California Law Review, 103(3), 513–563.
- This article examines the implications of the introduction of robotics for cyberlaw and policy. The author argues that robotics will prove exceptional in the sense of occasioning systemic changes to law, institutions, and the legal academy. However, the author also argues that many core insights and methods of cyberlaw will prove crucial in integrating robotics.
- Carpenter, J. (2013). Just doesn’t look right: Exploring the impact of humanoid robot integration into explosive ordnance disposal teams. In R. Luppicini (Ed.), Handbook of research on technoself: Identity in a technological society (pp. 609–636). IGI Global.
- This chapter analyzes the potential short- and long-term outcomes for Explosive Ordnance Disposal (EOD) specialists who work closely with anthropomorphic robots daily. The author argues that designing these robots to be even more human-like may have ethical and emotional consequences for EOD specialists.
- Ceha, J., et al. (2022). Identifying functions and behaviours of social robots for in-class learning activities: Teachers’ perspective. International Journal of Social Robotics, 14, 747–761. https://doi.org/10.1007/s12369-021-00820-7
- This article considers ways in which robots can augment elementary and middle school educational practices. The authors discuss both teacher-robot and student-robot interactions and the robot activities and behaviors that would be most effective at helping teachers in the classroom.
- Coeckelbergh, M. (2010). Artificial companions: Empathy and vulnerability mirroring in human-robot relations. Studies in Ethics, Law, and Technology, 4(3), 1–17.
- This article argues that the possibility and future of robots as companions depends, among other things, on the robots’ capacity to be a recipient of human empathy. One necessary condition for this to happen is the robots’ capacity to mirror human vulnerabilities. The author refutes the objection that vulnerability mirroring raises the ethical issue of deception, showing that the assumptions underlying the objection cannot be easily justified, given the importance of appearance in social relations.
- Chesterman, S. (2020). Artificial intelligence and the limits of legal personality. International and Comparative Law Quarterly, 69(4), 819–844. https://doi.org/10.1017/S0020589320000366
- Many researchers argue that due to their rapid advancement, AI systems should be entitled to some sort of legal status comparable to humans. The author finds that while many legal systems are able to create such statuses, there is a lack of theoretical and empirical evidence to support the claims that AI systems are entitled to them.
- Coghlan, S., et al. (2019). Could social robots make us kinder or crueller to humans and animals? International Journal of Social Robotics, 11(5), 741–751.
- The authors consider both arguments for and against the kind treatment of robots. They observe that this debate is underexplored and side with those who argue that social robots may causally affect virtue and, as such, should be treated as social beings. Their observations contribute to the fields of robot engineering and human-computer interaction.
- Colledanchise, M. (2021). A new paradigm of threats in robotics behaviors. arXiv:2103.13268
- This article assesses the security threats that arise from advances in robotics. The author emphasizes the potential for humans to put robots to malicious use by tampering with them or manipulating them into deliberately doing harmful or illegal things. The author also finds that these risks can be mitigated through task planning, skilled operators, and rescue squads.
- Damiano, L., & Dumouchel, P. (2018). Anthropomorphism in human–robot co-evolution. Frontiers in Psychology, 9, 468. https://doi.org/10.3389/fpsyg.2018.00468
- This article proposes a theoretical perspective that characterizes anthropomorphism as a basic mechanism of interaction and rebuts arguments that condemn “anthropomorphism-based” social robots a priori. To address the relevant ethical issues, the authors apply an experimentally-based ethical approach to social robotics, titled “synthetic ethics.” This approach aims at allowing humans to use social robots for two key goals: self-knowledge and moral growth.
- DePaulo, B. M., et al. (1996).* Lying in everyday life. Journal of Personality and Social Psychology, 70(5), 979–995.
- This article compares two diary studies of lying, in which 77 college students reported telling two lies a day and 70 community members reported telling one. Consistent with the view of lying as an everyday social interaction process, participants said that they did not regard their lies as serious and did not plan them much or worry about being caught. Still, social interactions in which lies were told were less pleasant and less intimate than those in which no lies were told.
- Donath, J. (2019).* The robot dog fetches for whom? In Z. Papacharissi (Ed.), A networked self and human augmentics, artificial intelligence, sentience (pp. 10–24). Routledge.
- This article examines the landscape of social robots, including robot dogs, and their effect on human empathy and relationships. Particularly, the author questions whom robot companions will truly serve in a future where they are ubiquitous.
- Eimler, S. C., et al. (2010). Prerequisites for human-agent- and human-robot interaction: Towards an integrated theory. University of Duisburg-Essen.
- This article asserts that examining the long-term effects of relationships between humans and robots is extremely important. The authors argue that this type of study is possible only if the complexities of human-human interactions are first understood. The authors offer a robust framework for examining these interactions, including verbal and nonverbal behavior, that can be applied to human-robot relationships.
- Finkel, M., & Krämer, N.C. (2022). Humanoid robots – Artificial. Human-like. Credible? Empirical comparisons of source credibility attributions between humans, humanoid robots, and non-human-like devices. International Journal of Social Robotics. https://doi.org/10.1007/s12369-022-00879-w
- The authors examine how various demographic and psychological characteristics influence attitudes towards the use of robots at work, and the evaluation of them as either tools or colleagues. The study includes factors such as gender, age, and income and concludes with policy recommendations for how to better integrate robots into human work environments and tailor their adoption to individual workers’ preferences and attitudes towards robots. The authors note that whereas some workers perceive the use of robots as a workload- and stress-reducer, others may perceive work robotization more negatively.
- Galanter, P. (2020). Towards ethical relationships with machines that make art. Artnodes, 26, 1–9. https://doi.org/10.7238/a.v0i26.3371
- This article considers humans’ relationships with AI entities that generate art, both those that are not currently sentient and those that someday might be. Viewed through the lens of complexism, with credit for authorship treated as a descriptive norm, the obligation to credit artificial entities for authorship of art becomes less a moral obligation to the machine and more an obligation to ourselves and to accurate citation. Should machines become aware, the author suggests that the moral concept of patiency be extended to them, i.e., that they would be deserving of moral consideration.
- Godfrey-Smith, P. (2016).* Other minds: The octopus and the evolution of intelligent life. William Collins.
- This book explores the evolution and nature of consciousness, explaining that complex active bodies that enable and require a measure of intelligence have evolved three times: in arthropods, cephalopods, and vertebrates. The author reflects in particular on the nature of cephalopod intelligence, constrained by the octopus’s short lifespan and embodied in large part in its partly autonomous arms, which contain more nerve cells than its brain.
- Hashim, R., & Yussof, H. (2017). Humanizing humanoids towards social inclusiveness for children with autism. Procedia Computer Science, 105, 359–364. https://doi.org/10.1016/j.procs.2017.01.234
- The authors conducted a literature review, interviews, and qualitative observations of teachers in Malaysia to understand the requisites for humanoids to serve children with special needs successfully within educational environments. Their research finds that acceptance, evidence of success, influence, attitude, viability, and compliance (religious compliance, suitability to cultural values, and compliance with ethical norms) are necessary for humanizing humanoids engineered for the social skill augmentation of children living with autism.
- Johnson, D. G., & Verdicchio, M. (2018). Why robots should not be treated like animals. Ethics and Information Technology, 20(4), 291–301.
- The animal-robot analogy is one of the most commonly used in attempting to frame interactions between humans and robots, and it also tends to push in the direction of blurring the distinction between humans and machines. The authors argue that, despite some shared characteristics, analogies with animals are misleading when it comes to thinking about the moral status of humanoid robots, legal liability, and the impact of treatment of humanoid robots on how humans treat one another.
- Kaminsky, I. (2021). Do robots dream of escaping? Narrativity and ethics in Alex Garland’s Ex-Machina and Luke Scott’s Morgan. AI & Society, 36(1), 349–359. https://doi.org/10.1007/s00146-020-01031-w
- This paper addresses the dilemma of whether the humanoids in the films Ex-Machina (2014) and Morgan (2016) possess high levels of artificial consciousness. The author considers whether humanoids are capable of possessing an inner life and concludes that, if they are, they ought to be treated as moral agents endowed with the same rights and considerations as humans.
- Kaplan, F. (2004).* Who is afraid of the humanoid? Investigating cultural differences in the acceptance of robots. International Journal of Humanoid Robotics, 1(3), 465–480.
- This article presents a preliminary exploration of several aspects of Japanese culture and a survey of the most important myths and novels involving artificial beings in Western literature. The author examines particular cultural features that may account for contemporary differences in our behavior towards humanoids.
- Kappas, A., et al. (2020). Communicating with robots: What we do wrong and what we do right in artificial social intelligence, and what we need to do better. In R. J. Sternberg & A. Kostić (Eds.), Social intelligence and nonverbal communication (pp. 233–254). Palgrave Macmillan.
- This chapter discusses the challenges and pitfalls of interaction between humans and machines with a view to (artificial) social intelligence, at a time of challenging interdisciplinary research. The authors present concrete examples of such research and point out lacunae in the empirical data.
- Kontogiorgos, D., et al. (2020). Embodiment effects in interactions with failing robots. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. https://dl.acm.org/doi/10.1145/3313831.3376372
- The authors performed a study examining how a robot’s embodiment affects a person’s desire to keep interacting with it after a task failure. Users perceived human-like robots more favorably than smart-speaker robots, rating them higher in intelligence, and, unlike with the smart speakers, users’ desire to interact with the human-like robots did not decrease after a failure occurred in an interaction.
- Nyholm, L., et al. (2021). Users’ ambivalent sense of security with humanoid robots in healthcare. Informatics for Health and Social Care, 46(2), 218–226. https://doi.org/10.1080/17538157.2021.1883027
- This article shares the results of a study of users’ sense of security with humanoid robots in healthcare. Drawing on twelve semi-structured interviews with five women and seven men aged 24–77, the authors identified four ambivalent views: humanoid robots are both reliable and unreliable, both safe and unsafe, both likable and scary, and both caring and uncaring. These findings suggest priorities for engineers to address when creating humanoid robots for healthcare, so as to mitigate the concerns identified regarding patients’ sense of security.
- Nyholm, S., & Smids, J. (2019). Can a robot be a good colleague? Science and Engineering Ethics. https://doi.org/10.1007/s11948-019-00172-6
- This article compares the question of whether robots can be good colleagues to the more widely discussed questions of whether robots can be our friends or romantic partners. The authors argue that, on a behavioral level, robots can fulfill many of the criteria typically associated with being a good colleague. They further assert that, in comparison with the more demanding ideals of being a good friend or a good romantic partner, it is comparatively easier for a robot to live up to the ideal of being a good colleague.
- Parviainen, J., & Rantala, J. (2022). Chatbot breakthrough in the 2020s? An ethical reflection on the trend of automated consultations in health care. Medicine, Health Care and Philosophy, 25(1), 61–71. https://doi.org/10.1007/s11019-021-10049-w
- This article considers the changing ethical norms and impacts of integrating AI chatbots into healthcare delivery in the wake of the COVID-19 pandemic. The authors call for a change to traditional approaches in professional healthcare ethics to account for this technological wave and, after weighing numerous arguments for and against, come down overall in favor of the use of chatbots in healthcare.
- Remmers, P. (2019). The ethical significance of human likeness in robotics and AI. Ethics in Progress, 10(2), 52–67.
- This article argues that there are no serious ethical issues in the theoretical aspects of technological human likeness. Although human likeness may not be ethically significant on the philosophical and conceptual levels, the author suggests that strategies for using anthropomorphism in the technological design of human-machine collaborations are ethically significant, because artificial agents are specifically designed to be treated in ways we usually treat humans.
- Remmers, P. (2021). The artificial nature of social robots: A phenomenological interpretation of two conflicting tendencies in human-robot interaction. In M. Nørskov, J. Seibt, & O. S. Quick (Eds.), Culturally Sustainable Social Robotics: Proceedings of Robophilosophy 2020 (pp. 78–85). IOS Press.
- This article argues that the effects of anthropomorphism or zoomorphism (designing robots as subjects of agency) motivate two opposing tendencies within the ethics of robots and their relationship to humans. The first is the ‘rational’ tendency, which denounces anthropomorphism as mere illusion; the second is the ‘visionary’ tendency, which takes the relational reality between humans and robots seriously. The author claims that this contention cannot be mediated through an analogy between the treatment of robots and the perception of objects under a dominant theory of image perception.
- Renzullo, D. (2019). Anthropomorphized AI as capitalist agents: The price we pay for familiarity. Montreal AI Ethics Institute. https://montrealethics.ai/anthropomorphized-ai-as-capitalist-agents-the-price-we-pay-for-familiarity/
- This report argues that the anthropomorphic design of AI technology is used as a channel to perpetuate capitalism through a process of social acclimatization. AI is designed to elicit attachment so that the technology becomes integrated and relied upon in everyday life.
- Schmetkamp, S. (2020). Understanding A.I. — Can and should we empathize with robots? Review of Philosophy and Psychology, 11(4), 881–897. https://doi.org/10.1007/s13164-020-00473-x
- The author explores epistemological and normative approaches to gauging the scope and limits of empathy with robots. The author finds that comparing robots to fictional characters increases our capacity to empathize with humanoids, and argues that we should do so in order to gain a new perspective, for both personal and research purposes.
- Sheridan, T. B. (2020). A review of recent research in social robotics. Current Opinion in Psychology. https://doi.org/10.1016/j.copsyc.2020.01.003
- This review finds that, both because of its newness and because of its narrower psychological rather than technological emphasis, research in social robotics currently tends to be concentrated in a single journal and a single annual conference. The review categorizes such research into three areas: (1) Affect, Personality, and Adaptation; (2) Sensing and Control for Action; and (3) Assistance to the Elderly and Handicapped.
- Singer, P. (2011).* Practical ethics. Cambridge University Press.
- This book is a classic introduction to the study of practical ethics. The focus of the book is the application of ethics to difficult and controversial social questions: equality and discrimination by race, sex, ability, or species; abortion, euthanasia, and embryo experimentation; the moral status of animals; political violence and civil disobedience; overseas aid and the obligation to assist others; responsibility for the environment; and the treatment of refugees. The book is structured to show how contemporary controversies often have deep philosophical roots, and it presents a unique ethical theory that can be applied consistently to all of these practical cases.
- Turing, A. (1950).* Computing machinery and intelligence. Mind, 59(236), 433–460.
- This is a seminal paper on the topic of artificial intelligence, the first to introduce Alan Turing’s concept of what is now known as the Turing Test to the general public. The article asks whether machines can think and, replacing that question with the “imitation game,” investigates the prospect of digital computers that could play the game well enough to be indistinguishable from a human interlocutor.
- Turkle, S. (2007).* Authenticity in the age of digital companions. Interaction Studies, 8(3), 501–517.
- This paper examines watershed moments in the history of human–machine interaction, focusing on the pertinence of relational artifacts to our collective perception of aliveness and of life’s purposes, and on the implications of relational artifacts for human relationships. The paper argues that the exploration of human–robot encounters leads to questions about the morality of creating believable digital companions that are evocative but not authentic.
- Vanman, E. J., & Kappas, A. (2019). “Danger, Will Robinson!” The challenges of social robots for intergroup relations. Social and Personality Psychology Compass, 13(8). https://doi.org/10.1111/spc3.12489
- This article explores the paradox created by human-like robots: they simultaneously generate greater empathy than traditional robots while also eliciting greater suspicion, particularly about their ability to deceive. Discussing these findings from an intergroup-relations perspective, the authors propose three research questions that they believe social psychologists are ideally suited to address.
- Wallkötter, S., et al. (2020). A robot by any other frame: Framing and behaviour influence mind perception in virtual but not real-world environments. In T. Belpaeme & J. Young (Eds.), Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 609–618). Association for Computing Machinery.
- In a series of three experiments on mind perception in human-robot interaction, the authors identify two factors that could influence mind perception and moral concern for robots: how the robot is introduced (framing) and how the robot acts (social behavior). They find that framing and behavior each independently influence participants’ mind perception. However, when both variables were combined in a real-world experiment, these effects failed to replicate, suggesting a third factor: the online versus real-world nature of the interaction.
- Weber-Guskar, E. (2021). How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners. Ethics and Information Technology, 23(4), 601–610. https://doi.org/10.1007/s10676-021-09598-8
- This paper considers the morality of establishing an affective emotional relationship with an EAI system – an AI system designed to elicit and recognize human emotions and, sometimes, to simulate them. The author argues against the intuitive sense that such relationships are problematic, rebutting three arguments: from “self-deception,” from “lack of mutuality,” and from “moral negligence.” The author’s defense of affective relationships with EAI rests on the arguments that humans need not be deceived into believing that AI systems have emotions in order to desire an affective relationship with them, that mutual emotional engagement is not necessary for a good relationship (as in parent-infant and human-animal relationships), and that even if relationships with EAI skew people’s moral landscapes, the skew is not necessarily immoral.
- Weizenbaum, J. (1967).* Contextual understanding by computers. Communications of the ACM, 10(8), 474–480.
- This paper discusses a further development of a computer program, ELIZA, capable of conversing in natural language, stressing the importance of context to both human and machine understanding. The paper argues that the adequacy of the level of understanding achieved in a particular conversation depends on the purpose of that conversation, and that absolute understanding on the part of either humans or machines is impossible.
- Weizenbaum, J. (1976).* Computer power and human reason. W. H. Freeman and Company.
- This book examines the sources of the computer’s powers and offers evaluative explorations of what computers can do, cannot do, and should not be employed to do. The author argues that while artificial intelligence may be possible, we should never allow computers to make important decisions, because computers will always lack human qualities, such as compassion and wisdom, that are necessary for genuine choice.
- Yang, J., & Chew, E. (2021). A systematic review for service humanoid robotics model in hospitality. International Journal of Social Robotics, 13(6), 1397–1410. https://doi.org/10.1007/s12369-020-00724-y
- The authors collected data from case studies and experimental interviews, finding that service robots within hospitality should focus on four aspects: interaction between users and robots, artificial intelligence-based service models, user data protection, and responsibility allocation for robot management. They are optimistic about robots’ capacity to integrate into a hospitality industry in which there is high demand for labor and for diverse language skills.
An asterisk (*) after a reference indicates that it was included among the Further Readings listed at the end of the Handbook chapter by its author.