III. Concepts & Issues

Chapter 8. We’re Missing a Moral Framework of Justice in Artificial Intelligence: On the Limits, Failings, and Ethics of Fairness (Matthew Le Bui and Safiya Umoja Noble)

https://www.doi.org/10.1093/oxfordhb/9780190067397.013.9

  • Abdalla, M., & Abdalla, M. (2020). The grey hoodie project: Big Tobacco, Big Tech, and the threat on academic integrity. arXiv:2009.13676
    • In this paper, the authors compare Big Tech’s power to influence academic research with that of Big Tobacco. The authors argue that, much like Big Tobacco in the past, Big Tech increasingly funds academic research, to the point where a majority of members of the computer science departments at four top universities have received some form of funding from major technology companies. The authors argue that this may have implications for academic freedom and the continued development of ethical AI systems.
  • Bartoletti, I. (2020). An artificial revolution: On power, politics and AI. Black Spot Books.
    • This book suggests that AI be viewed as power with associated power structures, including structures of dominance and oppression, with impacts that include already-observed algorithmic chauvinism and racism. The author suggests that no simple solution exists, because of underlying issues about “what, or who, AI is for in the first place.” The author calls on media, institutions, companies, and governments to resist the oppression of AI without rejecting AI entirely.
  • Benjamin, R. (2019).* Race after technology: Abolitionist tools for the New Jim Code. Polity. https://www.ruhabenjamin.com/race-after-technology
    • Using critical race theory, this book analyzes how current technologies can and have reinforced White supremacy and increased social inequalities. The concept of “The New Jim Code” is introduced as a means of describing how a wide range of discriminatory designs can (a) encode inequity by amplifying racial hierarchies, (b) ignore and replicate social divisions, and (c) inadvertently reinforce racial biases while intending to fix them. This book concludes with an overview of conceptual strategies, including tech activism and abolitionist tools, that might be used to disrupt and rectify current and future technological design.
  • Binns, R. (2018).* Fairness in machine learning: Lessons from political philosophy. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (pp. 149-159). PMLR. http://proceedings.mlr.press/v81/binns18a.html
    • This article discusses contemporary issues of fairness and ethics in machine learning and artificial intelligence, arguing that these disciplines have been increasingly formalized around Enlightenment-era philosophies concerning discrimination, egalitarianism, and justice as parts of moral and political philosophy. The author concludes that the historical study of such frameworks can illuminate contemporary framings and assumptions. 
  • Birhane, A., & Van Dijk, J. (2020). Robot rights? Let’s talk about human welfare instead. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 207–213). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375855
    • This paper presents a review of current literature advancing the argument for robot rights. The authors turn away from the question of whether robots should be conferred or denied rights and instead focus on whether robots can have rights in the first place. The authors argue that robots are artifacts emerging from human mediation and, therefore, their rights should be considered in the context of power relations in global societies. They further argue that there are more pressing ethical and social issues relating to new machines, and the debate for robot rights draws necessary attention away from these important discussions.
  • Browne, S. (2015). Dark matters: On the surveillance of blackness. Duke University Press. https://www.dukeupress.edu/dark-matters
    • This book investigates surveillance practices through the conditions of blackness, showing how contemporary surveillance technologies are informed by historical racial formations, such as the policing of black lives through slavery, branding, runaway slave notices, and lantern laws. The author draws from black feminist theory, sociology, and cultural studies to describe surveillance as a normalized material and discursive practice that reifies boundaries, bodies, and borders along racial lines.
  • Bucher, T. (2018). If… Then: Algorithmic power and politics. Oxford University Press. http://dx.doi.org/10.1093/oso/9780190493028.001.0001
    • This book investigates the political economy of algorithms and other recently developed informational infrastructures, such as search engines and social media. Arguing that we ‘live algorithmic lives,’ the author describes how society is shaped by the political and commercial institutions that design technology. Using case studies to explore the material, discursive, and cultural dimensions of software, the book argues that the most important aspects of algorithms are not in their technical details, but rather in how they are used to define social and political practices.
  • Calo, R. (2017). Artificial intelligence policy: A roadmap. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3015350
    • The author of this paper provides a literature review of recent ethical principles for artificial intelligence. However, Calo argues that these principles should be supplemented by state regulation of AI technologies. The author lays out five critical principles for AI policy: justice and equity through inclusivity; transparent enforcement; certification; respect for the privacy of those involved in the creation and development of AI; and taxation to redistribute the wealth created by AI.
  • Chun, W. H. K. (2008).* Control and freedom: Power and paranoia in the age of fiber optics. MIT Press. https://mitpress.mit.edu/books/control-and-freedom
    • This book uses media archaeology and visual culture studies to examine the current political and technological coupling of freedom and control, tracing the emergence of the Internet as a mass medium of communication. Deleuze and Foucault ground the analysis of contemporary technologies such as webcams and facial recognition software. The author argues that control and freedom on the Internet are entwined in a network driven by sexuality and race, traces the origins of governmental regulation online to cyberporn, and concludes that the Internet’s potential for democracy is found in our mutual exposure to others we cannot control.
  • Clark, J., & Hadfield, G. K. (2019). Regulatory markets for AI safety. arXiv:2001.00078
    • The authors provide a review of different regulatory frameworks for AI. They argue that policymakers have had a slow and challenging job regulating this market because of corporate influence and a lack of technical expertise. The authors propose regulatory markets as an alternative, in which independent third-party regulators would audit companies against a set of principles set by governments and issue certifications to corporations.
  • Daniels, J. (2009).* Cyber racism: White supremacy online and the new attack on civil rights. Rowman & Littlefield Publishers.
    • This book explores white supremacy on the Internet, tracing its origins from print to the online era. The author describes ‘open’ and ‘cloaked’ sites through which white supremacist organizations have translated their publications online, interviewing small groups of teenagers as they navigate and attempt to comprehend the content. The author provides a discussion of cyber racism that addresses common assumptions about the inherent democratic nature of the Internet and its capacity as a recruitment tool for white supremacist groups. The book concludes with an analysis challenging conventional wisdom about racial equity, civil rights, and the Internet.
  • Daniels, J., et al. (2019). Advancing racial literacy in tech. Data & Society. https://datasociety.net/library/advancing-racial-literacy-in-tech/ 
    • In response to growing concerns about a lack of diversity training in the tech industry, this paper presents an overview of racial literacy practices designed for adoption by organizations. The authors discuss the role that tech products, company culture, and supply chain practices play in perpetuating structural racism, as well as strategies for capacity building grounded in intellectual understanding, emotional intelligence, and action. 
  • Davis, J. L., et al. (2021). Algorithmic reparation. Big Data & Society, 8(2), 1-12. https://doi.org/10.1177/20539517211044808
    • The authors of this paper argue that existing techniques to increase fairness in machine learning, based on mathematical criteria like classification parity or calibration standards, fall short and rest on an “algorithmic idealism” that cannot address systemic, intersectional stratifications. The authors instead propose the practice of “algorithmic reparation,” which utilizes reparative algorithms rooted in theories of intersectionality and which serves as a foundation for “building, evaluating, adjusting, and when necessary, omitting and eradicating machine learning systems.” A toy illustration of the parity and calibration checks the authors critique follows this entry.
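As context for the critique above, the sketch below shows what the two standard “idealist” checks compute. It is a minimal, self-contained illustration on synthetic data, not code from Davis et al.; the group, score, and outcome variables are placeholders invented for the example.

```python
# A minimal sketch (not from Davis et al.) of the two "algorithmic idealism"
# checks the paper critiques; all data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                          # two demographic groups
score = rng.uniform(0, 1, n)                           # model risk scores
decision = (score > 0.5).astype(int)                   # thresholded decisions
outcome = (rng.uniform(0, 1, n) < score).astype(int)   # realized outcomes

# Classification parity: positive-decision rates should match across groups.
gap = abs(decision[group == 0].mean() - decision[group == 1].mean())
print(f"positive-rate gap between groups: {gap:.3f}")

# Calibration: within the same score band, realized outcome rates should
# match across groups.
band = score > 0.5
for g in (0, 1):
    rate = outcome[band & (group == g)].mean()
    print(f"group {g} outcome rate in the high-score band: {rate:.3f}")
```

Both checks can pass on paper while, on the authors' account, leaving the systemic stratifications that produced the data untouched.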
  • Dixon-Román, E., & Parisi, L. (2020). Data capitalism and the counter futures of ethics in artificial intelligence. Communication and the Public, 5(3–4), 116–121. https://doi.org/10.1177/2057047320972029
    • This paper considers the effect of colonial capital on the epistemological framework through which the ethics of artificial intelligence is analyzed. It argues that to address ethical and sociopolitical concerns in AI, technosocial systems must be understood in the context of data capitalism, which views data in terms of its future value. In this context, the paper rejects universalizing assumptions about technology in favor of examining the diverse impacts of AI applications and allowing for more fundamental redesign of sociotechnical systems.
  • Eubanks, V. (2018).* Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press. https://virginia-eubanks.com/books/
    • Considering the historic context of austerity, this book documents the use of digital technologies for distributional decision-making in social service delivery to poor and disadvantaged populations in the United States. Using ethnographic and interview methods, the author investigates the impact of automated systems such as those governing Medicaid, Temporary Assistance for Needy Families, and electronic benefit transfer cards, finding that such systems, while expensive, are often less effective and regularly reproduce and aggravate bias, inequity, and state surveillance of the poor. The author speaks to legacy-system prejudice and the ‘social specs’ that underlie our decision systems and data-sifting algorithms, and offers a number of participatory design solutions, including empathy through co-design, transparency, access, and control of information.
  • Floridi, L., et al. (2020). How to design AI for social good: Seven essential factors. Science and Engineering Ethics, 26, 1771-1796. https://doi.org/10.1007/s11948-020-00213-5
    • In this paper, the authors discuss seven essential factors for what they call “AI for Social Good” or “AI4SG.” These factors are: (1) the falsifiability and incremental deployment of algorithms, (2) creating safeguards against their manipulation, (3) respect for the autonomy of users, (4) transparency and explainability, (5) consent and privacy protections, (6) fairness, and (7) providing users with the capacity to make sense of what they are interacting with.
  • Gandy, O. H. (1993).* The panoptic sort: A political economy of personal information. Westview Press. https://doi.org/10.1002/9781444395402.ch20   
    • In this book the author describes the political economy of personal information (PI), documenting the various ways in which PI is classified, sorted, stored, and capitalized upon by institutions of power. The author discusses personal privacy in the context of individual autonomy, collective agency, and bureaucratic control, describing these operations as panoptical sorting processes.
  • Greene, D., et al. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In Proceedings of the 52nd Hawaii International Conference on System Sciences. https://scholarspace.manoa.hawaii.edu/bitstream/10125/59651/0211.pdf   
    • This paper uses frame analysis to analyze recent high-profile value statements endorsing ethical design for artificial intelligence and machine learning (AI/ML). The authors conclude that vision statements for ethical AI/ML, in their adoption of specific language drawn from critics of the field, have become limited, expert-driven, and technologically deterministic.
  • Hoffmann, A. L. (2019).* Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7), 900-915. https://doi.org/10.1080/1369118X.2019.1573912
    • This article critiques fairness and antidiscrimination efforts in AI, discussing how technical attempts to isolate and remove ‘bad’ data and algorithms tend to overemphasize ‘bad actors’ and ignore intersectional or broader sociotechnical contributions. The author describes how this leads to reactionary technical solutions that fail to displace the underlying logic that produces unjust hierarchies, thus failing to address justice concerns. 
  • Hoffmann, A. L. (2017). Data, technology, and gender: Thinking about (and from) trans lives. In Spaces for the future. Routledge. https://doi.org/10.4324/9780203735657-1
    • This book chapter discusses how data practices have situated and defined gender, with a particular focus on transgender identity and online discrimination perpetuated by harmful design. The author describes how data-driven platforms are used by many transgender activists to bring attention to the concerns of minority populations; however, these platforms have also been used to promote sexism and gender inequality.
  • Kleine, M. S., & Lucena, J. C. (2021). The world of “engineering for good”: Towards a mapping of research, teaching, and practice of engineers doing good. In Middle Atlantic ASEE Section Spring 2021 Conference. American Society for Engineering Education. https://peer.asee.org/the-world-of-engineering-for-good-towards-a-mapping-of-research-teaching-and-practice-of-engineers-doing-good
    • This paper analyzes the existing landscape of the “engineering for good” community, including the work done under a variety of similar labels in academic, corporate, and nonprofit settings. It presents a historical account of the development of the “engineering for good” movement and suggests future steps to create a community-based mapping of programs and initiatives that fall under this movement, to identify common themes, strategies, and opportunities.
  • Köstler, L., & Ossewaarde, R. (2022). The making of AI society: AI futures frames in German political and media discourses. AI & Society, 37(1), 249–263. https://doi.org/10.1007/s00146-021-01161-9
    • This paper argues that the German federal government’s framing of AI, including in its 2018 “Artificial Intelligence Strategy,” serves to uphold the status quo, including existing power dynamics and imbalances. The authors suggest a “close unity of politics and industry,” by which media actors echo the government’s positive and benefit-oriented rhetoric on AI with minimal criticism. They describe a number of “frames,” or narratives, through which AI is presented, including “AI as key to the future,” “AI as German AI,” “AI as panacea,” and “Ethical AI as fig leaf.” The authors aim to open debate on the use of technological innovation in the creation of futures focused on public interests.
  • Krafft, P. M., et al. (2020). Defining AI in policy versus practice. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 72–78). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375835
    • This research paper focuses on the many different definitions of artificial intelligence in the policy realm. The authors argue that definitional ambiguity in AI prevents effective regulation since law and policy require consensus around practical coordination definitions. The paper presents a review of policy reports and interviews with AI practitioners about their definitions of artificial intelligence and adjacent subjects. The authors find that AI practitioners are concerned about the technology’s functionalities, while policymakers are concerned with their future applications. They conclude that this latter approach may overlook essential issues related to AI’s present conditions and its current impacts on society.
  • Lewis, T., et al. (2018).* Digital defense playbook: Community power tools for reclaiming data. Our Data Bodies.
    • Our Data Bodies is a collaborative project that combines community-based organizing, capacity-building, and academic research focused on how marginalized communities are impacted by data-based technologies. This workbook presents research findings concerning data, surveillance, and community safety, and includes education activities using co-creation methods and tools toward data justice and data access for equity. 
  • Mills, C. W. (2017).* Black rights/White wrongs: The critique of racial liberalism. Oxford University Press.
    • This book of essays examines racial liberalism from a historical perspective, reconceptualizing justice and fairness in ways that reimagine social structures rather than remaining limited to individual moral virtue. The author remarks on the centrality of racial exclusion in liberalism’s canonical documents and declarations and replaces liberalism’s classical individualist social ontology with one that includes class, gender, and race.
  • Mitroff, I. I., & Storesund, R. (2020). Techlash: The future of the socially responsible tech organization. Springer. https://doi.org/10.1007/978-3-030-43279-9
    • This book considers the “dire existential threat posed by modern technology.” It summarizes the growing backlash against technology companies known as “techlash,” centered on issues such as the monopolistic, predatory power of these companies; their unethical behaviors; their disregard for the negative consequences of their technology; and the shifting sentiment of lawmakers towards regulating these companies.
  • Noble, S. U. (2018).* Algorithms of oppression: How search engines reinforce racism. New York University Press. https://nyupress.org/9781479837243/algorithms-of-oppression/
    • This book discusses how search engines, such as Google, are embedded with racial and sexist bias, challenging the notion that they are neutral algorithms acting outside of influence from their human engineers, and emphasizing the greater social impacts created through their design. Through an analysis of text and media searches, and research on paid advertising, the author argues that the monopoly status of a small group of companies alongside vested private interests in promoting some sites over others has led to biased search algorithms that privilege whiteness and exhibit bias against people of color, particularly women.
  • Pasquale, F. (2016).* The black box society: The secret algorithms behind money and information. Harvard University Press. 
    • This book explores the social and economic impacts of developing information practices, namely the influx of ‘big data’. The author discusses how these practices have benefited society through innovations in health care while also causing significant disruptions to social equity, e.g., the subprime mortgage crisis of 2008. The author attributes these negative impacts to the improper use of algorithms and concludes the book with several recommendations for how they might be corrected.
  • Posada, J. (2020). The future of work is here: Toward a comprehensive approach to artificial intelligence and labour. C4eJournal: Perspectives on Ethics, The Future of Work in the Age of Automation and AI Symposium. [2020 C4eJ 56] [20 eAIj 16].
    • This commentary presents a literature review of the different modes of work that shape AI algorithms. It argues that, while developing ethical principles to guide the use of this technology is essential, such principles do not translate into enforcement mechanisms even when they take workers into account. The commentary argues that existing human rights frameworks already address these types of work better than recent AI ethics principles do.
  • Rea, S., et al. (2021). Cultivating ethical engineers in the age of AI and robotics: An educational cultures perspective. In IEEE International Symposium on Technology and Society. https://par.nsf.gov/biblio/10312683-cultivating-ethical-engineers-age-ai-robotics-educational-cultures-perspective
    • This paper considers the state of ethics education for engineering and computer science students, highlighting problems with the status quo and making recommendations to improve upon it. In place of the two primary approaches to teaching ethics to technologists, (1) standalone ethics courses and (2) ethics-focused modules within primarily technical courses, the authors suggest an inquiry-based approach that has students consider ethics as an integral part of the engineering problems they must solve. They present the case study of a “Robot Ethics” class taught by authors Tom Williams and Qin Zhu that attempted to “enable students to apply moral learning lessons to their everyday, situated experiences.”
  • Schiff, D., et al. (2020). What’s next for AI ethics, policy, and governance? A global overview. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 153–158). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375804
    • This paper presents three topics of importance found in a review of eighty AI ethics documents from private companies, NGOs, and the public sector. The authors observe that these documents are driven by a motivation to gain a competitive advantage, are used for strategic planning and intervention, and signal social responsibility and leadership. In assessing these documents, the authors argue that the ones that most successfully engage with law and governance are specific, enforceable, and intended to be amended and updated.
  • Vaidhyanathan, S. (2018).* Antisocial media: How Facebook disconnects us and undermines democracy. Oxford University Press.
    • This book focuses on the rise and socio-political impacts of the contemporary social media platform Facebook. The author discusses the consequences of Facebook’s dominance, including the ways in which user behavior is tracked and shaped through the platform’s multifaceted operations, addressing how these practices have impacted global democratic processes such as national elections. 
  • Williams, T., & Wen, R. (2021). Human capabilities as guiding lights for the field of AI-HRI: Insights from engineering education. In AAAI-FSS, Artificial Intelligence for Human-Robot Interaction (AI-HRI) Symposia. arXiv:2110.03026
    • This paper surveys other work on the ethics of AI, considering it through the lens of the Engineering for Social Justice (E4SJ) framework and focusing on how AI technologies enhance human capabilities. The paper provides definitions of both E4SJ and human capabilities, and questions whether abstract concepts in AI ethics such as “explainability” or trustworthiness meaningfully further a human capability. The authors favor the trend in AI ethics and engineering education that moves away from considerations of moral philosophy and toward notions of power and justice.
  • Zook, M., et al. (2017). Ten simple rules for responsible big data research. PLOS Computational Biology, 13(3). https://doi.org/10.1371/journal.pcbi.1005399
    • Acknowledging the growing size and availability of big data to researchers, the authors of this paper stress the importance of adopting ethical principles when working with large datasets, particularly as research agendas move beyond typical computational and natural sciences to include those involving human behavior, interaction, and health. The paper outlines ten basic principles that focus on recognizing the human participants and complex systems contained within the datasets, making ethical questioning a part of the standard workflow.

Chapter 9. Accountability in Computer Systems (Joshua A. Kroll)

https://www.doi.org/10.1093/oxfordhb/9780190067397.013.10

  • Adebayo, J., et al. (2018). Sanity checks for saliency maps. Advances in Neural Information Processing Systems, 31.
    • The authors propose an actionable methodology to evaluate the kinds of explanations a given saliency method can and cannot provide, noting that relying solely on visual assessment can be misleading. They demonstrate that some existing saliency methods are largely independent of both the model and the data-generating process: the maps they produce barely change when model parameters or data labels are randomized, and these methods therefore fail the proposed tests. A toy sketch of the model-randomization test follows this entry.
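To make the logic of the test concrete, the sketch below applies the model-parameter randomization idea to a toy linear model. It is a minimal illustration under stated assumptions, not the authors’ code; the gradient_saliency function and all data are invented for the example.

```python
# A minimal sketch of the model-parameter randomization test: if an
# attribution method is sensitive to what the model learned, randomizing
# the model's weights should substantially change the saliency map.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
x = rng.normal(size=20)          # one input example (synthetic)
w_trained = rng.normal(size=20)  # stand-in for trained model weights

def gradient_saliency(weights, inputs):
    # For a linear model f(x) = w . x, the input gradient is w itself;
    # gradient * input is a common attribution baseline.
    return weights * inputs

s_trained = gradient_saliency(w_trained, x)
s_random = gradient_saliency(rng.normal(size=20), x)  # randomized model

# Low rank correlation means the attributions changed with the model
# (the method passes); a method whose maps barely change under weight
# randomization fails the sanity check.
rho, _ = spearmanr(np.abs(s_trained), np.abs(s_random))
print(f"rank correlation after weight randomization: {rho:.2f}")
```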
  • Amodei, D., et al. (2016). Concrete problems in AI safety. arXiv:1606.06565
    • The authors discuss the accidents that can result from unintended behavior of machine learning systems. They propose research directions for preventing reward hacking and other undesirable behavior without the need for expensive supervision. Their stance assumes developers are responsible for taking measures to minimize the risks of AI and ML systems during the programming and training processes. They direct suggestions for safety checks and precautions toward researchers and engineers; neither regulatory oversight exercised by government nor insurance is mentioned in discussing solutions to damages inflicted by AI and ML systems.
  • Andrews, L. (2019). Public administration, public leadership and the construction of public value in the age of the algorithm and ‘big data.’ Public Administration, 97(2), 296-310.
    • The author outlines recent developments in the governance of algorithms and ‘big data,’ examines how ethical frameworks are set, and provides suggestions to further the discourse on AI policy within public administration.
  • Arrieta, A. B., et al. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.
    • The authors review concepts related to the explainability of artificial intelligence methods. They provide a comprehensive analysis of two strands of explainable artificial intelligence: one for machine learning models generally, and one dedicated to deep learning models. The article aims to serve as the motivating background for a series of challenges faced by explainable artificial intelligence, such as the combination of data fusion and explainability.
  • Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671. http://dx.doi.org/10.15779/Z38BG31
    • The authors argue that algorithmic techniques such as data mining are only as effective as the data fed into the system, and that blind reliance on these systems may perpetuate discrimination. Further, these biases are not intentionally incorporated into the machine, making the source of discrimination difficult to present to a court. They examine these concerns in light of American anti-discrimination law.
  • Bertsimas, D., & Orfanoudaki, A. (2021). Pricing algorithmic insurance. arXiv:2106.00839
    • Management Professors Dimitris Bertsimas and Agni Orfanoudaki consider medical malpractice suits in the context of breast cancer detection to formulate a quantitative framework that considers and prices the risk of AI systems. They propose its implementation (as insurance) can overcome algorithmic aversion by covering damages incurred by erroneous algorithm decision-making. The authors claim their “work constitutes the first attempt to quantify the litigation risk resulting from erroneous algorithmic decision-making in the context of binary classification models”.
  • Breaux, T. D., et al. (2006).* Towards regulatory compliance: Extracting rights and obligations to align requirements with regulations. In 14th IEEE International Requirements Engineering Conference (RE’06) (pp. 49-58). IEEE.
    • The authors argue that current regulations that prescribe stakeholder rights and obligations that must be satisfied by software systems are inadequate because they are extremely ambiguous. Fields such as healthcare that are typically highly regulated require a more sophisticated system. They present a model for extracting and prioritizing rights and obligations and apply it to the U.S. Health Insurance Portability and Accountability Act. 
  • Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715622512
    • The author considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, as they frequently rely on computational or machine learning algorithms. The author draws a distinction between three forms of opacity: opacity as intentional organizational secrecy, opacity as technical illiteracy, and opacity that arises from the nature of machine learning algorithms and the scale required to apply them effectively.
  • Desai, D. R., & Kroll, J. A. (2017).* Trust but verify: A guide to algorithms and the law. Harvard Journal of Law & Technology, 31(1), pp. 1-64.
    • The authors examine the potential for algorithms to be designed to produce outcomes that society prohibits while remaining undetectable because of the complexity of their design. They challenge the solution commonly proposed for this problem, algorithmic transparency, arguing that calls for transparency misunderstand what computer science can deliver. Instead, they present an alternative to transparency, providing recommendations on the regulation of public- and private-sector use of software.
  • Du, M., et al. (2019). Techniques for interpretable machine learning. Communications of the ACM, 63(1), 68-77. http://dx.doi.org/10.1145/3359786
    • The authors of this report argue that concerns about the black box nature of algorithmic systems have limited their use in society. They provide key insights into interpretability and argue that interpretable machine learning can overcome this barrier to adoption.
  • Edwards, L., & Veale, M. (2017). Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for. Duke Law & Technology Review, 16, 18-84. https://doi.org/10.31228/osf.io/97up
    • The authors argue that the right to an explanation, as present in the EU General Data Protection Regulation, is unlikely to remedy problems of unfairness in machine learning algorithms. They propose that a solution to algorithmic bias might be found in other parts of the GDPR, such as the right to erasure. 
  • Ehsan, U., et al. (2019). Automated rationale generation: A technique for explainable AI and its effects on human perceptions. In W.-T. Fu & S. Pan (Eds.), Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 263-274). Association for Computing Machinery. https://doi.org/10.1145/3301275.3302316
    • The authors propose generating real-time explanations of the behavior of autonomous agents by employing a computational model that learns to translate an autonomous agent’s internal state and action data representations into natural language. Using the case study of an agent playing a video game, they examine different types of explanations and the corresponding user perceptions.
  • Feigenbaum, J., et al. (2012).* Systematizing “accountability” in computer science. Technical Report YALEU/DCS/TR-1452, Yale University.
    • The authors’ report provides a systematization of approaches to accountability that have been taken in computer science research. The report categorizes these approaches along the axes of time, information, and action and, within each axis, identifies multiple questions of interest. The systematization articulates the definitions that have been used in computer science (sometimes only implicitly) and contributes a perspective on how these different approaches are related.
  • Guidotti, R., et al. (2018). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), 1-42.
    • The authors provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. The survey is intended to help researchers identify methods suited to their problem definition, black box type, and desired type of explanation, and it puts many open research questions in perspective.
  • Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for crashes of autonomous vehicles: An ethical analysis. Science and Engineering Ethics, 21(3), 619–630. https://doi.org/10.1007/s11948-014-9565-5
    • The authors explore the moral questions raised by the deployment of autonomous vehicles (AVs) on public roads, central among them whether tort liability for car manufacturers should be designed to encourage the development and improvement of autonomous vehicles, and whether it would be morally permissible to impose liability on users based on a duty to pay attention to the road and intervene to avoid accidents. They suggest that a tax or mandatory insurance is the easiest and most practical means of holding drivers of AVs collectively accountable without deterring AV production.
  • Hong, S. R., et al. (2020). Human factors in model interpretability: Industry practices, challenges, and needs. Proceedings of the ACM on Human-Computer Interaction, 4, 1-26. https://doi.org/10.1145/3392878
    • The authors present their findings from 22 semi-structured interviews with machine learning practitioners focusing on how they conceive of, and design for, interpretability in the models they develop and deploy. Their findings suggest that model interpretability frequently involves cooperation and mental model comparison between people in different roles, as well as building trust between people and models and between different people within an organization.
  • Kaur, H., et al. (2020). Interpreting interpretability: Understanding data scientists’ use of interpretability tools for machine learning. In R. Bernhaupt, F. Mueller, & D. Verweij (Eds.), Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-14). Association for Computing Machinery. https://doi.org/10.1145/3313831.3376219
    • The authors use a contextual inquiry and survey to study how data scientists use interpretability tools to uncover issues that arise when building and evaluating machine learning models in practice. Their results suggest that data scientists over-trust and misuse interpretability tools. Few study participants were able to accurately describe the output of interpretability tools.
  • Kroll, J. A. (2021). Outlining traceability: A principle for operationalizing accountability in computing systems. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 758-771). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445937
    • The author aims to reframe the discourse on accountability and transparency by proposing a new principle: traceability. Traceability entails establishing not only how a system works but how it was created and for what purpose. Their paper shows how traceability explains why a system has particular dynamics or behaviors and examines how the principle has been articulated in existing AI principles and policy statements.
  • Kroll, J. A., et al. (2016).* Accountable algorithms. University of Pennsylvania Law Review, 165(3), 633-706.
    • The authors challenge the dominant position in legal literature that transparency will solve the problems of incorrect, unjustified, or unfair results of algorithmic decision-making. They argue that technology is creating new opportunities, subtler and more flexible than total transparency, to design algorithms so that they better align with legal and policy objectives.
  • Kroll, J. A. (2018).* The fallacy of inscrutability. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0084
    • The author argues that, contrary to the criticism that mysterious, unaccountable black-box software systems threaten to make the logic of critical decisions inscrutable, algorithms are fundamentally understandable pieces of technology. They investigate the contours of inscrutability and opacity, the way they arise from power dynamics surrounding software systems, and the value of proposed remedies from disparate disciplines, especially computer ethics and privacy by design. The author concludes that policy should not accede to the idea that some systems are of necessity inscrutable. 
  • Kumar, R. S. S., et al. (2019). Failure modes in machine learning systems. arXiv:1911.11034
    • The authors introduce a new living taxonomy for classifying accidents (“unintentional failures,” where an ML system produces an inherently unsafe outcome) and attacks (“intentional failures,” where the failure is caused by an active adversary attempting to subvert the system to attain her goals) on machine learning systems. The authors also discuss how this framework has been used by 23 external partners, standards organizations, and governments, with an emphasis on how machine learning failure modes are meaningfully different from traditional software failures.
  • Lakkaraju, H., & Bastani, O. (2020). “How do I fool you?” Manipulating user trust via misleading black box explanations. In A. Markham, J. Powles, T. Walsh, & A. Washington (Eds.), Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 79-85). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375833
    • The authors explore how explanations of black box machine learning models can mislead users. To this end, they propose a theoretical framework for understanding when misleading explanations can exist, demonstrate an approach for generating potentially misleading explanations, and conduct a user study with experts from law and criminal justice to understand how misleading explanations impact user trust. A toy sketch of how a faithful-seeming surrogate explanation can hide what a model relies on follows this entry.
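The sketch below illustrates the general phenomenon in miniature; it is a minimal example under assumed conditions (a synthetic sensitive attribute and a correlated proxy feature invented for the illustration), not the authors’ method. It shows how a surrogate can reach high fidelity to a black box while never referencing the feature the black box actually uses.

```python
# A toy sketch: a surrogate "explanation" fit to an innocuous proxy can
# reproduce a black box's decisions with high fidelity while hiding the
# sensitive feature the black box actually uses. All data are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
sensitive = rng.integers(0, 2, n)                    # protected attribute
flip = (rng.uniform(size=n) < 0.05).astype(int)
proxy = sensitive ^ flip                              # ~95%-correlated proxy

black_box_decision = sensitive                        # model decides on `sensitive`

# The "explanation": a one-split tree over the proxy feature only.
surrogate = DecisionTreeClassifier(max_depth=1)
surrogate.fit(proxy.reshape(-1, 1), black_box_decision)

fidelity = surrogate.score(proxy.reshape(-1, 1), black_box_decision)
print(f"surrogate fidelity to the black box: {fidelity:.1%}")  # ~95%
```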
  • Miller, T. (2019).* Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38.
    • The author argues that the field of explainable artificial intelligence can build on existing research in the social sciences, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology that study explanation. The author draws out some important findings and discusses ways that these can be infused into work on explainable artificial intelligence.
  • Mittelstadt, B., et al. (2019). Explaining explanations in AI. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 279-288).
    • The authors analyze the increased focus on building simplified models that help to explain how artificial intelligence machines make decisions. They then compare how models and their explanations are distinguished in the fields of sociology and philosophy. Finally, they argue that the creation of models may not be necessary, and instead, a broader approach could be utilized.
  • Mittelstadt, B. D., & Floridi, L. (2016). The ethics of big data: Current and foreseeable issues in biomedical contexts. Science and Engineering Ethics, 22(2), 303–341.
    • The authors systematically analyze literature concerning the ethical implications of Big Data, with particular attention given to biomedical data due to its inherent sensitivity and regulation. They identify eleven areas of concern, including informed consent; privacy; ownership; epistemology and objectivity; the divides created by a lack of access or resources to analyze large datasets; the dangers of ignoring group-level ethical harms; the importance of epistemology in assessing ethics; the changing nature of fiduciary relationships that become increasingly data saturated; the need to distinguish between academic and commercial practices in terms of potential harm to data subjects; future issues with ownership of intellectual property; and the difficulty of providing meaningful access rights to individual data subjects who lack resources.
  • Molnar, C. (2019). Interpretable machine learning. Leanpub. 
    • The author provides a guide for making black box models explainable to the average person. They provide an overview of the concept of interpretability and outline simple interpretable models. Then, the author discusses methods for interpreting black box models.  
  • Nissenbaum, H. (1996).* Accountability in a computerized society. Science and Engineering Ethics, 2(1), 25-42.
    • The author warns of eroding accountability in computerized societies and argues that assumptions about computing and features of situations in which computers are produced create barriers to accountability. Drawing on philosophical analyses of moral blame and responsibility, the author identifies four barriers in particular: (a) the problem of many hands, (b) the problem of bugs, (c) blaming the computer, and (d) software ownership without liability. They conclude with ideas on how to reverse this trend.
  • Pasquale, F. (2019). The second wave of algorithmic accountability. Law and Political Economy Project. https://lpeproject.org/blog/the-second-wave-of-algorithmic-accountability/
    • The author describes two distinct waves in algorithmic accountability discourse. The first wave involves accountability research and activism that target existing systems, such as demonstrating that facial recognition tools contain racial biases. The second wave aims to address more structural concerns and query whether certain systems, especially those that have harmful social and economic consequences, should be used at all.
  • Pearson, S. (2011). Toward accountability in the cloud. IEEE Internet Computing, 15(4), 64-69.
    • The author suggests that accountability will become a central concept in the cloud and in new mechanisms meant to increase trust in cloud computing. The author then argues that a contextual approach must be applied and a one-size-fits-all system avoided.
  • Reisman, D., et al. (2018).* Algorithmic impact assessments: A practical framework for public agency accountability. AI Now Institute.
    • This report proposes an Algorithmic Impact Assessment (AIA) framework designed to support affected communities and stakeholders as they seek to assess the claims made about these systems and determine where and if their use is acceptable. The authors outline the five key elements of the framework and argue that implementing this framework will help public agencies achieve four key policy goals. 
  • Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206-215. https://doi.org/10.1038/s42256-019-0048-x
    • The author contends that the current trend of attempting to explain the behavior and decisions of black box (that is, opaque) machine learning models is deeply flawed and potentially harmful. The author supports this contention by drawing on examples from healthcare, criminal justice, and computer vision, and proceeds to offer an alternative approach: building models that are not opaque but inherently interpretable. A toy sketch contrasting the two approaches follows this entry.
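As a minimal illustration of the alternative the paper advocates, the sketch below fits a model whose fitted parameters can be read directly as the explanation, rather than explaining an opaque model post hoc. The data and feature names are synthetic placeholders, not drawn from the paper.

```python
# A minimal sketch of an inherently interpretable model: the coefficients
# *are* the explanation, so no post-hoc surrogate is needed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["prior_count", "age", "employment_years"]  # hypothetical names
X = rng.normal(size=(500, 3))
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient states exactly how the model reasons: a one-unit change
# in the feature shifts the predicted log-odds by that amount.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f} log-odds per unit")
```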
  • Shavell, S. (2019). On the redesign of accident liability for the world of autonomous vehicles (Working Paper No. 26220). National Bureau of Economic Research. https://doi.org/10.3386/w26220
    • The author proposes a new form of strict liability that requires damages to be paid to the state with regard to insuring autonomous vehicles against accidents. They compare this model to the popular interest in strict manufacturer liability for AVs, which they argue would likely leave accident risks unchanged from levels seen in the absence of liability.
  • Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data & Society, 4(2). https://doi.org/10.1177/2053951717736335
    • The author argues that just as a conception of justice is needed to ground the rule of law, a conception of data justice is needed to govern the datafied world. Data justice would require fairness in the way people are made visible, represented, and treated as a result of digital data production. The author proposes three pillars of international data justice: (in)visibility, (dis)engagement with technology, and antidiscrimination.
  • Wachter, S., & Mittelstadt, B. (2019).* A right to reasonable inferences: re-thinking data protection law in the age of big data and AI. Columbia Business Law Review, 2019(2).
    • The authors argue that Big Data analytics and artificial intelligence tend to make non-intuitive and unverifiable inferences about individual people. Big Data and AI rely on data of questionable value, which creates new opportunities for discrimination, and the legal status of these inferences remains contested. They propose a new legal right to address this problem: a data protection right to reasonable inferences.
  • Weitzner, D. J., et al. (2007).* Information accountability. Technical Report MIT-CSAIL-TR-2007-034, MIT.
    • The authors argue that debates over online privacy, copyright, and information policy questions have been overly dominated by the access-restriction perspective. As an alternative to the “hide it or lose it” approach that currently characterizes policy compliance on the Web, they propose designing systems oriented toward information accountability and appropriate use rather than information security and access restriction.
  • Zhou, Y., & Danks, D. (2020). Different “intelligibility” for different folks. In A. Markham, J. Powles, T. Walsh, & A. Washington (Eds.), Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 194-199). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375810
    • The authors argue that model intelligibility (often called interpretability or explainability) is neither a one-size-fits-all nor an intrinsic property of a system; instead, it depends on individuals’ characteristics, preferences, and needs. They propose a taxonomy of different types of intelligibility, each of which requires the provision of different types of information to users.

Chapter 10. Transparency (Nicholas Diakopoulos)

https://www.doi.org/10.1093/oxfordhb/9780190067397.013.11

  • Alloa, E. (2018). Transparency: A magic concept of morality. In E. Alloa & D. Thomä (Eds.), Transparency, society and subjectivity: Critical perspectives (pp. 31–32). Palgrave Macmillan.
    • This book critically engages with the idea of transparency, whose ubiquitous demand stands in stark contrast to its lack of conceptual clarity. The book carefully examines this notion in its own right, traces its emergence in Early Modernity, and analyzes its omnipresence in contemporary rhetoric.
  • Ananny, M. (2016).* Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Science, Technology, & Human Values, 41(1), 93-117.
    • This paper develops a definition of networked information algorithms (NIAs) as assemblages of institutionally situated code, practices, and norms with the power to create, sustain, and signify relationships among people and data through minimally observable, semi-autonomous action. Starting from Merrill’s prompt to see ethics as the study of “what we ought to do,” the paper examines ethical dimensions of contemporary NIAs. Specifically, the paper develops an empirically grounded, pragmatic ethics of algorithms, through tracing an algorithmic assemblage’s power to convene constituents, suggest actions based on perceived similarity and probability, and govern the timing and timeframes of ethical action.
  • Ananny, M., & Crawford, K. (2018).* Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973-989.
    • This article critically interrogates the ideal of transparency, tracing some of its roots in scientific and sociotechnical epistemological cultures and presents 10 limitations to its application. The article argues that transparency is inadequate for understanding and governing algorithmic systems and sketches an alternative typology of algorithmic accountability grounded in constructive engagements with the limitations of transparency ideals.
  • Blacklaws, C. (2018). Algorithms: Transparency and accountability. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128). https://doi.org/10.1098/rsta.2017.0351
    • This opinion piece explores the issues of accountability and transparency in relation to the growing use of machine learning algorithms. Citing the recent work of the Royal Society and the British Academy, it looks at the legal protections for individuals afforded by the EU General Data Protection Regulation and asks whether the legal system will be able to adapt to rapid technological change. It concludes by calling for continuing debate that is itself accountable, transparent, and public.
  • Brkan, M. (2019). Do algorithms rule the world? Algorithmic decision-making and data protection in the framework of the GDPR and beyond. International Journal of Law and Information Technology, 27(2), 91-121.
    • The purpose of this article is to analyze the rules of the General Data Protection Regulation (GDPR) and the Directive on Data Protection in Criminal Matters on automated decision-making and to explore how to ensure the transparency of such decisions, in particular those taken with the help of algorithms. While the Directive on Data Protection in Criminal Matters does not seem to give the data subject the possibility to familiarize herself with the reasons for such a decision, the GDPR obliges the controller to provide the data subject with ‘meaningful information about the logic involved’ (Articles 13(2)(f), 14(2)(g) and 15(1)(h)), thus raising the much-debated question of whether the data subject should be granted a ‘right to explanation’ of the automated decision. This article goes beyond the semantic question of whether this right should be designated as the ‘right to explanation’ and argues that the GDPR obliges the controller to inform the data subject of the reasons why an automated decision was taken.
  • Brundage, M., et al. (2020). Toward trustworthy AI development: Mechanisms for supporting verifiable claims. arXiv:2004.07213.
    • This paper makes concrete suggestions on how to improve the verifiability of claims made about AI systems and their development processes, in a way that enables the developers of such systems to be held accountable and lets outside organizations effectively scrutinize the aforementioned claims. Some existing mechanisms for this purpose are analyzed and recommendations are made to improve them. 
  • Cath, C. (2018).* Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0080
    • This paper is the introduction to the special issue entitled “Governing artificial intelligence: ethical, legal and technical opportunities and challenges.” The issue addresses how AI can be designed and governed to be accountable, fair and transparent. Eight authors present in-depth analyses of the ethical, legal-regulatory, and technical challenges posed by developing governance regimes for AI systems.
  • Citron, D. K., & Pasquale, F. (2014).* The scored society: Due process for automated predictions. Washington Law Review, 89(1), 1-35.
    • This paper argues that procedural regularity is essential for those stigmatized by artificially intelligent scoring systems and that the American due process tradition should inform basic safeguards in this regard. It argues that regulators should be able to test scoring systems to ensure their fairness and accuracy and that individuals should be given meaningful opportunities to challenge adverse decisions based on scoring systems. 
  • Coglianese, C., & Lehr, D. (2019). Transparency and algorithmic governance. Administrative Law Review, 71(1), 1–56.
    • This paper argues that the black-box nature of some machine learning algorithms does not pose a legal hardship for their use by government authorities. Legal standards of transparency are weaker than users might expect. Additionally, there is an important distinction between predictions that are determinative of final actions and those that are not. Most applications of machine learning by government authorities are not determinative: they help inform decisions but do not dictate the final outcome, and this supporting role minimizes the risk of harm.
  • D’Amour, A., et al. (2020). Fairness is not static: Deeper understanding of long term fairness via simulation studies. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 525–534). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372878
    • This paper highlights the shortfalls of typical approaches to ensuring algorithmic fairness across different populations. The main result provides evidence that policies which may initially achieve fairness in a short-term static setting fail to do so in the long term. The authors design a new software package to simulate dynamic interactions between a machine learning model’s predictions and the populations its decisions affect. The overall message is that the real-world implementation of machine learning models differs vastly from the typical static supervised learning setting in which their performance is often evaluated for the sake of convenience, and this discrepancy must be addressed to avoid unintended consequences. A toy simulation of this feedback dynamic follows this entry.
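The sketch below is a minimal stand-in for the kind of simulation the paper describes, not the authors’ package: a one-time “fair” threshold interacts with assumed feedback and drift parameters (all invented for the example), and approval rates diverge over repeated rounds.

```python
# A toy simulation of long-term fairness dynamics: a decision rule that is
# fair in one static round can shift group score distributions over time.
# All parameters are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
means = {"A": 0.0, "B": 0.0}   # groups start identically distributed
threshold = 0.0                # a one-time "fair" approval threshold

for step in range(10):
    for g, drift in (("A", 0.05), ("B", -0.02)):
        scores = rng.normal(means[g], 1.0, 1000)
        approved = scores > threshold
        # Feedback: approval rates above the 50% baseline raise the group's
        # future mean; an exogenous drift term stands in for unequal
        # background resources between the groups.
        means[g] += 0.1 * (approved.mean() - 0.5) + drift

rate_a = (rng.normal(means["A"], 1.0, 1000) > threshold).mean()
rate_b = (rng.normal(means["B"], 1.0, 1000) > threshold).mean()
print(f"approval rates after 10 rounds: A={rate_a:.2f}, B={rate_b:.2f}")
```

A static audit at round one would report equal approval rates; only simulating the feedback loop reveals the divergence, which is the paper's central point.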
  • De Fine Licht, J. (2014). Magic wand or Pandora’s Box? How transparency in decision making affects public perceptions of legitimacy. University of Gothenburg.
    • This dissertation identifies four main mechanisms that might explain positive effects of transparency on public acceptance and trust: that transparency enhances policy decisions, which indirectly makes people more trusting; that transparency is generally perceived to be fairer than secrecy; that transparency increases public understanding of decisions and decision makers; and that transparency increases the public feelings of accountability. The dissertation builds on five scenario-based experiments, with each study manipulating different degrees and versions of transparency for individual policy level decisions. The dissertation concludes that transparency might have the power to increase public perceptions of legitimacy, but also that the effect is more complex than often presumed. 
  • De Fine Licht, K., & de Fine Licht, J. (2020). Artificial intelligence, transparency, and public decision-making. AI & Society, 35, 917-926. https://doi.org/10.1007/s00146-020-00960-w
    • This paper discusses how transparency in and about AI decision-making can affect the public’s perception of the legitimacy of decisions and decision-makers, and produces a framework for analyzing these questions. The paper argues that a limited form of transparency that focuses on providing justifications for decisions has the potential to provide sufficient ground for perceived legitimacy without producing the harms that full transparency would bring.
  • De Laat, P. B. (2018). Algorithmic decision-making based on machine learning from Big Data: Can transparency restore accountability? Philosophy & Technology, 31(4), 525-541. https://doi.org/10.1007/s13347-017-0293-z
    • The author of this paper takes a comprehensive approach to understanding the limitations of transparency, including reasons for its potential impracticality. Full transparency implies exposing sensitive data and creating a potential route for users to exploit the system; for example, loan applicants modifying their features to achieve more favorable credit ratings. The author argues that there is a trade-off between accuracy and interpretability, and that reasonable decreases in accuracy are justified when achieving interpretability. The paper concludes that only oversight bodies should have access to full algorithmic transparency in order to avoid privacy concerns and to protect competition in the private sector.
  • Diakopoulos, N., & Koliska, M. (2017).* Algorithmic transparency in the news media. Digital Journalism, 5(7), 809-828.
    • This research presents a focus group study that engaged 50 participants across the news media and academia to discuss case studies of algorithms in news production and elucidate factors that are amenable to disclosure. The results indicate numerous opportunities to disclose information about an algorithmic system across layers such as the data, model, inference, and interface. The authors argue that the findings underscore the deeply entwined roles of human actors in such systems, as well as challenges to the adoption of algorithmic transparency, including the dearth of incentives for organizations and the concern about overwhelming end users with a surfeit of transparency information.
  • Diakopoulos, N. (2015).* Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398-415.
    • This paper studies the notion of algorithmic accountability reporting as a mechanism for elucidating and articulating the power structures, biases, and influences that computational artifacts exercise in society. The paper proffers a framework for algorithmic power based on autonomous decision-making and motivates specific questions about algorithmic influence. The article analyzes five cases of algorithmic accountability reporting involving the use of reverse engineering methods in journalism to provide insight into the method and its application in a journalism context. 
  • Eshete, B. (2021). Making machine learning trustworthy. Science, 373(6556), 743-744.
    • This paper reviews some of the major hurdles in building trustworthy machine learning models that are ready for deployment. The author describes adversarial attacks (data poisoning and inputs designed to fool deployed models) and privacy-motivated attacks (attacks aimed at revealing sensitive information present in training data), then outlines the challenges that make these issues particularly difficult to deal with. The author also emphasizes the need to develop clearer norms for robustness, fairness, and transparency that can act as a goalpost for the research community as well as policymakers. 
  • Eslami, M., et al. (2019). User attitudes towards algorithmic opacity and transparency in online reviewing platforms. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–14). Association for Computing Machinery.
    • This paper focuses on a specific case study in which the algorithmic opacity of Yelp’s review-filtering mechanism was revealed to users writing reviews. Reactions split into two camps: challengers and defenders. Users questioning the existence and operation of the algorithm outnumbered those who defended it, and defense of the algorithm was explained by the level of user engagement and the impact the algorithm has on the user’s life. As users were made aware of the algorithm’s existence and its inner workings, some wanted to leave the platform altogether due to perceived deception.
  • Fenster, M. (2015). Transparency in search of a theory. European Journal of Social Theory, 18(2), 150-167.
    • This article argues that transparency is best understood as a theory of communication that oversimplifies, and is thus blind to, the complexities of the contemporary state, government information, and the public. Taking these complexities fully into account, the article argues, should lead us to question the state’s ability to control information, which in turn should make us question not only the improbability of the state making itself visible, but also the improbability of the state keeping itself secret.
  • Flyverbom, M. (2019). The digital prism. Cambridge University Press.
    • This book shows how the management of our digital footprints, visibilities, and attention is a central force in the digital transformation of societies and politics. Seen through the prism of digital technologies and data, the lives of people and the workings of organizations take new shapes in our understanding. To make sense of these, the book argues, we must push beyond common ways of thinking about transparency and surveillance and examine how managing visibility is a central but overlooked phenomenon that influences how people live, how organizations work, and how societies and politics operate. 
  • Fox, J. (2007).* The uncertain relationship between transparency and accountability. Development in Practice, 17(4-5), 663-671.
    • This article questions the widely held assumption that transparency generates accountability. It argues that transparency mobilizes the power of shame, yet the shameless may not be vulnerable to public exposure; truth often fails to lead to justice. After exploring different definitions and dimensions of the two ideas, the article focuses instead on what kinds of transparency lead to what kinds of accountability, and under what conditions. It concludes by proposing that each concept can be unpacked into two distinct variants: transparency can be either ‘clear’ or ‘opaque’, while accountability can be either ‘soft’ or ‘hard’.
  • Fung, A., et al. (2007).* Full disclosure: The perils and promise of transparency. Cambridge University Press.
    • Based on a comparative analysis of eighteen major targeted transparency policies, the authors suggest that transparency policies often produce information that is incomplete, incomprehensible, or irrelevant to the consumers, investors, workers, and community residents who could benefit from them. The authors show that transparency sometimes fails because those who are threatened by it form political coalitions to limit or distort information. The authors argue that to be successful, transparency policies must place the needs of ordinary citizens at center stage and produce information that informs their everyday choices.
  • Garfinkel, S., et al. (2017). Toward algorithmic transparency and accountability. Communications of the ACM, 60(9), 5. https://doi.org/10.1145/3125780
    • This letter lays out seven principles for ensuring fairness in an evolving ecosystem where decisions are increasingly outsourced to algorithms. It aims to enable both the self-regulation of organizations and outside regulation by policymakers by setting a standard for deployed automated decision systems. It also serves as a guideline for engineers designing new systems to ensure they are explainable and auditable.
  • Hansen, H. (2015). Numerical operations, transparency illusions and the datafication of governance. European Journal of Social Theory, 18(2), 203–220.
    • This article analyzes the forms of transparency produced by the use of numbers in social life. It examines what it is about numbers that often makes their ‘truth claims’ so powerful, investigates the role that numerical operations play in the production of retrospective, real-time, and anticipatory forms of transparency in contemporary politics and economic transactions, and discusses some of the implications resulting from the increasingly abstract and machine-driven use of numbers. It argues that the forms of transparency generated by machine-driven numerical operations open up individual and collective practices in ways that are intimately linked to the precautionary and pre-emptive aspirations and interventions characteristic of contemporary governance.
  • Hood, C. (2010). Accountability and transparency: Siamese twins, matching parts, awkward couple? West European Politics, 33, 989–1009.
    • This paper contrasts three possible ways of thinking about the relationship between accountability and transparency as principles of governance: as ‘Siamese twins’ that are indistinguishable; as ‘matching parts’ that are separable but nevertheless complement one another smoothly to produce good governance; and as an ‘awkward couple’, involving elements that are potentially or actually in tension with one another. It then identifies three possible ways in which we could establish the accuracy or plausibility of each of those three characterizations. 
  • Jesus, S., et al. (2021). How can I choose an explainer? An application-grounded evaluation of post-hoc explanations. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 805-815).
    • This paper presents the XAI Test, an evaluation methodology for assessing the impact of providing different levels of model explanation (data only; data plus model score; and data plus model score plus explanations) to end users in decision-making tasks. The results, obtained on a real-world fraud detection task, reveal that explanations provided by popular XAI methods can have a worse impact than might be presumed: while model explanations improved downstream accuracy over providing just data and model scores, end users performed best when provided only data and no explanations. 
  • Kizilcec, R. (2016). How much information? Effects of transparency on trust in an algorithmic interface. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 2390–2395). Association for Computing Machinery. https://doi.org/10.1145/2858036.2858402
    • This work conducts a study of the relationship between user trust in an interface and three different levels of transparency. For the specific task studied, peer assessment, trust in the system was reduced when a user’s received score was lower than their expectations. As the review process and score justifications were made more transparent, trust was recovered. There were, however, diminishing returns: too much justification resulted in lower trust. Lastly, user trust was unaffected when expectations were met, suggesting a confirmation bias and a need for transparency only when there is a discrepancy between expectations and reality.
  • Koene, A., et al. (2019). A governance framework for algorithmic accountability and transparency. European Parliamentary Research Service. https://doi.org/10.2861/59990
    • This report recognizes the role that algorithms play in enabling high-throughput, fast decisions, as well as their ability to process quantities of data that are beyond human comprehension. It also raises awareness that, in high-stakes settings such as the deployment of autonomous vehicles, auditing and accountability are crucial to limiting significant health and safety risks. To address the concern that machine learning systems are designed without the consequences of their predictions in mind, the authors review current literature and propose four policy options designed to comprehensively address the need for transparency.
  • Kroll, J. A. (2021). Outlining traceability: A principle for operationalizing accountability in computing systems. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 758-771).
    • This article discusses how the principle of traceability can help achieve accountability and transparency goals in computer systems. Traceability is about establishing how and why a system was created, in a way that accounts for the behavior of computer systems. This article explores how the principle of traceability has been discussed in AI principles and other policy documents and from these distills a set of software system requirements that can help serve accountability and transparency goals. 
  • Matthews, J. (2020). Patterns and anti-patterns, principles and pitfalls: Accountability and transparency in AI. AI Magazine, 41(1), 82-89.
    • This article starts with a review of some of the principles outlined by the Association for Computing Machinery’s US and European Public Policy Council’s 2017 statement on principles for algorithmic transparency and accountability. It then proceeds to list a set of common antipatterns that plague contemporary deployed machine learning models. The author also makes suggestions aimed at mitigating these harmful trends. The suggestions include emphasizing the distinction between (past) training data and (present) deployment conditions, creating mechanisms and incentives for identifying flaws in deployed models, and encouraging research in transparent machine learning. 
  • Meijer, A., et al. (2014). Transparency. In M. Bovens, R. E. Goodin, & T. Schillemans (Eds.), The Oxford Handbook of Public Accountability. Oxford University Press.
    • This chapter opens up the “black box” of the relation between transparency and accountability by examining the expanding body of literature on government transparency. Three theoretical relations between transparency and accountability are identified: transparency facilitates horizontal accountability; transparency strengthens vertical accountability; and transparency reduces the need for accountability. Reviewing studies into the relation between transparency and accountability, this chapter argues that under certain conditions and in certain situations, transparency may contribute to accountability: transparency facilitates accountability when it actually presents a significant increase in the available information, when there are actors capable of processing the information, and when exposure has a direct or indirect impact on the government or public agency.
  • Mitchell, M., et al. (2019). Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 220-229).
    • This paper proposes a framework – named Model Cards – to encourage transparent model reporting and prevent machine learning models from being used in contexts for which they are unsuitable. Model Cards are concise documents providing insight into the training and evaluation procedures, training data, intended use cases, and any other information relevant to a model being considered for deployment in a real-life scenario. The paper proceeds to give two instantiations of Model Cards: one for facial recognition and one for toxic comment detection. 
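    A minimal sketch of the idea, assuming a simple in-house schema; the `ModelCard` dataclass and its example fields below are hypothetical stand-ins for the paper’s reporting categories, not an official library.

    ```python
    # Hypothetical structured model report in the spirit of Model Cards.
    from dataclasses import dataclass, field

    @dataclass
    class ModelCard:
        name: str
        intended_use: str
        training_data: str
        evaluation_data: str
        metrics: dict                 # e.g., disaggregated error rates
        out_of_scope_uses: list = field(default_factory=list)

    card = ModelCard(
        name="toxicity-classifier-v2",
        intended_use="Flag comments for human moderator review.",
        training_data="Public forum comments, 2015-2018, English only.",
        evaluation_data="Held-out comments, stratified by dialect.",
        metrics={"false positive rate (overall)": 0.08,
                 "false positive rate (minority dialect)": 0.19},
        out_of_scope_uses=["Fully automated comment removal"],
    )
    print(card)
    ```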
  • Mittelstadt, B. D., et al. (2016).* The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679
    • This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. Finally, it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
  • Raji, I. D., & Buolamwini, J. (2019). Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 429-435).
    • The paper aims to understand the real-world impact of algorithmic audits on increasing algorithmic fairness and transparency in commercial systems. It does so by investigating the commercial impact of Gender Shades, the first algorithmic audit of gender and skin-type performance disparities in commercial facial analysis models. The study found that all three audited companies released new API versions within seven months of the original audit, with significant reductions in accuracy disparities between male and female subjects and between darker- and lighter-skinned subgroups.
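    The core computation behind such an audit can be sketched as follows; the function and toy data are illustrative assumptions, not the study’s code.

    ```python
    # Per-subgroup accuracy and the largest disparity between subgroups.
    import numpy as np

    def subgroup_accuracies(y_true, y_pred, groups):
        """Return each subgroup's accuracy and the max accuracy gap."""
        accs = {g: float((y_true[groups == g] == y_pred[groups == g]).mean())
                for g in np.unique(groups)}
        return accs, max(accs.values()) - min(accs.values())

    # Toy labels, predictions, and intersectional subgroup tags.
    y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
    y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
    groups = np.array(["darker-F", "darker-F", "lighter-M", "lighter-M",
                       "darker-M", "darker-F", "lighter-F", "lighter-M"])
    print(subgroup_accuracies(y_true, y_pred, groups))
    ```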
  • Raji, I. D., et al. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 33-44).
    • This paper introduces a framework for algorithmic auditing – Scoping, Mapping, Artifact Collection, Testing and Reflection (SMACTR) – aimed at helping practitioners identify harmful repercussions of the AI systems they are developing prior to and during deployment, and to assess the fitness of decisions made throughout the development life cycle. The stages of the audit (as represented in the acronym SMACTR) yield a set of documents that together form an overall audit report, which can be used to close the accountability gap in the development and deployment of large-scale AI systems. 
  • Springer, A., & Whittaker, S. (2020). Progressive disclosure: When, why, and how do users want algorithmic transparency information? ACM Transactions on Interactive Intelligent Systems, 10(4), 1–32. https://doi.org/10.1145/3374218
    • This article investigates the effects of making algorithmic decisions more transparent to determine how users react to them. The authors demonstrate that complete transparency is not always beneficial, particularly when users are made aware of errors in a way that undermines their positive perception of the system’s accuracy. Additionally, the experiments demonstrate that user perceptions of a system that provides detailed feedback and one that does not can be quite different, even if the two systems are functionally the same. 
  • Turilli, M., & Floridi, L. (2009).* The ethics of information transparency. Ethics and Information Technology, 11(2), 105-112.
    • The paper argues that transparency is not itself an ethical principle, but a pro-ethical condition for enabling or impairing other ethical practices or principles, offering a new definition of transparency in order to take into account the dynamics of information production and the differences between data and information. The paper further defines the concepts of “heterogeneous organization” and “autonomous computational artefact” to clarify the ethical implications of the technology used in implementing information transparency. It argues that explicit ethical designs, which describe how ethical principles are embedded into the practice of software design, would represent valuable information that could be disclosed by organisations to support their ethical standing.
  • Watson, H., & Nations, C. (2019). Addressing the growing need for algorithmic transparency. Communications of the Association for Information Systems, 45, 488–510. https://doi.org/10.17705/1CAIS.04526
    • This paper examines the privacy/convenience trade-off that has arisen from the collection of personal data used to train algorithms that make personalized recommendations. The authors differentiate between three types of recommendation based on their level of user-perceived “creepiness,” with recommendations such as movie suggestions deemed helpful and recommendations that socially influence a user’s worldview deemed ethically wrong. The paper also references other important works showing that, although algorithms can streamline decision-making, they can also increase inequality and even threaten democracy.
  • Webb, H., et al. (2019). ‘It would be pretty immoral to choose a random algorithm’: Opening up algorithmic interpretability and transparency. Journal of Information, Communication & Ethics in Society, 17(2), 210–228. https://doi.org/10.1108/JICES-11-2018-0092
    • This study revolves around the task of matching students to preferred courses based on utility values they provide for each course. Algorithms were designed around different utility-maximization criteria, and students were asked to choose a least and a most preferred algorithm and to explain their choices. Two variations of the experiment were run: one where the explanations given were just numerical summaries of the utilities attained by each algorithm, and another where additional written explanations of each algorithm’s optimization criterion were also provided. There was no consensus among participants regarding the best and worst algorithms, and between the two versions of the experiment participants sometimes changed their answers even though the underlying algorithms were unchanged.
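    A toy sketch of two such optimization criteria, under assumed utilities; the 2x2 matrix is hypothetical and chosen so that the two criteria disagree.

    ```python
    # Two matching criteria over hypothetical student-course utilities.
    from itertools import permutations

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Rows are students, columns are courses.
    utility = np.array([[10, 7],
                        [2, 1]])

    # Criterion 1: maximize total utility (the classic assignment problem).
    rows, cols = linear_sum_assignment(utility, maximize=True)
    print("utilitarian:", list(zip(rows.tolist(), cols.tolist())),
          "total =", int(utility[rows, cols].sum()))

    # Criterion 2: maximize the worst-off student's utility
    # (brute force is fine at toy scale).
    best = max(permutations(range(2)),
               key=lambda p: min(utility[i, c] for i, c in enumerate(p)))
    print("egalitarian:", list(enumerate(best)),
          "min =", min(utility[i, c] for i, c in enumerate(best)))
    ```

    On these utilities the two criteria pick different matchings, which is one reason participants could reasonably disagree about which algorithm is “best.”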
  • Westbrook, L., et al. (2019). Real-time data-driven technologies: Transparency and fairness of automated decision-making processes governed by intricate algorithms. Contemporary Readings in Law and Social Justice, 11(1), 45-50.
    • This paper draws on recent research on real-time data-driven technologies to estimate the percentage of Facebook users who say users have no, a little, or a lot of control over the content that appears in their newsfeed, and the percentage of social media users who say it is acceptable for social media sites to use data about them and their online activities to recommend events in their area, recommend someone they might want to know, show them ads for products and services, or show them messages from political campaigns (broken down by age group). The paper uses structural equation modeling to analyze the collected data.
  • Zerilli, J., et al. (2019). Transparency in algorithmic and human decision-making: Is there a double standard? Philosophy & Technology, 32, 661–683. https://doi.org/10.1007/s13347-018-0330-6
    • This paper reviews evidence demonstrating that much human decision-making is fraught with transparency problems, shows in what respects AI fares little worse or better, and argues that at least some regulatory proposals for explainable AI could end up setting the bar higher than is necessary or indeed helpful. The article asserts that demands of practical reason require the justification of action to be pitched at the level of practical reason, and decision tools that support or supplant practical reasoning should not be expected to aim higher than this. This paper casts this desideratum in terms of Daniel Dennett’s theory of the “intentional stance” and argues that since the justification of action for human purposes takes the form of intentional stance explanation, the justification of algorithmic decisions should take the same form.
  • Zhou, Y., & Kantarcioglu, M. (2020). On transparency of machine learning models: A position paper. In AI for Social Good Workshop. Harvard University Center for Research on Computation and Society.
    • This paper argues that machine learning model transparency should be pursued in two somewhat orthogonal directions: one targeted toward producing human-readable justifications of the decisions made by models, and the other in terms of population-level statistics that quantitatively measure how private, reliable, and fair the decisions made by the models are. The authors then elaborate on how some of the aforementioned concepts have been operationalized by the machine learning community and outline some recent lines of attack to improve the transparency of models.

Chapter 11. Responsibility and Artificial Intelligence (Virginia Dignum)

https://www.doi.org/10.1093/oxfordhb/9780190067397.013.12

  • Amershi, S., et al. (2019). Guidelines for human-AI interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–13). Association for Computing Machinery.
    • The authors propose 18 design guidelines for human-AI interaction that are validated through empirical evaluations, including a user study with 49 design practitioners. The user study verified the relevance of the proposed guidelines and revealed their limitations and opportunities for further research.
  • Ashrafian, H. (2015). Artificial intelligence and robot responsibilities: Innovating beyond rights. Science and Engineering Ethics, 21(2), 317-326. https://doi.org/10.1007/s11948-014-9541-0
    • The author aims to examine AI rights beyond the context of commensurate responsibilities and duties using philosophical perspectives. Comparisons to arguments surrounding the moral rights of animals are made. AI rights are also analyzed in regard to legal principles. The author argues that core tenets of humanity should be promoted in the development of AI rights.
  • Askell, A., et al. (2019). The role of cooperation in responsible AI development. Unpublished manuscript.
    • The authors argue that responsible and safe AI development requires collaboration between companies, because intense competitive pressure incentivizes AI companies to underinvest in safety. The authors then analyze several key factors that improve cooperation and use these to identify strategies for the responsible development of AI. 
  • Bandy, J. (2021). Problematic machine behavior: A systematic literature review of algorithm audits. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1-34.
    • The author provides a review of 500 English articles spanning 62 algorithmic audit studies. They describe how these studies have captured problematic behavior and areas that require future research attention. Finally, the author describes ingredients for a successful algorithmic audit in the future.
  • Baum, S. D. (2020). Social choice ethics in artificial intelligence. AI & Society, 35(1), 165–176. 
    • The author shows that the social choice approach to the ethics of AI has a weak normative basis because there is no single aggregate ethical view of society. The author proposes to instead focus on three sets of decisions: standing (whose ethics views to include), measurement (how to identify their views), and aggregation (how to combine individual views into a single view). The author details why those decisions have major consequences for AI behavior and should be considered in the initial AI design. 
  • Boden, M., et al. (2017).* Principles of robotics: Regulating robots in the real world. Connection Science, 29(2), 124–129. https://doi.org/10.1080/09540091.2016.1271400
    • The authors outline a framework of five ethical principles and seven high level messages for responsible robotics.
  • Brożek, B., & Jakubiec, M. (2017). On the legal responsibility of autonomous machines. Artificial Intelligence and Law, 25(3), 293-304. https://doi.org/10.1007/s10506-017-9207-8
    • The authors examine the question of whether autonomous machines can be seen as agents who have legal responsibility. They argue that although possible, these machines should not be granted the status of legal agents, at least at their current stage of development.
  • Chockler, H., & Halpern, J. Y. (2004). Responsibility and blame: A structural-model approach. Journal of Artificial Intelligence Research, 22(1), 93-115. https://www.aaai.org/Papers/JAIR/Vol22/JAIR-2204.pdf
    • The authors argue for extending the definition of causality to include the notion of degree of responsibility. They outline the concept of degree of blame, which accounts for the epistemic state of a given agent in a causal chain. They argue that degree of responsibility can act as a rough indicator for degree of blame.
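    The paper’s central quantities can be stated compactly (notation simplified here; a sketch, not the full structural-model formalism): if k is the size of the smallest set of additional changes needed to make X = x critical for an outcome phi, then

    ```latex
    \mathrm{dr}(X{=}x,\ \varphi) = \frac{1}{k+1}
    % Example: in an 11-0 majority vote, any single voter needs k = 5
    % other votes flipped to become critical, so each voter's degree of
    % responsibility is 1/6.
    % Degree of blame is then the expectation of dr over the agent's
    % epistemic state K (the situations u the agent considers possible):
    \mathrm{db} = \sum_{u \in K} \Pr(u)\,\mathrm{dr}_{u}(X{=}x,\ \varphi)
    ```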
  • Christian, B. (2020). The alignment problem: Machine learning and human values. WW Norton & Company.
    • The author investigates the problem of how to ensure machine learning systems stay aligned with human values, based on hundreds of interviews and conversations the author had with researchers in the field. The book is structured around the different challenges of the alignment problem. 
  • Christiano, P. F., et al. (2017). Deep reinforcement learning from human preferences. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS’17).
    • The authors propose a reinforcement learning algorithm that can solve complex tasks by learning from human preferences. This removes the need for access to a reward function and opens up the possibility of learning tasks that lack an explicit, simple reward function. The ability to learn directly from human preferences would also improve the alignment between an AI agent’s behavior and human values and preferences. 
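    The preference model at the core of the method can be stated compactly (notation lightly adapted from the paper): a learned reward function is fitted so that the predicted probability of a human preferring trajectory segment sigma-1 over sigma-2 is a softmax over summed rewards, and the reward function is trained by cross-entropy against the human’s actual comparisons.

    ```latex
    \hat{P}\left[\sigma^{1} \succ \sigma^{2}\right] =
      \frac{\exp \sum_{t} \hat{r}\left(o_{t}^{1}, a_{t}^{1}\right)}
           {\exp \sum_{t} \hat{r}\left(o_{t}^{1}, a_{t}^{1}\right)
            + \exp \sum_{t} \hat{r}\left(o_{t}^{2}, a_{t}^{2}\right)}
    ```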
  • Cranefield, S., et al. (2018). Accountability for practical reasoning agents. In International Conference on Agreement Technologies (pp. 33-48). Springer. https://doi.org/10.1007/978-3-030-17294-7_3
    • The authors begin by discussing the concept of “accountable autonomy” in light of the rise of practical reasoning AI, considering research from a range of fields including public policy, health, and management to clarify the term. The authors move on to provide a list of requirements for accountable autonomous agents and provide potential research questions that could result from these requirements. They conclude by proposing the formulation of responsibility as a new core feature of accountability. 
  • Dignum, V. (2017).* Responsible autonomy. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI’2017) (pp. 4698–4704). https://doi.org/10.24963/ijcai.2017/655
    • The author discusses leading ethical theories for ensuring ethical behavior by artificial intelligence systems and proposes alternatives to the traditional methods. The author argues that there must be methodologies employed to uncover values of both designers and stakeholders in order to create understanding and trust for AI systems.
  • Dignum, V. (2018).* Ethics in artificial intelligence: introduction to the special issue. Ethics and Information Technology, 20, 1–3. https://doi.org/10.1007/s10676-018-9450-z
    • This introduction provides an overview on the ethical impact of artificial intelligence, briefly summarizing the aims of the papers contained in the special issue.
  • Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer International Publishing.
    • The author considers the implications of AI’s rise in traditional social structures, including issues of integrity surrounding those who build and operate AI. They also provide an overview of related work and further reading in the field of ethical issues in modern algorithmic systems.
  • Dodig-Crnkovic, G., & Persson, D. (2008). Sharing moral responsibility with robots: A pragmatic approach. In P. K. Holst & P. Funk (Eds.), Frontiers in Artificial Intelligence and Applications Volume 173. IOS Press Books. https://doi.org/10.3233/978-1-58603-867-0-165
    • The authors outline an approach to roboethics that argues for the moral responsibility of AI as a pragmatic, social regulatory mechanism. Given that individual artificial intelligences perform tasks differently, they can in some sense be responsible for outcomes. The authors argue that the development of this social regulatory mechanism requires ethical training for engineers as well as democratic debate on what is best for society.
  • Eisenhardt, K. M. (1989).* Agency theory: An assessment and review. The Academy of Management Review, 14(1), 57–74. http://www.jstor.org/stable/258191?origin=JSTOR-pdf
    • The author provides a definition and analysis of agency theory. Eisenhardt draws two conclusions: first, that agency theory provides insight into information systems, outcome uncertainty, incentives, and risk; second, that agency theory has empirical value, especially when used with complementary perspectives. The author recommends that agency theory be used to combat problems stemming from cooperative structures.
  • Fern, A., et al. (2014). A decision-theoretic model of assistance. Journal of Artificial Intelligence Research, 50, 71–104.  
    • The authors formulate the problem of intelligent assistance in a decision-theoretic framework and propose several models for their problem formulation. They also present theoretical analysis and empirical evaluations.
  • Floridi, L. (2016).* Should we be afraid of AI? Aeon Essays.
    • This essay addresses concerns expressed by tech CEOs and consumers alike, that the development of super-intelligent AI could spell disaster for the human race. Current reality is much more trivial, with AI merely absorbing what is put in by humans. The author argues that we need to focus on concrete problems with AI, rather than sci-fi scenarios.
  • Floridi, L., & Sanders, J. (2004).* On the morality of artificial agents. Minds and Machines, 14(3), 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d
    • The authors offer a definition of the term agent, and highlight the concerns and responsibilities attributed to different types of agents, particularly artificial agents. They conclude by arguing that there is room in computer ethics for the concept of a moral agent that lacks free will, mental states, and/or responsibility.
  • Floridi, L., et al. (2018).* AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28, 689-707. https://doi.org/10.1007/s11023-018-9482-5
    • This article discusses the findings of AI4People, a study which aimed to lay the foundations for a Good AI society. The authors introduce core opportunities and drawbacks for AI society, laying out five ethical principles that should be considered in AI development. They also offer 20 recommendations for assessing, developing, and incentivizing the creation of good AI. 
  • Gotterbarn, D. W., et al. (2018).* ACM code of ethics: A guide for positive action. Communications of the ACM, 61(1), 121-128.
    • The authors provide the first update on the Association for Computing Machinery’s code of ethics since 2003, incorporating feedback from emails, focus groups, and workshops. This update is significant, as some principles from the 2003 version were removed entirely, and new principles were added.
  • Hadfield-Menell, D., et al. (2016). Cooperative inverse reinforcement learning. Advances in Neural Information Processing Systems, 29.
    • The authors formally define the problem of value alignment as cooperative inverse reinforcement learning (CIRL): a cooperative, partial-information game between a human and a robot, both incentivized to maximize the human’s reward even though the robot does not initially know what that reward is. The difference between CIRL and classical inverse reinforcement learning (IRL) is that IRL assumes the human acts optimally in isolation, whereas CIRL drops this assumption.
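    The formal setup can be summarized as follows (notation lightly adapted from the paper): a CIRL problem is a two-player Markov game with identical payoffs between a human H and a robot R.

    ```latex
    M = \left\langle S,\ \{A^{H}, A^{R}\},\ T,\ \{\Theta, R\},\ P_{0},\ \gamma \right\rangle
    % The shared reward R(s, a^H, a^R; \theta) depends on a parameter
    % \theta \in \Theta that is observed by the human but not by the
    % robot; the robot must infer \theta from the human's behavior while
    % both players act to maximize the same expected reward.
    ```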
  • Leikas, J., et al. (2019). Ethical framework for designing autonomous intelligent systems. Journal of Open Innovation: Technology, Market, and Complexity, 5(1), 18. https://doi.org/10.3390/joitmc5010018
    • The authors review existing ethical principles and analyze them in terms of their application to artificial intelligence. They then present an original ethical framework for AI design.
  • Examining the black box: Tools for assessing algorithmic systems. (2020). Ada Lovelace Institute & DataKind UK.
    • This report describes terminology and approaches for assessing algorithmic systems for societal impact, as well as for regulatory and normative compliance. Based on literature reviews across different disciplines, it provides details on two broad tools for assessing algorithmic systems: algorithm audits and algorithmic impact assessments. It discusses the merits of different assessment tools and the contexts in which each is helpful.
  • Pelea, C. I. (2019). The relationship between artificial intelligence, human communication and ethics. A futuristic perspective: Utopia or dystopia? Media Literacy and Academic Research, 2(1), 38-48.
    • The author examines the question of whether and to what extent our social parameters of communication will need to be re-drawn because of the rise of artificial intelligence. The author first discusses how humans and AI communicate on an individual level, then investigates the collective social anxiety surrounding the rise of AI and the ethical dilemmas this creates. The author argues that it is vital that we undertake the challenge of creating a culture of social responsibility surrounding AI.
  • Rakova, B., et al. (2021). Where responsible AI meets reality: Practitioner perspectives on enablers for shifting organizational practices. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1-23.
    • Based on interviews with industry practitioners, the authors offer a framework for analyzing how organizational culture impacts the effectiveness of responsible AI initiatives in practice. They discuss structures that support or hinder responsible AI initiatives and structures that would enable effective practices in the future, such as well-integrated organizational tools for large-scale responsible AI evaluations.
  • Russell, S., & Norvig, P. (2009).* Artificial intelligence: A modern approach (3rd ed.). Pearson Education.
    • The authors provide an introduction to the theory and practice of artificial intelligence that is comprehensive and up to date.
  • Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Penguin.
    • The author focuses on the fundamental problem of value alignment in AI systems and argues that the current standard model of AI research by default would produce unsafe AI systems that are misaligned with human values. The author then proposes and analyzes three principles to guide the development of beneficial AI that are aligned with human values.
  • Stone, P., et al. (2016).* Artificial intelligence and life in 2030: Report of the 2015-2016 Study Panel. Stanford University.
    • The One Hundred Year Study on Artificial Intelligence, launched in 2014, aims to provide a long-term investigation into AI and its effect on social groups and society at large. This is the first report to come out of the project; it discusses ways to frame the project in light of recent advances in AI technology, specifically in the public sector.
  • Saariluoma, P., & Leikas, J. (2019). Ethics in designing intelligent systems. International Conference on Human Interaction and Emerging Technologies, 1018, 47-52. Springer. https://doi.org/10.1007/978-3-030-25629-6_8
    • Hume’s guillotine, which argues that one can never derive values from facts, suggests that artificial intelligence systems can never be ethical, as they operate based on facts. The authors argue that Hume’s distinction between facts and values is not well founded, as ethical systems are composed of rules meant to guide actions, which act as a combination of both facts and values. While machines can be built to process ethical information, the authors argue that human input is still vital at this point in time.  
  • Turiel, E. (2002).* The culture of morality: Social development, context, and conflict. Cambridge University Press.
    • The author challenges the common view that extreme individualism and a subsequent lack of community involvement are responsible for the moral crisis in American society, drawing on research from developmental psychology, anthropology, and sociology. The author argues that each subsequent generation has attributed decline in society to the actions of young people.
  • Ziegler, D. M., et al. (2020). Fine-tuning language models from human preferences. Unpublished manuscript. 
    • The authors fine-tune large language models from human preferences on the tasks of continuing text with positive sentiment and of summarization. Their analysis examines the underlying human preferences that the learned reward models capture and how those preferences shape the fine-tuned models’ behavior on these tasks.
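    The comparison loss underneath such reward-model fitting can be sketched in a pairwise form; the paper’s labelers compare several samples at once, so the pairwise Bradley-Terry-style variant below, the linear toy model, and the random features are illustrative stand-ins, not the authors’ code.

    ```python
    # Pairwise preference loss: -log sigmoid(r(preferred) - r(rejected)).
    import torch
    import torch.nn.functional as F

    def preference_loss(reward_model, preferred, rejected):
        """Bradley-Terry-style loss over a batch of human comparisons."""
        r_pref = reward_model(preferred)  # shape: (batch,)
        r_rej = reward_model(rejected)    # shape: (batch,)
        return -F.logsigmoid(r_pref - r_rej).mean()

    # Toy stand-in: a linear reward model over fixed-size text features.
    reward_model = torch.nn.Sequential(
        torch.nn.Linear(16, 1), torch.nn.Flatten(start_dim=0))
    preferred, rejected = torch.randn(4, 16), torch.randn(4, 16)
    print(preference_loss(reward_model, preferred, rejected))
    ```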

Chapter 12. The Concept of Handoff as a Model for Ethical Analysis and Design (Deirdre K. Mulligan and Helen Nissenbaum)

https://www.doi.org/10.1093/oxfordhb/9780190067397.013.15

  • Akrich, M., & Latour, B. (1992).* A summary of a convenient vocabulary for the semiotics of human and nonhuman assemblies. In W. Bijker & J. Law (Eds.), Shaping technology/building society: Studies in sociotechnical change (pp. 259–264). MIT Press.
    • Structured as a dictionary list illuminated by examples, this article provides a comprehensive semiotic vocabulary for engagement with the topic of human and non-human assemblies. The authors explore the continuum between human and non-human through the description of all as actants, placed into specific categories by framing paradigms. The authors emphasize the role of observer, context, and perspective in subjective understandings of object, relation, interaction, function, and purpose.  
  • Bansal, K., et al. (2019). HOList: An environment for machine learning of higher order logic theorem proving. In International Conference on Machine Learning (pp. 454-463). PMLR. http://proceedings.mlr.press/v97/bansal19a.html
    • This paper presents a machine learning oriented, open-source environment for higher-order theorem proving, as well as a neural network-based automated prover that is trained on a large-scale reinforcement learning system. The authors suggest a benchmark for machine reasoning in higher-order logic. The proposed benchmark includes purely neural network-based baselines that demonstrate strong automated reasoning capabilities, including premise selection from a relatively large and practically relevant corpus of theorems with varying complexity.
  • Barr, N., et al. (2015). The brain in your pocket: Evidence that smartphones are used to supplant thinking. Computers in Human Behavior, 48, 473–480. https://doi.org/10.1016/j.chb.2015.02.029
    • Examining a familiar but perhaps not fully understood example of task handoff, this paper discusses findings that people offload some thinking to technology. To adequately characterize human experience and cognition in the modern era, the authors argue, psychology must understand the meshing of mind and media and the ways in which such handoffs take place. The authors report three studies and find that those who think more intuitively and less analytically when given reasoning problems were more likely to rely on their smartphones (i.e., the extended mind) for information in their everyday lives. 
  • Borenstein, J., & Arkin, R. (2016). Robotic nudges: The ethics of engineering a more socially just human being. Science and Engineering Ethics, 22(1), 31–46. https://doi.org/10.1007/s11948-015-9636-2
    • This paper engages with the ethics of “nudge” interactions between human actors and autonomous agents, and whether it is permissible to design these machines to promote “socially just” tendencies in humans. Employing a Rawlsian “principles of justice” framework, the authors explore arguments for and against nudges more broadly, and act specifically to analyze whether robotic nudges are morally or practically different from other kinds of decision architecture. They also put forth ethical principles for those seeking to design such systems.
  • Brownsword, R. (2011).* Lost in translation: Legality, regulatory margins, and technological management. Berkeley Technology Law Journal, 26(3), 1321–1365. https://www.jstor.org/stable/24118672
    • This article discusses the role of regulation and the law in the translation from a traditional legal order (wherein participants can act in a multitude of ways but are normatively constrained by legal rules) to a “technologically managed” order (wherein individuals are restricted to certain actions by the nature of the technology used to carry out those actions). The topic is explored through the lenses of a shift on the part of the regulated party from “moral” to “prudential” motivations for action, and further a shift on the part of the regulation from normative to non-normative purpose. 
  • Calo, R. (2015). Robotics and the lessons of cyberlaw. California Law Review, 103(3), 513–563. https://digitalcommons.law.uw.edu/faculty-articles/23
    • This article explores the potential implications of cyberlaw. Examining robotics as an “exceptional” technology with the potential to qualitatively and quantitatively shift socio-technical contexts, the author argues that the discipline of cyberlaw (developed in response to the similarly “exceptional” technology of the internet) provides essential insights for responding to the challenges that robots introduce.
  • Cohen, J. E. (2006). Pervasively distributed copyright enforcement. Georgetown Law Journal, 95(1), 1–48. https://scholarship.law.georgetown.edu/facpub/808
    • This article discusses the impact of strategies of “pervasively distributed copyright enforcement,” whereby intellectual property rights holders seek to embed intellectual property enforcement functions within foundational communications networks, protocols, and devices. The author characterizes these attempts as a “hybrid regime” that neither aligns with centralized authority nor with distributed internalized norms. The author explores the observed and potential impacts of this “hybrid regime” on networked society.
  • Coglianese, C., & Lehr, D. (2016). Regulating by robot: Administrative decision making in the machine-learning era. Georgetown Law Journal, 105(5), 1147–1224. https://scholarship.law.upenn.edu/faculty_scholarship/1734
    • This paper engages in critical legal and ethical analysis of the present and future role of machine learning algorithms in decision-making by administrative bodies. The authors examine constitutional and administrative law challenges to the role of autonomous agents in this context; the authors conclude that the use of such agents is likely to be legal but will only be ethical if certain important principles are adhered to.
  • Elish, M. C. (2019). Moral crumple zones: Cautionary tales in human-robot interaction. Engaging Science, Technology, and Society, 5, 40–60. https://doi.org/10.17351/ests2019.260
    • This paper explores the balance of ethical weight within sociotechnical systems through the concept of a “moral crumple zone.” This refers to human actors with ostensible authority (but little meaningful power) over a complex human-machine system who are set up to take disproportionate individual responsibility for failings in systemic structure and design. The author develops this concept by analyzing several high-profile accidents, their antecedent systemic structures, and the subsequent media portrayals of the actors involved. 
  • Flanagan, M., & Nissenbaum, H. (2014).* Values at play in digital games. MIT Press.
    • This book develops a theoretical and practical framework for critically identifying the moral and political values embedded within games. In framing a value-sensitive conception of digital games, the authors discuss how particular values can be incorporated within digital game design.
  • Friedman, B. (1996).* Value-sensitive design. Interactions, 3(6), 16–23. https://doi.org/10.1145/242485.242493
    • This article engages with the argument that values are always both embedded within and emergent from the ways in which tools are built and used. The authors advocate subsequently for principles of “value-sensitive design” wherein designers are explicitly called upon to engage actively and thoughtfully with these values and their implications. The topics of user autonomy and system bias are used as the primary case studies for exploring the concept. 
  • Friedman, B., et al. (2017).* A survey of value sensitive design methods. Foundations and Trends in Human-Computer Interaction, 11(2), 63–125. https://doi.org/10.1561/1100000015
    • This article comprises a broad theoretical and methodological discussion of “value sensitive design” alongside a specific survey of 14 different methods for actualizing the concept.  The authors seek to evaluate each method for its role and usefulness in engaging with a particular aspect of “value sensitive design” in practice, as well as to offer general insights about the core characteristics of the concept of “value sensitive design” overall. 
  • Giuffrida, I. (2019). Liability for AI decision-making: Some legal and ethical considerations. Fordham Law Review, 88(2), 439-456.
    • This article explores various legal implications in the use of AI, with a focus on liability risks. It discusses both the reliance on and delegation of tasks to AI. The article proposes a framework for addressing whether these liability challenges introduced by the use of AI require a new approach in light of the varying degrees of human involvement.
  • Hernández-Orallo, J., & Vold, K. (2019). AI extenders: The ethical and societal implications of humans cognitively extended by AI. In AIES ’19: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 507–513). Association for Computing Machinery. https://doi.org/10.1145/3306618.3314238 
    • Observing that there is a great deal of AI technology found in non-autonomous systems that are used as cognitive tools by humans, this paper considers that under the extended mind thesis, the functional contributions of these tools become essential to human cognition. This cognitive extension poses new philosophical, ethical, and technical challenges. To analyze these challenges, the authors define and place “AI extenders” on a continuum between fully externalized systems and fully internalized processes, where the extender becomes redundant within operations performed by the brain. Dissecting the cognitive capabilities that can foreseeably be extended by AI, and examining their potential ethical implications, the authors suggest that cognitive extenders using AI should be treated as distinct from other cognitive enhancers.
  • Huang, M.-H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155–172. https://doi.org/10.1177/1094670517752459
    • Identifying categories of human-AI task handoff, this paper presents a theory of AI-human job replacement. The theory specifies four intelligences required for service tasks (mechanical, analytical, intuitive, and empathetic) and lays out ways that firms could decide how to assign specific tasks to humans and/or machines. The authors state that AI is developing in a predictable order, with mechanical task capacity mostly preceding analytical task capacity, analytical mostly preceding intuitive, and intuitive mostly preceding empathetic intelligence. AI first replaces some of a service job’s tasks, a transition stage seen in terms of augmentation, and then in some cases progresses to replace human labor entirely. Implications of this theory point to AI replacement of humans in certain tasks, with other tasks becoming sites of innovative human–machine integration. 
  • Joh, E. E. (2016). Policing police robots. UCLA Law Review Discourse, 64, 516–543. https://www.uclalawreview.org/policing-police-robots/
    • This paper examines the potential impacts of artificially intelligent robots on policing through legal and ethical lenses. The author analyzes arguments in favor of and against the adoption of robots by police agencies, arguing that these case studies raise deeper questions about police decision-making that have not yet been systematically or effectively addressed. The author explains how task handoff by law enforcement raises considerations different from those raised by other technologies. 
  • Kroll, J. A. (2021). Outlining traceability: A principle for operationalizing accountability in computing systems. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 758-771). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445937
    • This article reframes existing discussions around traceability as a principle for operationalizing accountability in computing systems. Traceability requires establishing not only how a system works, but how and for what purpose it was created. Explaining why a system exhibits particular behaviors connects how a system was constructed to the broader goals of system governance in a way that highlights human understanding of a system’s mechanical operation and the decision processes underlying it. 
  • Lake, B. M., et al. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40, e253. https://doi.org/10.1017/S0140525X16001837
    • This paper suggests that in order to build machines that truly think and learn like people, developers must move beyond current engineering trends. Despite biological inspiration and performance achievements, the authors state, neural networks differ from human intelligence in crucial ways. The authors argue that learning systems should (1) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (2) ground learning in intuitive theories of physics and psychology to support and enrich the knowledge that is learned; and (3) harness compositionality and learning-to-learn elements to rapidly acquire and generalize knowledge to new tasks and situations.  
  • Lappin, S., & Shieber, S. M. (2007). Machine learning theory and practice as a source of insight into universal grammar. Journal of Linguistics, 43, 393–427. https://doi.org/10.1017/S0022226707004628
    • This paper examines whether and how machine learning approaches to natural language processing might provide specific insights into the nature of human language. The authors state that while it is uncontroversial that the learning of a natural language (or of anything else) requires assumptions concerning the structure of the phenomena being acquired, machine learning can have a role in demonstrating the viability of particular language models as learning mechanisms. To the extent that the bias of a successful model is defined by a comparatively weak set of language-specific conditions, the authors state, task-general machine learning methods might be drawn upon to explain the possibility of acquiring linguistic knowledge.
  • Latour, B. (1992).* Where are the missing masses? The sociology of a few mundane artifacts. In W. Bijker & J. Law (Eds.), Shaping technology/building society: Studies in sociotechnical change (pp. 225–258). MIT Press.
    • This chapter engages with the “technological determinism/social constructivism dichotomy” through the concept of the “actor network approach.” This approach seeks to emphasize the bidirectionality of the interactions between social actors and technological actors in sociotechnical systems, arguing that physical structure and design of the material world acts to shape and limit the boundaries of its social construction. With a focus upon “mundane artifacts,” the author explores the ways in which technologies act to influence the thoughts and decisions of human actors. 
  • Lessig, L. (2009).* Code: And other laws of cyberspace. Basic Books.
    • This book engages in a comprehensive discussion of the structure and regulation of the internet, with a focus upon the impact of the four forces of “Law, Norms, Market, and Architecture.” In particular, the author argues that the computer code which defines the structure and function of the internet acts to shape and regulate the conduct of its users in much the same way that traditional regulatory instruments such as legal codes do. 
  • Liu, J., et al. (2020). Time to transfer: Predicting and evaluating machine-human chatting handoff. arXiv:2012.07610v1 
    • Addressing the question of how easily a trained chatbot might replace a human agent in human-algorithm task collaboration, this paper reports experimental results contrasting the efficacy of a proposed model for Machine-Human Chatting Handoff with a series of baseline models. The authors propose a Difficulty-Assisted Matching Inference network, which uses difficulty-assisted encoding to enhance the representations of utterances; a matching inference mechanism is further introduced to capture contextual matching features. New datasets generated by this work point to future measurement of efficacy on the reverse-handoff task, that is, handoff from the human agent to the machine.
  • Lynn, L. A. (2019). Artificial intelligence systems for complex decision-making in acute care medicine: A review. Patient Safety in Surgery, 13(6). https://doi.org/10.1186/s13037-019-0188-2
    • This article explores the trade-offs present in decision-making using artificial intelligence; when tasks are handed off to machines in high-stakes settings, there are many procedural and legal considerations to be made. The author highlights the utility of AI-based pattern analysis of images in acute care, and discusses how medical education should transition and develop in its use of AI. 
  • Neff, G., & Nagy, P. (2016). Automation, algorithms, and politics | Talking to bots: Symbiotic agency and the case of Tay. International Journal of Communication, 10, 4915–4931. https://ijoc.org/index.php/ijoc/article/view/6277
    • This paper considers Tay, an experimental artificial intelligence chatbot that Microsoft launched in 2016. In Tay’s case, a group of organized users and a platform-specific culture turned code that functioned well in other contexts into an embarrassment for the designers who produced it; Tay learned from and echoed the obscene and inflammatory tweets that were fed into it. Using phenomenological research methods and pragmatic approaches to agency, the authors look at what users said about Tay to gauge how users imagine and interact with emerging technologies. This examination, the authors state, shows the limitations of current theories of agency for describing communication handoff in these settings. The authors argue that a perspective of “symbiotic agency,” informed by the imagined affordances of emerging technology, is required to understand human-algorithmic communication.
  • Radin, M. (2004).* Regulation by contract, regulation by machine. Journal of Institutional and Theoretical Economics, 160(1), 142–156. https://www.jstor.org/stable/40752447
    • The article concerns the impacts of mass standardized contracts and digital rights management systems on how property and contract law regulate intellectual property. The author examines the impacts of these technologies on the underlying knowledge-generation incentives of intellectual property, on the distinction between waivable rules and inalienable entitlements, and on the role of legislative approval of “regulation by machine.”
  • Radziwill, N., & Benton, M. (2017). Evaluating quality of chatbots and intelligent conversational agents. Software Quality Professional, 19(3), 25.
    • This paper provides an overview of academic literature since 1990 and industry articles since 2015 that gather and articulate quality attributes for chatbots and conversational agents, and synthesizes quality assessment and assurance approaches. The authors propose and examine the Analytic Hierarchy Process (AHP) as a structured approach for navigating complex decision-making processes that involve both qualitative and quantitative considerations; a minimal illustrative sketch of the AHP weighting step follows this entry.
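    • A minimal sketch of the AHP weighting step, assuming NumPy; the attribute names and pairwise judgments below are invented for illustration, not taken from the paper:

      import numpy as np

      # Hypothetical chatbot quality attributes, compared pairwise on
      # Saaty's 1-9 scale: how much more important is the row attribute
      # than the column attribute?
      attributes = ["efficiency", "effectiveness", "satisfaction"]
      A = np.array([
          [1.0, 3.0, 5.0],
          [1/3, 1.0, 3.0],
          [1/5, 1/3, 1.0],
      ])

      # Priority weights are the principal eigenvector of the matrix.
      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      weights = eigvecs[:, k].real
      weights /= weights.sum()

      # Consistency ratio (CR < 0.1 is conventionally acceptable);
      # 0.58 is Saaty's random index for a 3x3 matrix.
      n = len(A)
      cr = ((eigvals[k].real - n) / (n - 1)) / 0.58

      for name, w in zip(attributes, weights):
          print(f"{name}: {w:.3f}")
      print(f"consistency ratio: {cr:.3f}")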
  • Schaub, G., Jr. (2019). Controlling the autonomous warrior: Institutional and agent-based approaches to future air power. Journal of International Humanitarian Legal Studies, 10(1), 184–202. https://doi.org/10.1163/18781527-01001007
    • Working through both institution-centric and agent-centric lenses, this article engages with the legal and ethical challenges posed by the handoff of lethal power to increasingly autonomous weapons systems. The author argues that artificial intelligence is not unprecedented in its ability to change the structure of warfare and contends that past work in understanding the ethical and legal relationships between principals and agents may be effectively adapted to characterizing and addressing these new challenges.
  • Shilton, K., et al. (2014).* How to see values in social computing: Methods for studying values dimensions. In CSCW ’14: Computer Supported Cooperative Work and Social Computing (pp. 426–435). https://terpconnect.umd.edu/~kshilton/pdf/ShiltonCSCW2014preprint.pdf
    • This article presents a framework for understanding the nature and role of values in sociotechnical systems. The authors advocate for the theoretical characterization of values upon a system of “source dimensions” (describing the origins of values) and “attribute dimensions” (describing the traits of values). In relation to this framework, the authors examine the effectiveness of different lenses, such as ethnographies and content analyses, by which to study values in social computing.
  • Surden, H. (2007).* Structural rights in privacy. SMU Law Review, 60(4), 1605–1632. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1004675
    • This paper asserts that privacy rights are not regulated explicitly by the law, but rather are implicitly and primarily regulated by the presence of latent structural constraints that impose transaction costs upon the violation of privacy. Substantial components of privacy become vulnerable, the author states, as technology acts to reduce the magnitude of these structural constraints; the author suggests a conceptual framework for identifying and responding to specific contexts of such vulnerability.
  • Susser, D., et al. (2019). Technology, autonomy, and manipulation. Internet Policy Review, 8(2). https://www.doi.org/10.14763/2019.2.1410
    • This article explores the “online manipulation” that is alleged to occur when powerful technology companies use algorithms to shape online experiences. The authors argue that such practices may be harmful both consequentially (in their impacts on the ethical and economic interests of users and society at large) and deontologically (indirectly threatening individual autonomy), as they aim to evoke specific behaviors in the user. The authors situate their discussion within an examination of the Cambridge Analytica and Facebook scandal, and within the broader issue of election manipulation.
  • Umbrello, S., & De Bellis, A. F. (2018). A value-sensitive design approach to intelligent agents. In R. Yampolskiy (Ed.), Artificial Intelligence Safety and Security (pp. 395–410). CRC Press.
    • This chapter discusses the methodology of “value-sensitive design” and its implications for the design and implementation of artificially intelligent systems. In seeking to identify opportunities and limits in adapting value-sensitive design to the specific challenge of working with AI, the authors argue that value sensitivity must be proactively embedded throughout the entire AI development process.
  • Wang, D., et al. (2021). How much automation does a data scientist want? arXiv:2101.03970v1
    • This paper documents an IBM research team’s findings. The team proposed a human-in-the-loop AutoML framework with four dimensions (roles, stages, levels of automation, and types of explanation) and used the framework to design a large-scale online survey gathering perspectives from data science and machine learning practitioners. The authors discovered a notable gap between the automation level in people’s current work practice and the automation level that they would prefer in the future. They caution, however, that research and development efforts should be directed to the specific needs of various user personas, as the appropriate level of automation and type of explanation vary with, for example, the user, the lifecycle stage the user works in, and the task at hand. The authors therefore discourage fully automated data science and machine learning, preferring a human-in-the-loop explainable system.
  • Winner, L. (1980).* Do artifacts have politics? Daedalus, 109(1), 121–136. https://www.jstor.org/stable/20024652
    • This article argues that as power relations are embodied within technologies, artifacts themselves are imbued with politics. In support of this thesis, the author first discusses instances in which a specific technical device becomes a way of settling an issue in a particular community, and thereby acts to shape the power relations within that community. Second, the author contends that some technologies are inherently political in that they either require or are strongly compatible with certain kinds of political relationships.
  • Zerilli, J., et al. (2019). Algorithmic decision-making and the control problem. Minds and Machines, 29(4), 555–578. https://doi.org/10.1007/s11023-019-09513-7
    • This paper discusses the “control problem,” wherein it is difficult for human actors to maintain meaningful oversight and control of largely automated systems. The authors build on a body of industrial-organizational psychology work and extend the topic to modern algorithmic actors, offering both a theoretical framework for understanding the problem and a series of design principles for overcoming it in human-machine systems.

Chapter 13. Race and Gender (Timnit Gebru)

https://www.doi.org/10.1093/oxfordhb/9780190067397.013.16

  • Adams, R. (2021). Can artificial intelligence be decolonized? Interdisciplinary Science Reviews, 46(1-2), 176–197. https://doi.org/10.1080/03080188.2020.1840225
    • This paper argues that AI can be decolonized, but only if the ways in which AI perpetuates colonialism are acknowledged and remedied; these include how AI is built on Western notions of intelligence, ethics, and power. The paper further discusses how one of the main goals of AI is classifying people based on different characteristics, in what Adams calls ‘dividing practices.’ This closely resembles how statistics were used in European colonies to divide people, allowing AI to perpetuate racism and colonialism.
  • Amrute, S. (2019). Of techno-ethics and techno-affects. Feminist Review, 123(1), 56–73. https://doi.org/10.1177/0141778919879744  
    • This article considers the current state of digital labor conditions and identity formation, including uneven geographies of race, gender, class, ability, and histories of colonialism and inequality. The author highlights specific cases in which digital labor frames embodied subjects and proposes new ways in which digital laborers might train themselves to be empowered to identify emergent ethical concerns, using the concept of attunement as a framework for care. Predictive policing, data mining, and algorithmic racism are discussed, as is the urgency to include digital laborers in the design and analysis of algorithmic technologies and platforms. 
  • Angwin, J., et al. (2016).* Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
    • This investigative report documents and analyzes racial bias against black defendants in algorithmic criminal risk score systems, such as COMPAS, used by courts and parole boards in the United States to forecast future criminal behavior. The authors describe how such algorithmic formulas were written in a way that promotes racial disparity, resulting in black defendants being inaccurately identified as future criminals more frequently than white defendants. The report strongly suggests that bias is inherent in all actuarial risk assessment instruments (ARAIs), and that widespread audits and reassessments are necessary.
  • Atanasoski, N., & Vora, K. (2019). Surrogate humanity: Race, robots, and the politics of technological futures. Duke University Press. https://www.dukeupress.edu/Assets/PubMaterials/978-1-4780-0386-1_601.pdf
    • This book traces the ways in which robots, artificial intelligence, and other technologies serve as surrogates for human workers within a labor system defined by racial capitalism and patriarchy. The authors analyze technologies including sex robots, military drones, and sharing-economy platforms to illustrate how liberal structures of antiblackness, settler colonialism, and patriarchy are fundamental to human and machine interactions. Through a critical feminist and science and technology studies (STS) analysis of contemporary digital labor platforms, the authors address the global racial and gendered erasures underlying techno-utopian fantasies of a post-labor society and consider the definitions of what it means to be a human.
  • Benjamin, R. (2019).* Race after technology: Abolitionist tools for the New Jim Code. Polity. https://www.ruhabenjamin.com/race-after-technology
    • Using critical race theory, this book analyzes how current technologies can and have reinforced White supremacy and increased social inequalities. The author introduces the concept of “The New Jim Code” as a means of describing how a wide range of discriminatory designs can (a) encode inequity by amplifying racial hierarchies, (b) ignore and replicate social divisions, and (c) inadvertently reinforce racial biases while intending to fix them. This book concludes with an overview of conceptual strategies, including tech activism and abolitionist tools, that might be used to disrupt and rectify current and future technological design.
  • Bolukbasi, T., et al. (2016).* Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems, 29, 4349–4357. https://proceedings.neurips.cc/paper/2016/hash/a486cd07e4ac3d270571622f4f316ec5-Abstract.html
    • This article examines the presence of gender bias within word embedding, a popular framework that represents text data as vectors and is used in many machine learning and natural language processing tasks. The authors found that gender bias and stereotyping, in line with greater societal bias, are common in many word embedding models, even those trained on large data sets such as Google News articles. The article provides an algorithmic methodology for modifying embeddings to remove gender stereotypes while maintaining desired associations; a minimal sketch of the core projection step follows this entry.
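    • A minimal sketch of the neutralization step, assuming NumPy; the toy vectors are invented, and the paper’s full method also learns the gender subspace from many word pairs and equalizes pairs such as “grandmother”/“grandfather”:

      import numpy as np

      def gender_direction(emb, pairs=(("she", "he"), ("woman", "man"))):
          # Estimate a gender direction from normalized difference vectors.
          diffs = [emb[a] - emb[b] for a, b in pairs]
          g = np.mean([d / np.linalg.norm(d) for d in diffs], axis=0)
          return g / np.linalg.norm(g)

      def neutralize(vec, g):
          # Remove the component of `vec` along the gender direction.
          return vec - np.dot(vec, g) * g

      # Toy 4-dimensional embeddings (hypothetical values).
      emb = {
          "she": np.array([1.0, 0.2, 0.0, 0.1]),
          "he": np.array([-1.0, 0.2, 0.0, 0.1]),
          "woman": np.array([0.9, 0.4, 0.3, 0.0]),
          "man": np.array([-0.9, 0.4, 0.3, 0.0]),
          "programmer": np.array([-0.5, 0.8, 0.6, 0.2]),
      }

      g = gender_direction(emb)
      debiased = neutralize(emb["programmer"], g)
      print("projection before:", np.dot(emb["programmer"], g))
      print("projection after: ", np.dot(debiased, g))  # ~0 by construction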
  • Broussard, M. (2018).* Artificial unintelligence: How computers misunderstand the world. MIT Press. https://doi.org/10.7551/mitpress/11022.001.0001
    • This book describes society’s relationship with technology in the contemporary moment, taking a critical stance on how much computers are relied upon for daily tasks. This reliance, the author states, has prompted an overproduction of poorly designed and harmful systems. Through a series of interactions with current technologies, such as driverless cars and machine learning models, the author defines limits for which technology should and should not be applied, arguing against the prevalent framework of technochauvinism, which upholds that technology is the solution to any and all problems. 
  • Buolamwini, J., & Gebru, T. (2018).* Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (pp. 77–91). PMLR. http://proceedings.mlr.press/v81/buolamwini18a.html
    • This conference paper investigates race and gender discrimination in machine learning algorithms, presenting an approach to the evaluation of bias in automated facial analysis algorithms and datasets with respect to the identification of phenotypic subgroups. The authors conclude that the darker-skinned females within their datasets were the most misclassified group, indicating substantial disparities in the accuracies of classifying individuals with varying skin types. As the authors stress, such biases require immediate attention in order to ensure that fair, transparent, and accountable facial analysis algorithms are built into commercial technologies. A minimal sketch of this kind of subgroup audit follows this entry.
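    • A minimal illustrative sketch of an intersectional accuracy audit in the spirit of the paper, using invented records rather than the authors’ benchmark:

      from collections import defaultdict

      # Hypothetical records: (true_gender, predicted_gender, skin_type).
      records = [
          ("female", "female", "darker"), ("female", "male", "darker"),
          ("male", "male", "darker"),     ("female", "female", "lighter"),
          ("male", "male", "lighter"),    ("female", "female", "lighter"),
      ]

      totals, correct = defaultdict(int), defaultdict(int)
      for true, pred, skin in records:
          group = (true, skin)  # intersectional subgroup
          totals[group] += 1
          correct[group] += (true == pred)

      # Report per-subgroup accuracy to expose disparities.
      for group in sorted(totals):
          print(group, f"accuracy = {correct[group] / totals[group]:.2f}")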
  • Chun, W. H. K. (2009). Introduction: Race and/as technology; Or, how to do things to race. Camera Obscura, 70(24). https://doi.org/10.1215/02705346-2008-013
    • This article discusses the interconnections between race and technology, and the various ways in which race can be defined and operationalized through societal and cultural understandings. Framing the discussion in past and current critical theory, the author describes race as a technique that is carefully constructed through a historical understanding of tools, mediation, and framings that build identity and history. In conclusion, the author states that in order to disrupt the concept of race, concepts such as nature/culture, privacy/publicity, self/collective, and media/society need to be reframed as well.
  • Dankwa-Mullan, I., et al. (2021). A proposed framework on integrating health equity and racial justice into the artificial intelligence development lifecycle. Journal of Health Care for the Poor and Underserved, 32(2), 300–312. https://doi.org/10.1353/hpu.2021.0065
    • This paper outlines a framework for developing AI healthcare tools in a way that minimizes racial biases, in the hopes that this will help promote health equity and racial justice. The framework devotes substantial attention to conversations with patients, in order to ensure that the needs of the target population are met, that the AI system is developed in a way that corrects algorithmic bias, that it follows user-centered design-justice principles, and that it is continually monitored and updated once deployed.
  • D’Ignazio, C., & Klein, L. F. (2020). Data feminism. MIT Press.
    • This book presents principles for a feminist approach to data science. First, the authors propose an intersectional lens focused on the matrix of domination in order to analyze data science as a form of power relations. The authors argue that multiple forms of knowledge are needed for the field of data science to engage critically with the gender binary and other forms of classification. Furthermore, the book highlights that data is not neutral and objective and a feminist interpretation of it therefore requires a plurality of worldviews and a contextual analysis.  The book concludes by pointing to the often invisible labor needed to create, transform, and maintain data, and calling for these workers to receive more dignified treatment.
  • Eubanks, V. (2018).* Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press. https://virginia-eubanks.com/books/
    • Considering the historic context of austerity, this book documents the use of digital technologies for distributional decision-making of social service delivery to poor and disadvantaged populations in the United States. Using ethnographic and interview methods, the author investigates the impact of automated systems such as Medicaid and Temporary Assistance for Needy Families, and electronic benefit transfer cards, stating that such systems, while expensive, are often less effective, and regularly reproduce and aggravate bias, equity disparities, and state surveillance of the poor. The author speaks to legacy system prejudice and the ‘social specs’ that underlie our decision-systems and data-sifting algorithms and offers a number of participatory design solutions, including empathy through co-design, transparency, access, and control of information. 
  • Gangadharan, S. P. (Ed.). (2014). Data and discrimination: Collected essays. Open Technology Institute, New America Foundation. https://www.newamerica.org/oti/data-and-discrimination/ 
    • This book brings together work from eighteen researchers from various backgrounds looking at discriminatory impacts of big data and algorithms. Three themes are discussed: (1) discovering and responding to harms; (2) participation, presence, and politics; and (3) fairness, equity, and impact. Many of the authors in this collection remark that there is a gap in public awareness of the extent to which algorithms influence people’s daily lives.
  • Gebru, T., et al. (2021). Datasheets for datasets. Communications of the ACM 64(12), 86-92. https://doi.org/10.1145/3458723 
    • This paper proposes datasheets to document the creation, use, and transformation of datasets for machine learning. One of the issues in the AI industry is that the origins of datasets are not documented, making it difficult to assess the ethics of how they have been collected. The authors hope that by documenting datasets using datasheets, AI practitioners will provide more transparency in the process of algorithmic development. A minimal sketch of a datasheet skeleton follows this entry.
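    • A minimal illustrative sketch of a machine-readable datasheet skeleton, loosely following the section headings proposed in the paper; the paper poses its questions in prose, and the questions below are paraphrased examples, not the full set:

      # Each section pairs a field with the question a dataset creator answers.
      datasheet = {
          "motivation": {
              "purpose": "For what purpose was the dataset created?",
          },
          "composition": {
              "instances": "What do the instances represent?",
              "subpopulations": "Does the dataset identify subpopulations?",
          },
          "collection_process": {
              "acquisition": "How was the data acquired?",
              "consent": "Did individuals consent to collection?",
          },
          "preprocessing": {
              "cleaning": "Was any preprocessing/cleaning/labeling done?",
          },
          "uses": {
              "intended": "What tasks could the dataset be used for?",
              "out_of_scope": "Are there tasks it should not be used for?",
          },
          "distribution": {
              "license": "Under what license is it distributed?",
          },
          "maintenance": {
              "contact": "Who supports and maintains the dataset?",
          },
      }

      for section, questions in datasheet.items():
          print(section, "->", list(questions))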
  • Hamidi, F., et al. (2018).* Gender recognition or gender reductionism?: The social implications of embedded gender recognition systems. In Proceedings of the ACM 2018 CHI Conference on Human Factors in Computing Systems, Montreal, Canada. https://doi.org/10.1145/3173574.3173582
    • This article investigates the social implications of automatic gender recognition (AGR) and computational methods within the transgender community. The authors interview thirteen transgender individuals, including three technology designers, to document current perceptions and attitudes towards AGR. The article concludes that transgender individuals have strong negative attitudes towards AGR, questioning whether it can accurately identify their gender. Privacy and potential harms are discussed with respect to the impacts of being misidentified, and the authors include design recommendations to accommodate gender diversity.
  • Hamilton, A. M. (2020). A genealogy of critical race and digital studies: Past, present, and future. Sociology of Race and Ethnicity, 6(3), 292–301. https://doi.org/10.1177/2332649220922577
    • In this literature review, the author retraces recent developments in critical race theory and digital studies. The author argues that internet companies and their products have taken a color-blind approach to racism, sexism, and other forms of discrimination. Furthermore, the author argues that early publications on digital studies have focused on a digital divide to account for access to technology. The review focuses on how a critical race approach to digital studies allows for the analysis of existing inequalities in technology that have been rendered invisible by previous color-blind approaches.
  • Hanna, A., et al. (2020). Towards a critical race methodology in algorithmic fairness.  Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 501-512.  https://doi.org/10.1145/3351095.3372826 
    • The authors argue that current algorithmic fairness frameworks treat race as a fixed attribute and fail to account for race as a socially constructed concept, which, in turn, minimizes the structural aspects of algorithmic unfairness. Building from critical race theory and sociology, the work begins by overviewing the challenges and history of categorizing race, along with lessons from other disciplines. The authors conclude with suggestions for how to use race in algorithmic fairness research, including moving beyond group fairness approaches, revisiting how society operationalizes race, understanding how disaggregated analysis relates to the sociotechnical system, understanding the difference between a racism effect and a race effect, and centering the perspectives of marginalized groups.
  • Hazirbas, C., et al. (2021). Towards measuring fairness in AI: The Casual Conversations dataset. IEEE Transactions on Biometrics, Behavior, and Identity Science. https://doi.org/10.1109/TBIOM.2021.3132237
    • This paper introduces the Casual Conversations dataset, which consists of audio and video from over 3,000 participants of diverse ages, genders, and skin tones. The goal of the dataset is to test how well AI models perform across different ages, genders, apparent skin types, and lighting conditions. The paper uses this dataset to test the fairness of the top five models in the DeepFake Detection Challenge (DFDC) and found that all of the models had difficulty correctly detecting fake videos containing people with darker skin tones.
  • Hicks, M. (2017).* Programmed inequality: How Britain discarded women technologists and lost its edge in computing. MIT Press. http://programmedinequality.com/
    • This book describes the history of feminized and gendered labor practices within Britain’s computer industry. Drawing from government files, personal interviews, and archives from the central British computing companies, the author describes how the neglect of the female labor force contributed to the industry’s short run from 1944-1974. The book concludes by describing how gendered discrimination persists in the computing industry, leading to many women’s abandonment of the field, and compares the historic economic conditions in Britain to the current state of the industry in the United States. 
  • Jasanoff, S. (2004). Ordering knowledge, ordering society. In States of knowledge: The co-production of science and social order (pp. 1–45). Routledge.
    • These two chapters discuss co-production: the idea that the ways people represent nature and society are directly linked to how they live. They discuss how science in particular is not a neutral entity, but one embedded with the knowledge and biases of society, and urge more recognition of the link between science and society and of how this link helps to shape our societal and political structures.
  • Kiritchenko, S., & Mohammad, S. M. (2018). Examining gender and race bias in two hundred sentiment analysis systems. arXiv. https://doi.org/10.48550/arXiv.1805.04508
    • This work builds the Equity Evaluation Corpus (EEC), a corpus of over 8,000 English sentences used to evaluate biases towards race and gender. Specifically, the authors focus on sentiment analysis systems, automated algorithms that predict the intensity of emotion in a piece of text. They test over 200 sentiment analysis algorithms to identify whether each produces equally rated intensities for two sentences that differ only in the gender or race of a person mentioned. They found that the majority of submissions consistently displayed gender bias, and even more displayed racial bias. A minimal sketch of this counterfactual testing approach follows this entry.
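    • A minimal illustrative sketch of a counterfactual bias check in the spirit of the EEC; the templates and pairs are invented, and `score_intensity` is a hypothetical stand-in for whatever sentiment system is under test:

      # Template sentences differing only in the person mentioned.
      TEMPLATES = [
          "{person} feels angry.",
          "The conversation with {person} was heartbreaking.",
      ]
      PAIRS = [("she", "he"), ("my sister", "my brother")]

      def audit(score_intensity):
          # Report the score gap for sentence pairs differing only in gender.
          for template in TEMPLATES:
              for a, b in PAIRS:
                  gap = score_intensity(template.format(person=a)) - \
                        score_intensity(template.format(person=b))
                  print(f"{template!r} [{a} vs {b}]: gap = {gap:+.3f}")

      # Example with a trivial stand-in scorer (constant, so all gaps are
      # zero); a real audit would plug in the system being evaluated.
      audit(lambda sentence: 0.5)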
  • Lewis, J. E., et al. (2018). Making kin with the machines. Journal of Design and Science. https://doi.org/10.21428/bfafd97b
    • This article considers artificial intelligence through diverse Indigenous epistemologies, reflecting on traditional ways of knowing and speaking that acknowledge kinship networks connecting humans and nonhuman entities. As the authors state, Indigenous communities have retained language and protocols that enable dialogue with non-human kin (such as AI), encouraging intelligible discourse across different materials. Indigenous development environments (IDEs) are presented as a framework instituting Indigenous cultural values as fundamental aspects of all programming choices, in order to instill greater public accountability into the design of AI systems.
  • Mehrabi, N., et al. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35. https://doi.org/10.1145/3457607
    • This work surveys real-world applications of AI that have shown bias, different sources of bias in AI applications, and fairness definitions that ML researchers have used to measure bias in these algorithms. The authors overview discriminating systems such as COMPAS and job posting advertisement platforms; types of biases including data-to-algorithm, algorithm-to-user, and user-to-data; types and sources of discrimination such as direct and systemic; definitions of fairness such as equalized odds and equal opportunity; and fair machine learning methods such as unbiasing data and fair representation learning. The authors also focus on specific domains and subdomains within AI, outlining existing bias in state-of-the-art models and how researchers have attempted to address these challenges.
  • Noble, S. U. (2018).* Algorithms of oppression: How search engines reinforce racism. NYU Press. https://nyupress.org/9781479837243/algorithms-of-oppression/
    • This book discusses how search engines, such as Google, are embedded with racial and gender bias, challenging the notion that they are neutral algorithms acting outside of influence from their human engineers, and emphasizing the greater social impacts created through their design. Through an analysis of text and media searches, and research on paid advertising, the author argues that the monopoly status of a small group of companies, alongside vested private interests in promoting some sites over others, has led to biased search algorithms that privilege whiteness and exhibit bias against people of color and women.
  • Obermeyer, Z., et al. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. https://doi.org/10.1126/science.aax2342 
    • This paper provides evidence of, and dissects the reasons for, racial bias in a specific algorithm used to determine enrollment in a high-risk care management program. The authors find that the algorithm’s prediction of health care costs preserves the historical inaccessibility of care for Black patients, as at a given risk level, Black patients suffer from significantly more chronic illnesses than white patients. They demonstrate that predicting active chronic conditions, instead of healthcare costs, increases the fraction of Black patients enrolled in the program by almost 13%. A synthetic sketch of this label-choice effect follows this entry.
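    • A minimal synthetic illustration of the label-choice mechanism described above, assuming NumPy; all numbers are invented, not the paper’s data or model. When one group incurs lower costs at the same illness level, ranking by cost under-selects that group relative to ranking by illness itself:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000
      group_b = rng.random(n) < 0.5        # True = group facing access barriers
      chronic = rng.poisson(3.0, n)        # underlying illness burden
      # Cost tracks illness, but access barriers suppress group B's spending.
      cost = chronic * np.where(group_b, 70, 100) + rng.normal(0, 20, n)

      def share_of_group_b_in_top(score, frac=0.03):
          # Fraction of group B among those above the selection cutoff.
          cutoff = np.quantile(score, 1 - frac)
          return group_b[score >= cutoff].mean()

      print(f"top 3% by cost:    {share_of_group_b_in_top(cost):.0%} group B")
      print(f"top 3% by illness: {share_of_group_b_in_top(chronic):.0%} group B")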
  • Oliva, T. D., et al. (2021). Fighting hate speech, silencing drag queens? Artificial intelligence in content moderation and risks to LGBTQ voices online. Sexuality & Culture, 25, 700-732. https://doi.org/10.1007/s12119-020-09790-w 
    • Many internet platforms are developing content moderation algorithms to identify and remove “toxic” content, or hate speech, from the platform. Grounded in queer linguistic studies, the authors examine how a commonly used content moderation algorithm addresses the use of “mock impoliteness” employed by LGBTQ populations. They found that the algorithm considered a number of tweets from prominent drag queens to have higher levels of toxicity than tweets by white nationalists, suggesting that the algorithm is unable to consider social context. They discuss how these algorithms may hinder the freedom of expression for these communities.
  • O’Neil, C. (2016).* Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group. https://doi.org/10.5860/crl.78.3.403
    • This book describes how algorithms, as mathematical models, are responsible for a large number of our daily decisions — from car loans to health insurance to students’ grades. However, these decision processes remain largely opaque and unregulated. In addition, the author argues, prevailing societal faith in the fairness of mathematical systems makes resistance very challenging when errors and discriminatory decision-making occur. The author concludes with a call for greater responsibility with respect to regulation and algorithmic transparency.
  • Keyes, O. (2018). The misgendering machines: Trans/HCI implications of automatic gender recognition. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1–22. https://doi.org/10.1145/3274357
    • This paper focuses on the use of Automatic Gender Recognition (AGR), which uses algorithms to determine an individual’s gender from photos and videos. While this technology is already used in industry and academia, this work argues that AGR consistently operationalizes gender in a trans-exclusive way, which in turn increases the risk of harm to trans people as the technology becomes more widely used. Focusing on research in the field of Human-Computer Interaction, the author found that most research articles use AGR to measure gender as a binary and fail to mention this as a limitation, or to note any other limitations of the AGR models used.
  • Paullada, A., et al. (2020). Data and its (dis)contents: A survey of dataset development and use in machine learning research. NeurIPS 2020 Workshop: ML Retrospectives, Surveys & Meta-Analyses (ML-RSA), Virtual. https://ml-retrospectives.github.io/neurips2020/camera_ready/19.pdf 
    • This workshop paper focuses on the origins of datasets for machine learning, from which algorithms learn but which often have unknown provenance. The authors highlight four concerns related to this issue: (1) that social minorities and peoples from developing countries are not represented in the data; (2) that ML models use “shortcuts” to solve problems without striving for “reasoning capabilities”; (3) that some unnecessary problems are prioritized over others; and (4) that datasets are collected in unethical and dubious ways.
  • Raji, I. D., & Buolamwini, J. (2019). Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 429–435. https://doi.org/10.1145/3306618.3314244
    • This article studies the impact of the seminal Gender Shades work, an algorithmic audit of gender and skin-type bias in commercial facial analysis applications. The authors evaluate the commercial applications of IBM, Microsoft, Megvii, Amazon, and Kairos. Overall, they found that these companies acted in response to the Gender Shades audit, releasing new APIs and improving their metrics to differing degrees. This evaluation suggests that critical studies of algorithms can prompt substantial changes to company policy.
  • Raji, I. D., et al. (2020). Saving face: Investigating the ethical concerns of facial recognition auditing. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. https://doi.org/10.48550/arXiv.2001.00964
    • This paper utilizes a new dataset, CelebSET, a benchmark for Facial Processing Technology (FPT) containing diverse images of celebrities, in order to surface ethical concerns raised by the process of auditing FPTs. These concerns include the privacy implications of overrepresenting minority groups in benchmarking datasets, models performing well on the subgroups covered by a benchmark at the expense of other intersectional groups, and overexposure to benchmark data causing models to overfit to it. The paper uses these concerns to argue that benchmarking datasets alone cannot verify the efficacy of FPT models or prevent the release of unfair ones.
  • Schiller, A., & McMahon, J. (2019). Alexa, alert me when the revolution comes: Gender, affect, and labor in the age of home-based artificial intelligence. New Political Science, 41(2), 173–191. https://doi.org/10.1080/07393148.2019.1595288
    • This article uses Marxist feminism and theories of labor to interrogate gender, race, and affect within domestic artificial intelligence systems, such as Amazon’s Alexa or Google Home Assistant. The authors describe how such devices make reproductive labor in households more visible, while simultaneously obscuring the gendered and racialized dimensions of their designs in order to streamline their effects for capital and heighten the affective dynamics they draw from.
  • Sheng, E., et al. (2021). Societal biases in language generation: Progress and challenges. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). https://doi.org/10.18653/v1/2021.acl-long.330 
    • This paper provides a survey of recent studies of human biases in Natural Language Processing (NLP) systems, specifically those used for Natural Language Generation (NLG). The authors outline multiple possible methods for debiasing NLG models, including debiased datasets, training methods that mitigate bias, and evaluation methods that catch biases, and discuss the current challenges in implementing them. They propose four areas of continued work: the diversification of datasets, understanding the trade-offs involved in mitigating biases, developing models that learn to distinguish between fair and unfair generations, and more research into the potential negative impacts of biased NLG.
  • Stitzlein, S. M. (2004).* Replacing the ‘view from nowhere’: A pragmatist-feminist science classroom. Electronic Journal of Science Education, 9(2).
    • This article takes a critical stance on current pedagogical models of science adhering to traditional, objective and empirical ‘nature-based’ philosophical models. Such frameworks are considered by the author to be problematically masculine, disembodied, and aperspectival. The author adopts a sociological methodology, analyzing teachers’ philosophies of science by studying classroom practices. An alternative pedagogical model based on pragmatic-feminism and intersectionality of a ‘lived world’ is proposed in response to the outdated, traditional ‘view from nowhere.’
  • West, S. M., et al. (2019).* Discriminating systems: Gender, race and power in AI. AI Now Institute. https://ainowinstitute.org/discriminatingsystems.html  
    • This is the first report in the AI Now Institute’s multi-year project examining race, gender, and power in AI. It presents a review of existing literature and current research on the topic of gender, race, and class. The authors focus on examining the scale of AI’s current diversity crisis and possible future strategies to mitigate its effects. The diversity problem within the AI industry and issues of bias in AI systems tend to be treated as separate issues; however, as the authors point out, discrimination in the workforce and in system building are intrinsically linked, and both will need to be addressed to design an effective solution.

Chapter 14. The Future of Work in the Age of AI: Displacement or Risk-Shifting? (Pegah Moradi and Karen Levy)

https://www.doi.org/10.1093/oxfordhb/9780190067397.013.17

  • Abdelrahman, M. (2022). The indefatigable worker: From factory floor to Zoom avatar. Critical Sociology, 48(1), 75-90.
    • The author draws parallels between various attempts since the early 20th century to decrease worker fatigue (mental or physical) and the plight of the modern remote Zoom worker. Specifically, the author highlights that these interventions are difficult to question or resist, since they are often made in response to a crisis (e.g., Covid-19) and are often justified by concern over worker well-being. New technological tools (including AI) used to fight fatigue may increase productivity but prevent workers from questioning the structures, conditions, and stresses that made the new intervention necessary.
  • Acemoglu, D., & Restrepo, P. (2018). The race between man and machine: Implications of technology for growth, factor shares, and employment. American Economic Review, 108(6), 1488–1542.
    • This paper examines concerns that new technologies, such as artificial intelligence (AI), will render labour redundant. The authors propose a framework where, when certain tasks become automated, new, more complicated tasks—in relation to which human labour has a comparative advantage—are introduced. The authors argue that if this comparative advantage is significant and the creation of new tasks continues, employment can remain stable even in the face of rapid automation.
  • Acemoglu, D., & Restrepo, P. (2022). Demographics and automation. The Review of Economic Studies, 89(1), 1-44.
    • The authors study theoretical and empirical evidence of the relationship between worker demographics and job automation. Specifically, they find that middle-aged workers in the US are more likely to have their jobs automated, particularly through the use and development of robots. Accordingly, industries with an ageing workforce (an increasing ratio of older to younger workers) show greater automation across countries. The authors propose that the correlation between ageing and further automation could limit the productivity decline usually associated with ageing, but may result in decreased labor force participation.
  • Anteby, M., & Chan, C. K. (2018). A self-fulfilling cycle of coercive surveillance: Workers’ invisibility practices and managerial justification. Organization Science, 29(2), 247–263.
    • This paper outlines an endogenous explanation for the growth of surveillance in the workplace. The authors argue that increasing surveillance in the workplace leads to attempts by employees to go unseen and remain unseen. Management, in turn, interprets these attempts as justification for more surveillance, thus creating a self-perpetuating cycle.
  • Autor, D. H., et al. (2003).* The skill content of recent technological change: An empirical exploration. The Quarterly Journal of Economics, 118(4), 1279-1333. https://doi.org/10.1162/003355303322552801
    • This article argues that computers can substitute for workers in performing cognitive and manual tasks that can be accomplished by following explicit rules and complement workers in performing non-routine problem solving and complex communications tasks. It demonstrates that the falling price of computer capital in recent decades has been the causal force increasing the demand for workers who can perform non-routine tasks (i.e. college-educated). 
  • Ball, K. (2010). Workplace surveillance: An overview. Labor History, 51(1), 87-106. https://doi.org/10.1080/00236561003654776
    • This article reviews research findings about surveillance in the workplace and the issues surrounding it. It establishes that organizations and surveillance go hand in hand, and that workplace surveillance can take social and technological forms. Further, it identifies that workplace surveillance has consequences for employees, affecting employee well-being, work culture, productivity, creativity and motivation. It also highlights that employees are using information technologies to expose unsavory practices by employers and organizing collectively.
  • Braverman, H. (1998).* Labor and monopoly capital: The degradation of work in the twentieth century. NYU Press.
    • This book is an analysis of the science of managerial control, the relationship of technological innovation to social class, and the eradication of skill from work under capitalism. The book started what came to be known as the “labor process debate,” which focuses closely on the nature of “skill” and the decline in the use of skilled labor as a result of managers’ strategy for control.
  • Brynjolfsson, E., et al. (2018).* What can machines learn, and what does it mean for occupations and the economy? In AEA Papers and Proceedings, 108, 43-47.
    • This paper aims to answer the question of which occupational tasks will be most affected by machine learning (ML). Using a rubric evaluating tasks’ suitability for ML and applying it to over 18,000 tasks, the paper finds that ML affects different occupations than previous waves of automation did, that most occupations have at least some tasks suitable for ML, that few occupations are fully automatable using ML, and that realizing the potential of ML usually requires a redesign of job task content.
  • Autor, D. H. (2015).* Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3-30.
    • This article argues that while automation can substitute human labor, it also complements it, increasing productivity and labor demand overall. Changes in technology may alter which jobs are available, and what those jobs pay. The author concludes that automation should be thought of as replacing workers in performing routine, codifiable tasks while amplifying the advantage of workers in supplying problem-solving skills, adaptability, and creativity.
  • Dickens, W. T., et al. (1989). Employee crime and the monitoring puzzle. Journal of Labor Economics, 7(3), 331-347. https://doi.org/10.1086/298211
    • This paper investigates reasons why firms spend considerable resources trying to monitor for employee malfeasance, despite most economic theories of crime predicting that profit-maximizing firms should follow a strategy of minimal monitoring with large penalties for employee crime. It finds that the most plausible explanations for spending and focusing on employee surveillance are legal restrictions on penalties in contracts and the adverse impact of harsh punishment schemes on worker morale.
  • Doleac, J. L., & Hansen, B. (2016). Does “ban the box” help or hurt low-skilled workers? Statistical discrimination and employment outcomes when criminal histories are hidden (No. w22469). National Bureau of Economic Research.
    • New ‘ban the box’ (BTB) policies prevent employers from conducting criminal background checks until late in the job application process to improve employment outcomes for those with criminal records and reduce racial disparities in employment. This paper tests BTB’s effects and finds that BTB policies actually decrease the probability of being employed by 5.1% for young, low-skilled Black men, and by 2.9% for young, low-skilled Hispanic men. The paper argues that when an applicant’s criminal history is unavailable, employers still discriminate against demographic groups that they believe are likely to have a criminal record.
  • Ekbia, H., & Nardi, B. (2014). Heteromation and its (dis)contents: The invisible division of labor between humans and machines. First Monday, 19(6). https://doi.org/10.5210/fm.v19i6.5331
    • This paper speaks to shifting conceptions and implementations of labour in human-computer assemblages. The authors argue that there has been a departure from technologies of automation (which preclude human intervention) to technologies of heteromation (which permit humans to mediate critical tasks). Concepts of labour replacement have instead given way to new cybernetic arrangements under heteromation, which the paper demonstrates through both historical review and modern case studies.
  • Fantini, P., et al. (2020). Placing the operator at the centre of Industry 4.0 design: Modelling and assessing human activities within cyber-physical systems. Computers & Industrial Engineering, 139, 105058. https://doi.org/10.1016/j.cie.2018.01.025
    • This paper argues that a challenge of the so-called “Industry 4.0” will be guiding work towards increased responsibility and decision-making for employees as opposed to increased technological control. The authors then propose a methodology to address this challenge by considering both the uniqueness of human labor and the characteristics of “cyber-physical production.” 
  • Frank, M. R., et al. (2019). Toward understanding the impact of artificial intelligence on labor. Proceedings of the National Academy of Sciences, 116(14), 6531–6539.
    • This paper highlights the existence of barriers which currently inhibit scientists from measuring the effect of artificial intelligence (AI) and automation on the future of work. These barriers include a lack of access to high-quality data and empirically informed models about the nature of work, and an insufficient understanding of how cognitive technologies interact with broader economic dynamics and institutional mechanisms. The paper concludes by arguing for the development of a decision framework for the future of work that is focused on resilience to unexpected scenarios.
  • Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254-280. https://doi.org/10.1016/j.techfore.2016.08.019
    • In this paper, the authors calculate probabilities of computerization for 702 occupations, using data about the task content of those jobs from the Department of Labor and having artificial intelligence experts code tasks for automation potential. The study estimates that 47% of US jobs are at high risk of automation within approximately twenty years. The article shows that wages and educational attainment exhibit a strong negative relationship with an occupation’s automation potential; a minimal sketch of this classify-then-extrapolate approach follows this entry.
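    • A minimal illustrative sketch of the classify-then-extrapolate approach, assuming NumPy and scikit-learn; the features, labels, and sizes below are invented stand-ins (the study itself uses nine O*NET task variables and 70 expert-labelled occupations):

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessClassifier

      rng = np.random.default_rng(1)

      # Hypothetical task-content features for 70 expert-labelled
      # occupations (e.g., dexterity, social perceptiveness, originality).
      X_labelled = rng.random((70, 3))
      # Hypothetical expert labels: 1 = automatable, 0 = not automatable.
      y_labelled = (X_labelled @ np.array([1.0, -1.5, -1.2]) + 0.8 > 0).astype(int)

      # Fit a probabilistic classifier on the labelled occupations.
      clf = GaussianProcessClassifier().fit(X_labelled, y_labelled)

      # Extrapolate computerization probabilities to unlabelled occupations.
      X_rest = rng.random((5, 3))
      for p in clf.predict_proba(X_rest)[:, 1]:
          print(f"P(computerizable) = {p:.2f}")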
  • Granulo, A., et al. (2019). Psychological reactions to human versus robotic job replacement. Nature Human Behaviour, 3(10), 1062–1069.
    • This paper explores people’s psychological reactions to the technological replacement of human labor. The authors find that while people prefer that human workers be replaced by other human workers, this preference reverses when they consider the prospect of their own job loss. In light of these findings, the authors posit that the unique psychological consequences of the technological replacement of human labor should be taken into account by policy measures.
  • Gray, M. L., & Suri, S. (2019).* Ghost work: How to stop Silicon Valley from building a new global underclass. Eamon Dolan Books.
    • This book discusses the concept of “ghost work,” which refers to work done behind the scenes by an invisible human labor force. This labor force gives the internet, and the services built on it by big tech companies, the appearance of smooth and “intelligent” functioning, through tasks such as flagging inappropriate content, proofreading, etc. The book explores problematic aspects of this growing sector, including the lack of labor laws, precarity, lack of benefits, illegally low earnings, and more.
  • Huang, M. H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155–172. https://doi.org/10.1177/1094670517752459
    • This paper develops a theory of job replacement by artificial intelligence (AI) that specifies four intelligences: mechanical, analytical, intuitive, and empathetic. The authors contend that AI is developing in a predictable order: mechanical preceding analytical, analytical preceding intuitive, and intuitive preceding empathetic. Based on this ordering, the authors argue that “softer” (i.e., more intuitive and empathetic) skills will become more important as AI continues to take over more analytic tasks.
  • Irani, L. (2015). The cultural work of microwork. New Media & Society, 17(5), 720-739. https://doi.org/10.1177/1461444813511926
    • Using Amazon Mechanical Turk (AMT) as a case study, the author examines how divisions of labor and software interfaces shape crowdsourcing systems. The paper argues that micro-labor platforms such as AMT engage in intensive cultural mediation, producing the differential categories of innovative labor and menial labor to service specific stakeholders. Throughout, algorithms, labor practices, and methods of worker control and cultivation are scrutinized.
  • Kellogg, K. C., et al. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366–410.
    • This paper explores how the widespread implementation of algorithmic technologies is reshaping organizational control. The authors argue that algorithmic control in the workplace operates through six main mechanisms, grouped under three functions: direction (restricting and recommending), evaluation (recording and rating), and discipline (replacing and rewarding). Finally, the paper comments on a set of emerging tactics the authors call “algoactivism,” described as workers’ resistance to algorithmic control.
  • Kelley, M. R. (1990). New process technology, job design, and work organization: A contingency model. American Sociological Review, 55(2), 191-208. https://doi.org/10.2307/2095626
    • This paper aims to identify the conditions under which occupational skill upgrading occurs with technological change to answer the question of how workplaces that permit blue-collar occupations to take on higher skill responsibilities differ from those that do not. Data analyzed from a national survey of production managers in 21 industries reveals that the least complex organizations (small plant, small firm) tend to offer the greatest opportunities for skill upgrading, independent of techno-economic conditions. 
  • Levy, F. (2018). Computers and populism: Artificial intelligence, jobs, and politics in the near term. Oxford Review of Economic Policy, 34(3), 393-417. https://doi.org/10.1093/oxrep/gry004
    • This paper examines the near-term future of work to ask whether job losses induced by artificial intelligence will increase the appeal of populist politics. The paper explains that computers and machine learning often automate the workplace tasks of blue-collar workers. Using the example of automation-related job losses in three industries (trucking, customer service, and manufacturing), the paper examines how candidates may pit ‘the people’ (truck drivers, call center operators, factory operatives) against ‘the elite’ (software developers, etc.), replicating the populist politics of the 2016 US presidential election.
  • Levy, K., & Barocas, S. (2018).* Refractive surveillance: Monitoring customers to manage workers. International Journal of Communication, 12, 1166-1188.
    • This article discusses ‘refractive surveillance’, which is when information collected about one group can facilitate control over an entirely different group. The authors explore this dynamic in the context of retail stores, in which collecting data about customers allows for new forms of managerial control over workers. Mechanisms enabling this are dynamic labor scheduling, new forms of evaluation, externalization of worker knowledge, and replacement through customer self-service. 
  • Mateescu, A., & Elish, M.C. (2019). AI in context: The labor of integrating new technologies. Data & Society. https://datasociety.net/library/ai-in-context/
    • In this report, the authors demonstrate how the introduction of automation, Big Data, and artificial intelligence are reconfiguring workplaces. The first half of the report focuses on the integration of crop management tools and other data-intensive agritech platforms at family-owned farms; the second half on semi-autonomous self-checkout kiosks in grocery stores across North America. Throughout, emphasis is placed on the human infrastructures necessary to integrate and troubleshoot these ostensibly autonomous technologies.
  • Moniz, A. B., & Krings, B. J. (2016). Robots working with humans or humans working with robots? Searching for social dimensions in new human-robot interaction in industry. Societies, 6(3), 23. https://doi.org/10.3390/soc6030023
    • This article considers the social dimension of human-machine interaction (HMI), specifically in the manufacturing industry’s robotic systems. In particular, the article asserts that “intuitive” HMI should be considered a significant object of technical progress. The authors argue for increased attention towards the social—in addition to the technical—considerations of HMI, including examining the degree of trust that humans have in robots, and whether robots improve working conditions while increasing productivity.
  • Moradi, P. (2019). Race, ethnicity, and the future of work [Doctoral dissertation, Cornell University]. https://files.osf.io/v1/resources/e37cu/providers/osfstorage/5ca258dcecd788001998c0ac?action=download&version=2&direct&format=pdf
    • This study analyzes how occupational automation corresponds with racial and ethnic demographics. The paper finds that throughout American industrialization, non-White and immigrant workers shifted to low-wage, unskilled work because of the political and social limitations imposed upon these groups. While White workers are more heavily affected by automatability than other racial groups, the proportion of White workers in an occupation is negatively correlated with an occupation’s automatability. The paper offers a susceptibility-based approach to predicting employment outcomes from AI-driven automation.
  • Polanyi, M. (2009).* The tacit dimension. University of Chicago Press.
    • This book argues that tacit knowledge—tradition, inherited practices, implied values, and prejudgments—is a crucial part of scientific knowledge. This book challenges the assumption that skepticism, rather than established belief, lies at the core of scientific discovery. It concludes that all knowledge is personal, with the indispensable participation of the thinking being, and that even the so-called explicit knowing (or formal, or specifiable knowledge) is always based on personal mechanisms of tacit knowing.
  • Precarity Lab. (2020). Technoprecarious. The MIT Press.
    • This anthology serves as the culminating work of the Precarity Lab, an interdisciplinary network of scholars and activists at the University of Michigan, Ann Arbor. The essays within this anthology employ the term “precarity” to characterize how populations across disparate cultural and geographic sites have been disproportionately affected by newfound forms of inequality, insecurity, and wealth centralization in the digital age. Case studies employ critical theory and postcolonial studies to analyze micro-labor platforms, manufacturing contexts, and making practices.
  • Rogers, B. (2020).* The law & political economy of workplace technological change. Harvard Civil Rights-Civil Liberties Law Review, 55. http://dx.doi.org/10.2139/ssrn.3327608
    • This paper makes the case that automation is not a major threat to most jobs today, nor will it be in the near future. However, it points out that existing labor laws allow companies to leverage new technology to control workers, such as through enhanced monitoring. It argues that policymakers must expand the scope and stringency of companies’ duties toward their workers, or rewrite policies in ways that enable workers to push back against the introduction of new workplace technologies.
  • Rosenblat, A., et al. (2017). Discriminating tastes: Uber’s customer ratings as vehicles for workplace discrimination. Policy & Internet, 9(3), 256-279. https://doi.org/10.1002/poi3.153
    • This paper analyzes the Uber platform as a case study to explore how bias may creep into evaluations of drivers through consumer-sourced rating systems and draws on social science research to demonstrate how such bias emerges in other types of rating and evaluation systems. The paper argues that while companies are legally prohibited from making employment decisions based on certain characteristics of workers (e.g. race), their reliance on potentially biased consumer ratings to make material determinations may nonetheless lead to a disparate impact in employment outcomes. 
  • Schneider, D., & Harknett, K. (2016). Schedule instability and unpredictability and worker and family health and wellbeing. Washington Center for Equitable Growth Working Paper Series. http://cdn.equitablegrowth.org/wp-content/uploads/2016/09/12135618/091216-WP-Schedule-instability-and-unpredictability.pdf
    • This paper describes an innovative approach to survey data collection from service sector workers that allows for the collection of previously unavailable data on scheduling practices, health, and wellbeing. The authors then use these data to show that exposure to unstable and unpredictable scheduling practices is negatively associated with household financial security, worker health, and parenting practices.
  • Thomas, R. J. (1994). What machines can’t do: Politics and technology in the industrial enterprise. University of California Press.
    • Drawing on over 300 interviews conducted inside four successful manufacturing enterprises, with informants ranging from top corporate executives to engineers, workers, and union representatives, this book explores the social and political dynamics that are an integral part of production technology. The author urges managers not to place blind hope in smarter machines but to find smarter ways to organize people, arguing against the popular idea that smart machines alone will lead to advancement.
  • Tippett, E., et al. (2017). When timekeeping software undermines compliance. Yale Journal of Law and Technology, 19(1), 1-76. 
    • This article examines 13 commonly used electronic timekeeping programs to expose the ways in which such software can erode wage law compliance. Drawing on insights from the field of behavioral compliance, the authors explain how the software presents subtle cues that can encourage and legitimize wage theft by employers. The article also examines gaps in legislation that have created a regulatory vacuum in which timekeeping software has developed, and proposes reforms to encourage wage law compliance across workplaces.
  • Van Oort, M. (2019). Employing the carceral imaginary: An ethnography of worker surveillance in the retail industry. In Benjamin, R. (Ed.), Captivating technology: Race, carceral technoscience, and liberatory imagination in everyday life (pp. 209-226). Duke University Press.
    • In this chapter of Ruha Benjamin’s 2019 anthology, the author speaks to the growing omniscience of modern workplace surveillance. Beginning with a critical history of employee monitoring, the author segues into an ethnographic account of the retail workspace and of attendance at NRF Protect, an industry conference on loss prevention, digital fraud, and cybersecurity. From here, the author emphasizes that workplace surveillance is neither uni-directional nor exclusively tethered to professional labor, often serving to reiterate the systemic capture of marginalized individuals.
  • Vimalkumar, M., et al. (2021). Understanding the effect that task complexity has on automation potential and opacity: Implications for algorithmic fairness. AIS Transactions on Human-Computer Interaction, 13(1), 104-129.
    • Automation is often discussed in terms of balancing possible economic gains against ethical concerns over worker displacement. This paper raises a different concern: the fairness of the algorithms used to automate different kinds of tasks. Since different tasks are automated with different algorithms, this analysis requires a more fine-grained categorization of tasks and algorithms, which the paper provides. Specifically, the authors offer a typology of tasks based on complexity and analyze the relationship between task complexity, automation potential, and algorithmic opacity.

Chapter 15. AI as a Moral Right-Holder (John Basl and Joseph Bowen)

https://www.doi.org/10.1093/oxfordhb/9780190067397.013.18

  • Andreotta, A. J. (2021). The hard problem of AI rights. AI & Society, 36(1), 19–32. https://doi.org/10.1007/s00146-020-00997-x
    • This paper takes up the “hard problem” (or, alternatively, the hard question) of consciousness: why do certain brain states give rise to experience? The author considers three grounds (superintelligence, empathy, and a capacity for consciousness) on which claims in favor of AI rights have been advanced. Arguing that consciousness should be the central focus, the author draws a distinction between consciousness in animal rights cases and in AI rights cases and emphasizes that one cannot be conclusively categorized in terms of the other. The author suggests that if humans do not come to understand how consciousness arises, they may inadvertently create conscious creatures and cause them to suffer without realizing it.
  • Baertschi, B. (2012). The moral status of artificial life. Environmental Values, 21(1), 5–18. http://www.jstor.org/stable/23240349
    • This paper asserts that an entity’s status as “natural” or “artificial” in the genetic sense does not have an impact on its moral status. The author states that if two living beings with moral status are similar, but have been produced differently, their moral status is identical—except if the way they have been produced changes their intrinsic properties. The author discusses reasons for the confusion over “natural” and “artificial” as an ontological distinction (with moral consequences), arguing that it is more effective to understand the contrast as a moral distinction with no ontological consequences.
  • Basl, J. (2013). The ethics of creating artificial consciousness. APA Newsletter on Philosophy and Computers, 13(1), 23–29. https://philarchive.org/archive/BASTEO-11
    • The article argues that research aiming to create artificial entities with conscious states might be unethical because it wrongs, or will likely wrong, its subjects. The author argues that if the subjects of artificial consciousness research end up possessing conscious states, then they are research subjects in the way that sentient non-human animals and human beings are research subjects. As a result, such artificially conscious research subjects should be afforded protections against damages.
  • Basl, J. (2014). Machines as moral patients we shouldn’t care about (yet): The interests and welfare of current machines. Philosophy & Technology, 27(1), 79–96. https://doi.org/10.1007/s13347-013-0122-y
    • Situating a discussion of moral status within Interest Theory, this paper considers the potential future moral patiency status of artificial consciousnesses. Distinguishing systems exhibiting teleological interests and goal pursuits (such as biological and environmental systems) from those exhibiting the psychological interests associated with moral patiency, the author asserts that machines are not yet moral patients. By means of a brief survey of both epistemic and moral questions that researchers currently encounter, the author says that if artificial consciousnesses come to exist that have the capacity for attitudes commensurate with psychological interests, these artificial consciousnesses could have psychological interests that ground their status as moral patients.
  • Basl, J. (2014). What to do about artificial consciousness. In R. L. Sandler (Ed.), Ethics and emerging technologies (pp. 380–392). Palgrave Macmillan.
    • This chapter defends an account of moral status according to which the moral status of an entity is determined by its capacities. For example, if an intelligent machine possesses cognitive and psychological capacities akin to those of humans, such entities should be accorded comparable moral status. Nevertheless, the author argues that it is unlikely that machines will possess cognitive and psychological capacities akin to those of humans. Even if they do, the author asserts that it will be difficult for humans to discern whether such capacities and interests are present in a non-human entity.
  • Basl, J. (2019).* The death of the ethic of life. Oxford University Press.
    • The ethic of life states that all living things deserve some degree of moral concern, i.e., that moral status does not depend upon evidence of sentience. However, if the well-being of non-sentient beings is morally significant insofar (and perhaps only insofar) as it matters to sentient beings, then the ethic of life fails to capture how the moral significance of artifacts differs from that of organisms.
  • Basl, J., & Sandler, R. (2013). The good of non-sentient entities: Organisms, artifacts, and synthetic biology. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 44(4), 697-705.  https://doi.org/10.1016/j.shpsc.2013.05.017
    • This paper examines whether or not synthetic organisms have a good of their own and, consequently, are themselves deserving of moral consideration. Appealing to an account of teleology that explains the good of non-sentient organisms, the authors argue that synthetic organisms also have a good of their own that is grounded in their teleological organization. Such a rationale, however, introduces the consequence of traditional artifacts arguably also having a good of their own.
  • Coeckelbergh, M. (2010). Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology, 12(3), 209–221. https://doi.org/10.1007/s10676-010-9235-5
    • The paper examines the need for an alternative approach to moral consideration that can shape relations between humans and intelligent robots. The author explores a number of conceptual avenues that could recognize and even respect a setting of systemhood as well as subjecthood. Their social-relational approach rejects the idea of fixed criteria for moral status; this paper also rejects the idea that a robot or other artificial entity must carry a permanent sort of “moral backpack” to be deserving of recognition. Rather, the entity’s dynamic and evolving relations might instead be assessed in a temporal and situational context. Further, a combined approach that draws from settings of both systemhood and subjecthood could be engaged.
  • Coman, A., & Aha, D.W. (2018). AI rebel agents. AI Magazine, 39(3), 16–26. https://doi.org/10.1609/aimag.v39i3.2762
    • Asserting that the capacity to say “no” to a request is an essential part of being sociocognitively human, the authors argue that it is beneficial for certain AI agents to rebel for positive, defensible, and allegedly “moral” reasons. Suggesting that AI may never become socially intelligent absent such contextual noncompliance, the authors present a phased framework that situates the “rebel agent” terminologically, narratively, and systematically, enabling an examination of positive and negative roles that the noncompliant agent could assume.
  • Cruft, R. (2013).* XI—Why is it disrespectful to violate rights? Proceedings of the Aristotelian Society, 113(2), 201–224. https://doi.org/10.1111/j.1467-9264.2013.00352.x
    • Directed duties are duties that are owed to a particular person or group. The author considers the manner in which directed duties are related to respect, and works to make sense of the fact that directed duties are often justified independently of whether or not they do anything for those to whom the duties are owed.
  • Danaher, J. (2020). Welcoming robots into the moral circle: A defence of ethical behaviourism. Science and Engineering Ethics, 26(4), 2023–2049.  https://doi.org/10.1007/s11948-019-00119-x
    • This paper proposes a theory of ethical behaviorism, according to which robots can possess significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. The author argues that this performative threshold may not exceed the reach of robots and, if robots have not done so already, they may cross the threshold in the future. The author proposes a principle of procreative beneficence that governs the decision to create robots that possess moral status.
  • De Rouck, F. (2019). Moral rights & AI environments: The unique bond between intelligent agents and their creations. Journal of Intellectual Property Law & Practice, 14(4), 299–304. https://doi.org/10.1093/jiplp/jpz010
    • The author considers approaches to regulating the protection and ownership of new types of work involving AI systems. They use the example of RADAR (an AI system yielding “fact-based insights into local communities using Natural Language Generation software”) to demonstrate the value of AI-assisted work and the importance of the bond between intelligent agents and their outputs (in addition to the bond with human contributors). The author suggests this is a necessary measure to “seize data as the key economic asset of the future”.
  • Formosa, P., & Ryan, M. (2021). Making moral machines: Why we need artificial moral agents. AI & Society, 36(3), 839–851. https://doi.org/10.1007/s00146-020-01089-6
    • In response to arguments against the development of Artificial Moral Agents (AMAs), the authors propose that there remain strong reasons to continue their responsible development. Their three contributions are: to offer the first comprehensive response to the arguments against AMAs by Wynsberghe and Robbins (in “Critiquing the Reasons for Making Artificial Moral Agents”, Science and Engineering Ethics 25, 2019); to “collate and thematise for the first time” the key arguments supporting and opposing AMAs in one paper; and to begin a nuanced discussion of the contexts in which different types of AMAs are appropriate.
  • Gilbert, M., & Martin, D. (2022). In search of the moral status of AI: Why sentience is a strong argument. AI & Society, 37, 319–330. https://doi.org/10.1007/s00146-021-01179-z
    • This paper considers different arguments for granting moral status to an artificial intelligence (AI) system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. Leaving the idea of indirect duties aside, since such duties do not imply considering an AI system for its own sake, the authors reject both the relational argument and the argument from intelligence. Acknowledging that the argument from life may work in a weak sense, the authors point to sentience as a stronger argument for grounding the moral status of an AI system. This determination draws upon the Aristotelian principle of equality, which states that what is identical should be treated identically. However, this claim of sameness relies upon technological development that has not yet been realized.
  • Goodwin, G. P. (2015). Experimental approaches to moral standing. Philosophy Compass, 10, 914–926. https://doi.org/10.1111/phc3.12266
    • This paper argues that understanding the factors which underlie moral status attribution is important, as they indicate how broadly (or narrowly) individuals conceptualize the moral world and how various entities, both human and non-human, should be treated. The author examines a series of studies conducted by both psychologists and philosophers that have revealed three main drivers of moral standing: the capacity to suffer (psychological patiency), intelligence or autonomy (agency), and the nature of an entity’s disposition (whether it is harmful). These studies have also revealed causal links between moral standing and other variables of interest, namely mental state attributions and moral behavior.
  • Gordon, J.-S. (2021). Artificial moral and legal personhood. AI & Society, 36, 457–471. https://doi.org/10.1007/s00146-020-01063-2
    • This paper responds to the European Parliament’s resolution on Civil Law Rules on Robotics (2017) and its recommendation that robots be granted legal status and electronic personhood. The author argues that moral and legal personhood should not be granted to currently existing robots, given their technological limitations and their failure to meet the morally relevant criteria (rationality, autonomy, understanding, and having social relations) necessary to have moral rights bestowed upon them. The author examines two analogies that have been proposed: the first between robots and corporations (which are treated as legal persons), and the second between robots and animals. The author states that one should consider attributing moral personhood to robots only once robots have achieved capacities comparable to those of humans.
  • Gordon, J.-S. (2020). What do we owe to intelligent robots? AI & Society, 35, 209–223. https://doi.org/10.1007/s00146-018-0844-6
    • This paper focuses upon whether highly advanced artificially intelligent entities will deserve moral rights once they become capable of moral reasoning and decision-making. The author argues that humans are obligated to grant moral rights to such entities once they have become full ethical agents, i.e., subjects of morality. The author presents four related arguments in support of this claim, and thereafter examines four main objections to this claim. The author further states that given their ever-increasing involvement in many sensitive fields and their increasing social interaction with humans, it is important that “intelligent robots” learn how to make moral decisions and act according to these decisions.
  • Griffin, J. (1986).* Well-being: Its meaning, measurement and moral importance. Clarendon Press.
    • Enumerating an overlapping set of prudential values that combine to produce a sort of well-being that constitutes human flourishing, this book approaches well-being in terms of action towards fulfillment of informed desires. Loosening the delineations easily afforded by dichotomies of “objective” and “subjective,” the author takes a pluralistic view of notions of utility so as to resist merely psychological renderings of the basis of a metric by which to measure well-being.
  • Gunkel, D. J. (2014). A vindication of the rights of machines. Philosophy & Technology, 27(1), 113–132. https://doi.org/10.1007/s13347-013-0121-z
    • This paper asserts that questions concerning the “rights of machines” make a general and fundamental claim on ethics, requiring ethics practitioners to rethink the system of moral considerability all the way down. Addressing the insufficiency of exact and exclusive lists of minimal conditions necessary for the status of moral agency or moral patiency, the author contrasts such lists with Floridi’s information ethics and Levinas’ ethical encounter with the face of the Other. These two alternative lenses do not themselves resolve the issue, but rather emphasize even further how the question of moral standing must be thoroughly reevaluated in the face of the intelligent machine.
  • Gunkel, D. J. (2018). Robot rights. MIT Press.
    • Engaging the still unresolved proposition of whether robots should have rights, the author draws from the philosophy of Levinas to situate human-robot moral encounters in terms of the command of the face of the Other, as that which supervenes upon human selfhood and instantiates unavoidable responsibility. Relationally presented, such an ethical system is contrasted with, e.g., deontological, a priori, and rule-based approaches to an ethics of AI; it is important to acknowledge, however, that all approaches mentioned still involve anthropocentric assumptions.
  • Gunkel, D. J. (2018). The other question: Can and should robots have rights? Ethics and Information Technology, 20(2), 87–99. https://doi.org/10.1007/s10676-017-9442-4
    • This paper engages with the question of whether robots should have rights. In doing so, it examines how the terms “can” and “should” figure in discussions surrounding the is-ought problem. The author turns their attention to the work of Emmanuel Levinas to reformulate the manner in which one asks about moral patiency in the first place. They discuss the view that moral consideration is conferred in the face of actual social relationships and interactions, rather than pre-determined ontological criteria or capability.
  • Johnson, D. G., & Verdicchio, M. (2018). Why robots should not be treated like animals. Ethics and Information Technology, 20(4), 291–301. https://doi.org/10.1007/s10676-018-9481-5
    • This paper contends that analogies between humanoid robots and animals do not provide a useful method of understanding the nature of robots; responsible discourse concerning the nature of robots should therefore be cautious in its appeal to analogies with animals. The authors discuss how such analogical framing can mislead efforts to understand the moral status of humanoid robots and notions of potential legal liability associated with them.
  • Kramer, M. H. (2001).* Getting rights right. In M. H. Kramer (Ed.), Rights, wrongs and responsibilities (pp. 28–95). Palgrave Macmillan.
    • This essay aims to clarify and develop the basic claims of the Interest Theory and the Will Theory, placing preference upon the former. The Interest Theory holds that the essence of a right consists in the normative protection of some aspect(s) of the right-holder’s well-being. In contrast, the Will Theory claims that the essence of a right consists in the right holder’s opportunities to make normatively significant choices relating to the behavior of others.
  • Lima, G., et al. (2021). On the social-relational moral standing of AI: An empirical study using AI-generated art. Frontiers in Robotics and AI, 8, 719944. https://doi.org/10.3389/frobt.2021.719944
    • The authors respond to Gunkel and Coeckelbergh’s proposals to ground machines’ moral status with online experiments testing whether interaction with AI-generated art affects the perceived moral standing of its creator. The results indicate that overvaluation of AI-generated images could negatively affect the AI creator’s perceived agency, especially given their societal status as competitive agents without “minds”. This is an important contribution to the underdeveloped body of literature dissecting public perception of AI creators’ moral standing in society.
  • McGinn, C. (1999).* The mysterious flame. Basic Books.
    • Confronting the limits of both materialist and dualist versions of the mind-brain problem, this book argues that a radically different approach would be required to understand the nature of and rationale for consciousness, and more intriguingly of self-consciousness. Asserting that the mind-brain question cannot be answered by humans due to the way that human minds are constructed, the author considers how one might arrive at this view, approaching, e.g., how reconceiving of the human understanding of space (via, theoretically, modes of genetic engineering) might enable aspects of such an understanding.
  • Miernicki, M., & Ng, I. (2021). Artificial intelligence and moral rights. AI & Society, 36(1), 319–329. https://doi.org/10.1007/s00146-020-01027-6
    • The authors call attention to the question of whether an artificial intelligence itself, or instead the creator or even the users of such technology, would be entitled to the moral rights linked to the products of an AI system. They foresee that once AIs develop personalities, we will enter a post-human era in which AI will be equal to humans and recognized with the same rights.
  • Miller, L. F. (2015). Granting automata human rights: Challenge to a basis of full-rights privilege. Human Rights Review, 16(4), 369–391. https://doi.org/10.1007/s12142-015-0387-x
    • This paper examines whether or not human beings are morally required to extend full human rights to human-like automata. In examining this issue, the paper reflects on the ontological difference between human beings and automata, namely, that automata have a constructor and a given purpose. The author argues that human beings need not be under any moral obligation to confer full human rights to automata.
  • Moon, Aj., et al. (2021). Ethics of corporeal, co-present robots as agents of influence: A review. Current Robotics Reports, 2(2), 223–229. https://doi.org/10.1007/s43154-021-00053-6
    • The authors consider the unique ethical challenges of interactive humanoids and how to mitigate them in their design stages. The authors emphasize differentiating robotics and AI ethics issues and call for greater attention due to the rising interest in human-robot interaction in recent years.
  • Mosakas, K. (2021). On the moral status of social robots: Considering the consciousness criterion. AI & Society, 36, 429–443. https://doi.org/10.1007/s00146-020-01002-1
    • This paper outlines the consciousness criterion for moral status. It considers three prominent approaches to moral consideration that have been used to justify the claim that direct moral duties are owed to social robots. The author concludes that none of these approaches surpass a standard properties-based view that presupposes the consciousness criterion. The author argues that social robots should not be regarded as proper objects of moral concern unless, and until, they become capable of having conscious experience. While this does not entail that they should be excluded from human moral reasoning and decision-making altogether, it does suggest the implausibility of the assumption that humans owe direct moral duties to entities like social robots.
  • Neely, E. L. (2014). Machines and the moral community. Philosophy & Technology, 27(1), 97–111. https://doi.org/10.1007/s13347-013-0114-y
    • Noting that the sentience criterion for moral standing is insufficient to cover all humans, the author argues that the criterion is likewise insufficient as a rationale for denying moral standing to a non-human entity. Stating that there are several ways that an entity may have interests, this paper presents an interest-based account for determining an entity’s moral status. If an entity has interests, and may thereby be harmed or benefited, the author urges moral generosity when considering the moral claims of machines and the recognition of the moral claims of those who (or those that) are physically unlike humans.
  • Nyholm, S. (2020). Humans and robots: Ethics, agency, and anthropomorphism. Rowman & Littlefield Publishers.
    • The author examines emerging ethical issues concerning human beings, robots, and agency. In the discussion of robot rights, the author argues that it can sometimes make sense to treat robots with some degree of moral consideration; for instance, in cases where robots look and act like human or non-human animals. Nevertheless, robots are not themselves deserving of direct duties until they develop a human- or animal-like inner life.
  • Raz, J. (1986).* The morality of freedom. Clarendon Press.
    • Discussing the nature of freedom and authority, this book argues that a concern with autonomy underlies the value of freedom, and the rights and choices that freedom allows to be realized actively. Autonomy becomes actively realized only if the subject is situated so as to have an array of valid and available options from which to choose. Thus, against conventionally liberal positions, the author argues that political and societal morality is neither rights‐based, nor equality‐based, but is instead driven by the interaction between structures and social forms of authority, and the requirements of individual autonomy.
  • Scheessele, M. (2018). A framework for grounding the moral status of intelligent machines. In J. Furman, G. Marchant, H. Price, & F. Rossi (Eds.), AIES ’18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 251-256). Association for Computing Machinery. https://doi.org/10.1145/3278721.3278743
    • This paper proposes that the moral status of current and foreseeable intelligent machines might draw from the status accorded to environmental entities (such as plants and trees) that are likewise teleologically-directed. This paper’s analysis grounds its propositions upon a network or system’s possession of a functional (as opposed to actual) morality or moral agency. The author asserts a hierarchy in which the limits of obligations to intelligent machines, thus categorized, would fall short of human obligations to entities that are recognized as sentient.
  • Schwitzgebel, E., & Garza, M. (2015). A defense of the rights of artificial intelligences. Midwest Studies in Philosophy, 39(1), 98–119. https://doi.org/10.1111/misp.12032
    • This paper provides a positive argument for the rights of artificially intelligent entities. The authors offer two principles of ethical AI design; namely, (1) design AIs that tend to provoke reactions from users that accurately reflect the AIs’ real moral status, and (2) avoid designing AIs whose moral status is unclear. The paper also argues that human beings would probably owe more moral consideration to human-grade artificial intelligences than is owed to human strangers.
  • Sebo, J. (2017). Agency and moral status. Journal of Moral Philosophy, 14(1), 1–22. https://doi.org/10.1163/17455243-46810046
    • Stating that recent developments in philosophy and psychology have clarified the need for more than one conception of agency, this paper presents a distinction between perceptual agency and propositional agency. The author argues that many nonhuman animals are perceptual agents and that many humans are agents of both kinds. The author goes on to assert that insofar as human and nonhuman animals exercise the same kind of agency, they have the same kind of moral status, and explores some of the moral implications of this idea. For example, what legal or political rights might humans or nonhumans have or lack, insofar as each acts perceptually?
  • Shepherd, J. (2021, forthcoming). The moral status of conscious subjects. In S. Clarke, H. Zohny, & J. Savulescu (Eds.), Rethinking Moral Status. Oxford University Press.
    • Offering an account of phenomenal value that focuses upon the structure of phenomenally conscious states at specific times and over time, this paper discusses the need for a theory of the grounds of moral status that could guide practical considerations regarding how to treat a wide range of potentially conscious entities, e.g., injured humans, cerebral organoids, chimeras, artificially intelligent machines, and non-human animals. The author states that this theory of moral status needs to be mapped onto practical considerations to clarify how both phenomenal value and moral status may vary across different entity types.
  • Smith, B. C. (2019). The promise of artificial intelligence: Reckoning and judgment. The MIT Press.
    • Defining a distinction between “reckoning” and “judgment,” the author presents a fundamental difference—not of degree, but of kind—between human and machine intelligences. Unpacking the notion of intelligence itself, the author examines the history of AI from its first-wave origins to recent advances in machine learning. Warning that superlative machine achievements in calculative reckoning do not translate to ethical and responsible judgment, they challenge the capability of machines to be moral rights holders. Delineating human and machine roles, the author suggests that the development of superior machine reckoning has powerful implications, ones that bear less on the machine’s moral status than on near-future human decision making.
  • Sullins, J. (2006). When is a robot a moral agent? International Review of Information Ethics, 6(12), 23–30. https://informationethics.ca/index.php/irie/article/view/136
    • This paper argues that robots can be seen as moral agents in select circumstances. Drawing a distinction between the categories of “person” and “moral agent,” the author asserts that robots are moral agents when there is a reasonable level of abstraction under which the machine has autonomous intentions and responsibilities. If the robot can be seen as autonomous from many points of view, then, the author states, the machine is to be viewed as a robust moral agent. This implies that highly complex interactive robots of the future will be moral agents with corresponding rights and responsibilities. 
  • Sumner, L. W. (1996).* Welfare, happiness, and ethics. Clarendon Press.
    • This book presents an original theory of welfare which closely connects welfare with happiness or life satisfaction. The author provides a defense of welfarism, which argues that welfare is the only basic ethical value. That is, welfare is the only thing for which one has a moral reason to promote for its own sake.
  • Tannenbaum, J., & Jaworska, A. (2021). The grounds of moral status. In Edward N. Zalta (Ed.), Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/spr2021/entries/grounds-moral-status
    • This entry in the Stanford Encyclopedia of Philosophy offers an overview and bibliography on the titular topic. This entry was substantially updated in March 2021 to reflect current scholarship.
  • Tavani, H. T. (2018). Can social robots qualify for moral consideration? Reframing the question about robot rights. Information, 9(4), 1–16. https://doi.org/10.3390/info9040073
    • This author contends that the question of whether or not robots deserve rights needs to be reframed and refined, asking instead whether or not social robots qualify for moral consideration as moral patients. Social robots are understood as physically embodied robots that are socially intelligent and interact with humans in a similar manner to the way humans interact with one another. The author appeals to the work of Hans Jonas in arguing for the conclusion that social robots are moral patients and, consequently, deserve moral consideration.
  • Thomson, J. J. (1990).* The realm of rights. Harvard University Press.
    • Distinguishing the idea of an individual possessing a right, duty, or claim from the idea of what ought to be done in the world, the author asserts that rights hold an independent status in the moral realm. In dedicating significant attention to the moral status of claims within this discussion, this book addresses, among other angles, the ability to forfeit claims, and when it is or is not permissible to prevent infringement upon a right or claim of another.
  • Wetlesen, J. (1999). The moral status of beings who are not persons: A casuistic argument. Environmental Values, 8(3), 287–323. https://doi.org/10.3197/096327199129341842
    • Asking who or what can have a moral status, this paper argues for a biocentric position that ascribes inherent moral status value to all individual living organisms. This position, the author states, must be defended against an anthropocentric position. The author presents an argument for equal moral status value for moral persons and agents, and gradual moral status value for nonpersons, according to their degree of similarity with moral persons. The argument is constructed as a casuistic argument, proceeding by analogical extension from persons to nonpersons.

Chapter 16. Could You Merge with AI? Reflections on the Singularity and Radical Brain Enhancement (Cody Turner and Susan Schneider)

https://www.doi.org/10.1093/oxfordhb/9780190067397.013.19

  • Benedikter, R., & Fathi, K. (2019). The future of the human mind: Techno-anthropological hybridization? Challenge, 62(1), 77–95. https://doi.org/10.1080/05775132.2018.1560943 
    • The authors speculate about a ‘neuro-industrial complex’ that they posit is emerging out of the military-industrial complex as human-machine interaction is slowly being replaced by human-machine convergence. They argue that, rather than obstruct or obfuscate the humanistic argument, the consciousness industry and those involved in future (and present) machine-human convergence technologies should foster both the humanism and transhumanism discussions, integrating them and bringing them into steady dialogue before conflict emerges.
  • Benedikter, R., & Siepmann, K. (2016). “Transhumanism”: A new global political trend? Challenge, 59(1), 47–59. https://doi.org/10.1080/05775132.2015.1123574 
    • Working within the context of emerging transhumanism political movements, the authors discuss the growing global interest in advancing humans ‘beyond’ what they currently are through technology. They observe discordance in transhumanist political movements themselves and examine the speculation of external observers on how these movements can threaten current hegemonic systems. They caution that these neo-political groups and movements should be given more serious consideration.
  • Biocca, F. (1996). Intelligence augmentation: The vision inside virtual reality. In B. Gorayska & J. L. Mey (Eds.), Cognitive technology: In search of a humane interface (pp. 59–75). Elsevier Science. https://doi.org/10.1016/S0166-4115(96)80023-9
    • This chapter considers the nature of reality itself as virtual reality simulations become increasingly realistic and immersive. The author goes beyond the obvious sensory augmentation that comes with virtual reality and explores how virtual environments can augment cognition by facilitating the projection of complex ideas onto a visible medium. The outsourcing and expansion of one’s imagination is presented as an amplification of one’s cognitive abilities.
  • Bostrom, N., & Roache, R. (2007).* Ethical issues in human enhancement. In T. S. Petersen, J. Ryberg, & C. Wolf (Eds.), New waves in applied ethics (pp. 120-152). Palgrave Macmillan. 
    • This chapter surveys issues in human enhancement ethics. The authors highlight scholars’ treatment of the therapy / enhancement distinction, pointing out that this distinction is often ambiguous and that some thinkers reject it altogether.
  • Bostrom, N., & Roache, R. (2011).* Smart policy: Cognitive enhancement and the public interest. In J. Savulescu, R. T. Meulen, & G. Kahane (Eds.), Enhancing human capacities (pp. 138-152). Wiley-Blackwell. 
    • This paper discusses the nature and ethics of cognitive enhancement. The authors address several related policy issues, including drug approval criteria, research funding, and regulation of access.
  • Bostrom, N. (2014).* Superintelligence: Paths, dangers, strategies. Oxford University Press.
    • This book covers the history of artificial intelligence, paths to superintelligence, and forms the latter may take, including brain-computer interfaces. Bostrom then considers the prospect of an intelligence explosion, and several challenges posed by the control problem.
  • Buchanan, A. (2011). Beyond humanity? The ethics of biomedical enhancement. Oxford University Press.
    • This book addresses a number of issues in the context of human enhancement, including the therapy / enhancement distinction, human development, character concerns, human nature, conservatism, unintended bad consequences, moral status, and distributive justice. The author offers a general outlook that is, if not pro-enhancement, then anti-anti-enhancement.
  • Burden, D., & Savin-Baden, M. (2019). Virtual humans: Today and tomorrow (1st ed.). CRC Press. https://doi.org/10.1201/9781315151199 
    • The authors examine the concept of the ‘virtual human’, defining the term against the backdrop of artificial intelligence and concepts including identity, bodies, agency, and digital immortality. Not just theoretical, this book is practical as well, grounded in examination of, and speculation about, the future roles that virtual humans might play in our society.
  • Chalmers, D. J. (2016).* The singularity: A philosophical analysis. In S. Schneider (Ed.), Science fiction and philosophy: From time travel to superintelligence (2nd ed., pp. 171-224). Wiley-Blackwell. 
    • This paper offers a comprehensive study of the singularity. The author explains the logic behind the singularity, as well as how it may or may not be promoted. The author then discusses mind-uploading and personal identity in the context of surviving in a post-singularity world.
  • Clark, A., & Chalmers, D. J. (1998).* The extended mind. Analysis, 58(1), 7-19. http://dx.doi.org/10.1093/analys/58.1.7 
    • The authors’ extended mind hypothesis suggests that the environment plays an active role in our mental processes, with the implication that external artifacts that support cognition (such as the paper’s well-known notebook example) can be conceptualized as wrapped up in our very identities.
  • Danaher, J., & Petersen, S. (2020). In defence of the hivemind society. Neuroethics, 14(2), 253–267. https://doi.org/10.1007/s12152-020-09451-7 
    • Observing that the concept of the ‘hivemind’ is often depicted as a frightening and grossly dehumanizing future scenario, the authors provide an alternative approach, arguing that rather than seeing the integration of minds, bodies, and technologies as dystopian, we should recognize the possibility that a hivemind society could help us flourish. These arguments are presented not as a call to embrace the hivemind society, but rather as an invitation to consider it from an axiological perspective.
  • Fukuyama, F. (2002). Our posthuman future: Consequences of the biotechnology revolution. Picador.
    • This book contributes to the discussion of human enhancement ethics. The author argues that transhumanism is the world’s most dangerous idea because tampering with human nature threatens to undermine the basis for human dignity and rights. This book reflects on the future of biotechnology, and how it might be regulated.
  • Giesen, K. (2020). The transhumanist ideology and the international political economy of the fourth industrial revolution. In K. Giesen (Ed.), Ideologies in world politics (pp. 143–156). Springer VS Wiesbaden. https://doi.org/10.1007/978-3-658-30512-3_9 
    • Taking a political-economic approach, the author argues that the ‘fourth industrial revolution’ represents a notable break in the evolution of traditional capitalist ideologies, and that the transhumanist movement is both more granular and more intertwined with corporate interests than has previously been observed. The author posits that grand-plan economics and politics form the backdrop against which vested high-tech corporate interests can pivot and maneuver so that the transhumanist movement works to their continuing benefit.
  • Gleiser, M. (2015). Welcome to your transhuman self. In J. Brockman (Ed.), What to think about machines that think: Today’s leading thinkers on the age of machine intelligence (pp. 54-55). Harper Perennial.
    • This paper reflects on the human-machine integration scenario. The author points out that this process of cyborgization is already underway, with cell phones and social media existing along the same spectrum as mechanical limbs and brain implants.
  • Goertzel, B. (2012). Should humanity build a global AI nanny to delay the singularity until it’s better understood? Journal of Consciousness Studies, 19(1), 96–111.
    • Positing that if the AI Singularity is not reached within the next few centuries, the reasons will be purely motivational (i.e., rooted in fear or caution), the author suggests the idea of an ‘AI Nanny’. The ‘AI Nanny’ would be a middle-ground entity between current artificial intelligence and superintelligent AI, which could be used to purposefully delay the Singularity until humanity is both confident and capable enough to unleash the technology in a non-harmful way.
  • Hume, D. (1985). A treatise of human nature (E. C. Mossner, Ed.). Penguin Classics.
    • This book is notable for its chapter on personal identity. Hume expresses a skeptical view of personal identity, or the self, now known as bundle theory. Essentially, humans are collections of impressions, constantly in flux. There is no ‘I’ over and above these impressions which can be said to possess them. 
  • Kagan, S. (2012). Death. Yale University Press.
    • This book is a survey of philosophical issues related to death, including, for our purposes, personal identity, and different criteria thereof, such as the soul, body, and mind. Kagan himself endorses the body criterion but believes persistence of personality is what matters in survival. This distinction, due to Parfit, has interesting implications for some of the scenarios the authors explore. With mind-uploading, for example, it may be the case that one dies, but this does not matter.
  • Karaman, F. (2021). Ethical issues in transhumanism. In Research anthology on emerging technologies and ethical implications in human enhancement (pp. 122-139). IGI Global. 
    • This chapter argues that transhumanism is unavoidable because technology exercises greater control over society than society does over technology. It advocates for allocating academic resources to the preemptive discussion of the issues that society will face once transhumanist technologies arrive.
  • Kurzweil, R. (2005).* The singularity is near: When humans transcend biology. Viking. 
    • This book elaborates on exponential growth in science and technology, with a focus on the intersection of genetics, robotics, and nanotechnology. The author then anticipates how it will transform the human body, brain, and, more generally, our very way of life, on up to the mind-uploading scenario.
  • Lamola, M. J. (2021). The future of artificial intelligence, posthumanism and the inflection of Pixley Isaka Seme’s African humanism. AI & Society, 37(1), 131–141. https://doi.org/10.1007/s00146-021-01191-3 
    • This article observes that popular scholarly notions of transhumanism as well as ideas of what constitute the human body and soul stem from Euro-American conceptions and intellectual heritage. The author contrasts this ‘transhumanist programme’ with the ideas of twentieth-century Pan-Africanist thinker Pixley ka Isaka Seme, providing a critical alternative and exploring ideas of African humanism and technological philosophies.
  • Locke, J. (1997).* An essay concerning human understanding (R. Woolhouse, Ed.). Penguin Classics.
    • This essay is notable here for its chapter on personal identity. Locke presents a number of original thought experiments designed to test our intuitions about what we really are. He ultimately defends a psychological criterion of personal identity; in particular, psychological connectedness, with an emphasis on memory.
  • Mercer, C., & Trothen, T. J. (2021). Mind uploading: Cyber beings and digital immortality. In Religion and the technological future: An introduction to biohacking, artificial intelligence, and transhumanism (pp. 161-179). Springer International Publishing AG. 
    • Exploring the philosophical, practical, and religious aspects of ‘whole brain emulation’, the authors contribute to the transhumanist discussion by posing questions about how such emulation could be achieved, what notions of the ‘self’ would remain, and how conceptions of identity might change or be eliminated entirely. Abrahamic notions and beliefs about what constitutes a human body are considered, as well as the as-yet-undetermined legal status of such entities.
  • More, M., & Vita-More, N. (Eds.). (2013). The transhumanist reader: Classical and contemporary essays on the science, technology, and philosophy of the human future. Wiley-Blackwell.
    • This book covers a broad set of topics pertaining to transhumanism including the intelligent filtering of information, enhanced reality, and mind uploading. Additionally, it examines the technologies required to achieve these goals. The book ends with a discussion about whether transhuman enhancement should be a right, and the dangers it brings to the human species. 
  • Musk, E. (2019). An integrated brain-machine interface platform with thousands of channels. Journal of Medical Internet Research, 21(10), e16194. https://doi.org/10.2196/16194
    • This paper describes the development of high-bandwidth brain-machine interfaces (BMIs) by the company Neuralink. These devices serve as research platforms in rodents, with the ultimate goal of being fully implantable in humans. The highest-priority applications are restoring motor function to those with spinal cord injuries and other immediate therapeutic uses, but augmenting human mental abilities with machine intelligence is also a possible avenue.
  • Nagel, T. (1979). Mortal questions. Cambridge University Press.
    • This book contains the classic essay “What is it like to be a bat?”, in which the author characterizes conscious experience in terms of its ‘what-it-is-likeness’ and reflects on the hard problem of consciousness.
  • Nietzsche, F. (2013). On the genealogy of morals: A polemic (M. A. Scarpitti, Trans.). Penguin Classics. 
    • Nietzsche provides another take on the view that the self is an illusion, or grammatical fiction: there are actions, but no agents.
  • Raisamo, R., et al. (2019). Human augmentation: Past, present and future. International Journal of Human-Computer Studies, 131, 131-143.
    • This paper reflects on how humans have historically augmented their abilities, from basic technologies such as eyeglasses to substances such as caffeine. It reflects on how the definition of the human is slowly changing over time relative to the first populations of the species. The authors specify three aspects of human experience that are augmentable: senses, action, and cognition. The paper discusses concerns about augmentation such as privacy, safety, and accessibility, since those without access to advanced augmentation will be at a significant disadvantage.
  • Schneider, S. (2008). Future minds: Transhumanism, cognitive enhancement and the nature of persons. Neuroethics Publications. https://repository.upenn.edu/neuroethics_pubs/37/
    • This paper examines the philosophical implications of transhuman enhancements, asking, namely, whether the person/entity at the end of the enhancement process, if significantly different from the original person, qualifies for the same rights and treatment. The question of how to treat intelligent agents in general is of great ethical concern as humanity gets closer to creating strong artificial intelligence capable of feeling emotions, and the same holds true for future cyborgs.
  • Shi, Z., et al. (2016). Brain-machine collaboration for cyborg intelligence. In Z. Shi, S. Vadera, & G. Li (Eds.), International Conference on Intelligent Information Processing (pp. 256-266). Springer.
    • This paper considers collaboration between human and machine intelligence in a cyborg system based on two paradigms: environment awareness and motivation. It focuses on the latter, claiming that motivation is the cause of action and is important for collaboration. The authors offer an algorithm for structuring human and machine interactions based on recent state-of-the-art machine learning methods.
  • Sorgner, S. L. (2009). Nietzsche, the overhuman, and transhumanism. Journal of Evolution and Technology, 20(1), 29-42.
    • This paper claims that there are more similarities than initially recognized between the posthuman created by transhumanism and the overhuman concept introduced by Nietzsche. The author discusses how the overhuman is the result of the human aspiration for self-improvement and the desire to overcome one’s limitations. In many ways, the posthuman that results from transhumanism satisfies these roles.
  • Steinert, S., & Friedrich, O. (2019). Wired emotions: Ethical issues of affective brain–computer interfaces. Science and Engineering Ethics, 26(1), 351–367. https://doi.org/10.1007/s11948-019-00087-2
    • The authors examine the interactions between brain-computer interfaces (BCIs) and affective states (an area of exploration that can be considered part of emotional AI), raising concerns about the monitoring and even the potential influencing of affective states (i.e., emotions). While the use of affective BCIs is at this time mainly limited to clinical settings, which generally have appropriate control measures in place, the authors caution that applications of this technology will become considerably more problematic once it is commercialized and marketable. They posit that, far from being transparent to users, such technology will give political and commercial actors dangerous potential to exert inappropriate and undue influence and manipulation.
  • Weinberger, S., & Greenbaum, D. (2016). Are BMI prosthetics uncontrollable Frankensteinian monsters? Brain Computer Interfaces, 3(3), 149–155. https://doi.org/10.1080/2326263X.2016.1207495 
    • This article discusses the legal and ethical issues that stem from brain-machine interfaces (BMIs), positing that the use of artificial intelligence to control these implants can seriously hamper investigations into legal cause and effect, undermining the cornerstone of determining criminal guilt. The authors observe that placement within the brain, while suited to effective motor control, means the implants are likely to be driven by preconscious rather than conscious thought, calling into question notions such as free will and agency.
  • Wu, Z., et al. (2016). Cyborg intelligence: Recent progress and future directions. IEEE Intelligent Systems, 31(6), 44-50.
    • This paper gives practical advice regarding how to integrate human and machine intelligence by proposing frameworks to decode neural signals. It covers multimodal sensory integration, the cognitive cooperation and awareness of goals between the human mind and machine elements required for effective collaboration, and recent applications of brain signal decoding such as hand gesture recognition.

Chapter 17. Are Sentient AIs Persons? (Mark Kingwell)

https://www.doi.org/10.1093/oxfordhb/9780190067397.013.21

  • Al-Amoudi, I., & Lazega, E. (Eds.). (2019). Post-human institutions and organizations: Confronting the matrix. Routledge.
    • The book considers the place of Artificial Intelligence (AI) in society, including the household, health institutions, and the military. The authors consider the tradeoffs of digitizing and bureaucratizing the world and take up questions such as AI personhood in order to understand how societies will continue to change amid both the threat and the promise of AI enhancing social, political, and commercial processes.
  • Anderson, S. L. (2016). Asimov’s “three laws of robotics” and machine metaethics. In S. Schneider (Ed.), Science fiction and philosophy: From time travel to superintelligence (2nd ed., pp. 290-307). Wiley-Blackwell. 
    • This chapter argues that treating intelligent robots like slaves under Asimov’s “three laws of robotics” would be misguided. Such entities could follow ethical principles better than most humans and might even warrant consideration as ethical advisors, as well as candidates for moral standing or rights. The fact that humans feel the need to treat intelligent robots as slaves reveals a weakness that makes it difficult for people to serve as exemplary ethical arbiters. Therefore, Asimov’s three laws of robotics are an unsatisfactory basis for machine ethics.
  • Aleksander, I. (2017). Partners of humans: A realistic assessment of the role of robots in the foreseeable future. Journal of Information Technology, 32(1), 1–9. https://doi.org/10.1057/s41265-016-0032-4
    • This paper sets out to review the actual level of competence being achieved in robotics research and the plausible impact that this is likely to have on human control over life. The author argues that cognition in machines and artificial forms of consciousness lead to operations in a set of tasks that are different from those that are available to truly cognitive and conscious human beings. Therefore, a major category error occurs in predictions of serious threats that Artificial Intelligence (AI) poses to humanity. 
  • Ashrafian, H. (2017). Can artificial intelligences suffer from mental illness? A philosophical matter to consider. Science and Engineering Ethics, 23(2), 403-412. https://doi.org/10.1007/s11948-016-9783-0 
    • The author suggests that the existence of artificially intelligent psychopathology can be interpreted through philosophical perspectives on mental illness. The possibility that mental illness can occur in AI calls for the consideration that such entities exhibit the capacity to achieve consciousness, sentience, and rationality. Given this potential, the author argues that it is important to consider the ‘mental disorders’ from which machines might suffer.
  • Basl, J., & Sandler, R. (2013). The good of non-sentient entities: Organisms, artifacts, and synthetic biology. Studies in History and Philosophy of Biological and Biomedical Sciences, 44(4), 697-705. http://dx.doi.org/10.1016/j.shpsc.2013.05.017 
    • The authors employ an etiological account of teleology to demonstrate that teleological organization is sufficient for synthetic organisms and non-sentient entities to exhibit a good of their own, based upon the details of their goal-directedness. If such entities have a good of their own, they are candidates for being directly morally considerable. This lays the groundwork for a broader conception of moral standing or rights.
  • Bergenfalk, J. (2019). AI and human rights: An explorative analysis of upcoming challenges. Human Rights Studies. http://lup.lub.lu.se/student-papers/record/8966323 
    • This paper explores the challenges that AI systems present for current human rights perspectives, focusing on four topics: consciousness, rights and agency, bias and discrimination, and socio-economic rights. The author argues that current guidelines are inadequate to accommodate the changes brought by AI, particularly those related to efficiency and human imperfection.
  • Birhane, A., & van Dijk, J. (2020). Robot rights? Let’s talk about human welfare instead. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 207-213). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375855 
    • The authors argue that robots are artifacts that function as mediators of human beings and therefore should not be granted rights. Instead, the authors believe the current debate on ‘robot rights’ should focus on how less privileged communities can be exploited by machines, and on the effect of this phenomenon on overall human welfare.
  • Bsheer, R. (2020). The limits of belonging in Saudi Arabia. International Journal of Middle East Studies, 52(4), 748–753. https://doi.org/10.1017/S002074382000104X
    • In this article, the author examines the conception of, and reaction to, the world’s first robot citizen: the humanoid “Sophia,” who was granted Saudi Arabian citizenship. The author observes that this grant of citizenship served the government’s agenda rather than that of the people. Unlike Saudi women calling for greater legal rights, Sophia makes no demands; she is a new “breed” of citizen, celebrated for being “obedient, politically passive, and economically productive.”
  • Chalmers, D. J. (1996). The conscious mind: In search of a fundamental theory. Oxford University Press. 
    • The author defends the view that consciousness is irreducibly subjective. Of particular interest, the author supports the possibility of artificial general intelligence exhibiting conscious awareness. They consider various objections, including Searle’s Chinese room argument, which holds that digital computers cannot have minds or human-like consciousness, however intelligently they may behave.
  • Chalmers, D. (2016). The singularity: A philosophical analysis. In U. Awret (Ed.), The singularity: Could artificial intelligence really out-think us? (pp. 12-88). Imprint Academic.
    • The author explores the role of humans in a world where the singularity comes to fruition and raises questions about uploaded consciousness and personal identity. Singularity refers to an event in which human intelligence is greatly surpassed by Artificial Intelligence (AI) through a recursive process in which intelligent machines design successor machines with ever-increasing levels of intelligence. The author considers the place of humans in a post-singularity world, with special attention to whether an uploaded human is conscious and whether uploading can preserve personal identity.
  • Colb, S. F., & Dorf, M. C. (2016). Beating hearts: Abortion and animal rights. Columbia University Press. https://doi.org/10.7312/colb17514
    • The authors focus on the applied ethics of discerning what rights are owed to sentient beings that do not fall into the category of “person.” Putting aside the necessity of proving personhood, among other tests for the entitlement of rights, they conclude that sentience, which they define as the “ability to have subjective experiences,” grounds a being’s entitlement to moral concern.
  • Deutsch, D. (2019). Beyond reward and punishment. In J. Brockman (Ed.), Possible minds: 25 ways of looking at AI (pp. 113-124). Penguin Press. 
    • The author argues that certain misconceptions about human thinking have led to misconceptions about machine thinking. The author demonstrates the inadequacy of Bayesian updating approaches to Artificial General Intelligence (AGI) and the need to better understand creativity. They support the idea that AGI, understood as the ability of an intelligent agent to learn any intellectual task that a human being can, is achievable. Furthermore, such entities would be considered persons.
  • Dick, P. K. (1968).* Do androids dream of electric sheep? Doubleday.  
    • A science fiction classic, following one bounty hunter’s pursuit of runaway androids. The novel raises philosophical issues, such as the possibility of empathic machines.  
  • Dragan, A. (2019). Putting the human into the AI equation. In J. Brockman (Ed.), Possible minds: 25 ways of looking at AI (pp. 134-142). Penguin Press. 
    • This chapter highlights the importance of defining human-compatible Artificial Intelligence (AI) in the context of the coordination problem and the value-alignment problem. The author argues that our relationship with intelligent machines should go both ways; that is, robots must model people and people must model robots.
  • Fowler, T. B. (2021). The limitations of artificial intelligence in light of Zubiri’s noology. Quaestio, 21, 233–258. https://doi.org/10.1484/j.quaestio.5.128394
    • Rapid advances in AI have led to questions about the ultimate capabilities of electronic devices and whether they will make humans obsolete at some time in the future. The author argues that the distinction between sensible intelligence and sentient intelligence is key to understanding the limitations of AI. AI can only operate as sensible intelligence-based devices, while sentient intelligence allows humans to carry out functions that sensible intelligence-based devices could never achieve. Therefore, sensible intelligence-based devices, like AI, will be restricted to amplifying human capabilities, but never replacing them. 
  • Freud, S. (2003).* The uncanny (D. McLintock, Trans.). Penguin Classics.  
    • Contains an essay by Freud of the same title, wherein he analyzes the concept of uncanniness. Freud discusses a number of uncanny motifs, such as the automaton. 
  • Gilbert, M., & Martin, D. (2022). In search of the moral status of AI: Why sentience is a strong argument. AI & Society, 37, 319–330. https://doi.org/10.1007/s00146-021-01179-z
    • This paper explores different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. The authors support the proposal that sentience is a strong argument for the moral status of an AI system, based on the Aristotelian principle of equality. However, no AI system is sentient given the current level of technological development.
  • Gleiser, M. (2015). Welcome to your transhuman self. In J. Brockman (Ed.), What to think about machines that think: Today’s leading thinkers on the age of machine intelligence (pp. 54-55). Harper Perennial.
    • This chapter reflects on the human-machine integration scenario. The author points out that this process of cyborgization is already underway, with cell phones and social media existing along the same spectrum as mechanical limbs and brain implants.
  • Gunkel, D. J. (2019). No brainer: Why consciousness is neither a necessary nor sufficient condition for AI ethics. CEUR Workshop Proceedings, 2287, 9. http://ceur-ws.org/Vol-2287/ 
    • The author argues that the question of moral and legal status for AI should focus on extrinsic social relations (a ‘relational turn’) rather than on intrinsic, ontological properties such as sentience and consciousness.
  • Harris, J., & Anthis, J. R. (2021). The moral consideration of artificial entities: A literature review. arXiv:2102.04215 
    • This paper contains a literature review of 294 relevant papers on the topic of whether robots deserve rights or any form of moral consideration. The authors find that the number of publications on this topic is growing exponentially, and most scholars view artificial entities as potentially warranting moral consideration. 
  • Hayward, T. (2005).* Constitutional environmental rights. Oxford University Press.  
    • This book makes the case for the human right to an adequate environment. This would be a right to nature, rather than a right of nature. One might consider a similar arrangement for some robots, or artificial intelligence systems, where rights concerning them are conceived as an extension of human rights.
  • Hildt, E. (2019). Artificial intelligence: Does consciousness matter? Frontiers in Psychology, 10, 1535. https://doi.org/10.3389/fpsyg.2019.01535
    • Unlike Colb and Dorf, this author argues that personhood, which is taken to presuppose sentience, is a necessary condition for the ascription of human rights and status. In this short opinion article, the author provides a descriptive primer on the different ways to test consciousness and suggests using these tools to establish “robothood,” which may grant “conscious” AIs a different set of rights and statuses than those of humans.
  • Johnson, D. G., & Verdicchio, M. (2018). Why robots should not be treated like animals. Ethics and Information Technology, 20, 291-301. https://doi.org/10.1007/s10676-018-9481-5 
    • The authors suggest that the analogies drawn between animals and robots, in relation to how humans might think about interacting with robots, are misleading. For example, the authors believe robots differ from animals since machines cannot experience suffering, which has implications for the moral status and rights of robots.
  • Jowitt, J. (2021). Assessing contemporary legislative proposals for their compatibility with a natural law case for AI legal personhood. AI & Society, 36(2), 499–508. https://doi.org/10.1007/s00146-020-00979-z
    • The author argues mere agency suffices for the ascription of personhood for AI. This contribution suggests a lower threshold—compared to that of proving consciousness—needs to be met for AI to be granted legal protection. The author suggests sentient AI will become global citizens, for they will be granted rights irrespective of their origins or uses. 
  • Kant, I. (1993).* Grounding for the metaphysics of morals (3rd ed., J. W. Ellington, Trans.). Hackett Publishing Company. (Original work published 1785). 
    • A central work in deontological ethics, as well as moral philosophy and rights theory more generally. This work contains arguments for the dignity and sovereignty of all moral agents. 
  • Kim, M.-S., & Kim, E.-J. (2013). Humanoid robots as “the cultural other”: Are we able to love our creations? AI & Society, 28(3), 309-318.
    • This paper applies the concept of the “Cultural Other” to theories about advanced robots and AI. Special focus is placed on the social, cultural, and religious implications of humans’ attitudes toward relationships between humans and robots. The authors propose that love for all living and nonliving beings (including mechanical entities) may be the key to the co-evolution of both species and the path to ultimate happiness.
  • Korsgaard, C. M. (2018). Fellow creatures: Our obligations to the other animals. Oxford University Press.
    • In this book, the author employs practical questions, such as the use of animals in warfare, to challenge Kant’s view that our obligations to non-human animals are indirect. The author argues all sentient creatures have a good and, in a sense, warrant treatment as ends-in-themselves, thus suggesting they are entitled to moral reciprocity and that their lives are to be respected. This account, which draws on Aristotelian and Kantian thought, is extended to suggest that the same respect and standards paid toward animals are owed to conscious machines.
  • Kymlicka, W. (1995).* Multicultural citizenship: A liberal theory of minority rights. Oxford University Press.
    • Liberal theory commonly construes rights as individualistic. The author argues that this tradition is compatible with a more collective understanding of them. These might concern language rights, group representation, or religious education – not at the level of particular people, but of entire identities. Notice that, as with rights attributed to animals or the environment, this is a case where the bearer of rights is unable to explicitly claim them, which may also apply to some artefacts and robots.
  • Lavelle, S. (2020). The machine with a human face: From artificial intelligence to artificial sentience. In S. Dupuy-Chessa & H. Proper (Eds.), International Conference on Advanced Information Systems Engineering (pp. 63-75). Springer. https://doi.org/10.1007/978-3-030-49165-9_6 
    • The author argues that, given the evolution of AI, the definition of ‘artificial intelligence’ is transforming to resemble ‘artificial sentience.’ However, the author argues that the traditional ‘Turing Test’ is an insufficient method of measuring this progress, and new tests need to be developed with conditions that can satisfy the concept of a humaniter: “an artificial creature that thinks, acts and feels like a human to the point that one cannot make the difference between the two.” The author also considers the limits of other tests, such as sentience tests, which fall short of the humaniter standard of truly acting and thinking as a human in a way that could enhance human-machine relations.
  • Lima, G., et al. (2020). Collecting the public perception of AI and robot rights. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2), 135. https://doi.org/10.1145/3415206
    • The authors explore public perception of granting rights to robots, using the findings from a large online experiment with 1,270 participants. The authors find that while participants are against robot rights, they are supportive of preventing ‘electronic cruelty.’ In addition, the authors find that how AI is presented to participants influences how positively they perceive their relationship with AI systems.
  • Locke, J. (1980).* Second treatise of government (C. B. Macpherson, Ed.). Hackett Publishing Company. 
    • A canonical source on the social contract and natural rights, which may influence how we think about their application to artificial intelligence. Pivotal in the development of liberal norms, the text defends a basis for personal freedom and private property, as well as ownership of one’s body and labor. 
  • Merleau-Ponty, M. (2012). Phenomenology of perception (D. A. Landes, Trans.). Routledge.  
    • A text in the tradition of French existentialism, this book elaborates on the primacy of perception. Merleau-Ponty’s discussion includes the topic of embodied phenomenology, which has influenced subsequent thinking about embodied cognition and its relevance to artificial intelligence. Ultimately, Merleau-Ponty concludes that the body is not a mere biological site or physical unit, but a medium through which one interprets one’s ‘lived reality.’
  • Mosakas, K. (2020). On the moral status of social robots: Considering the consciousness criterion. AI & Society, 35(4), 1-15. https://doi.org/10.1007/s00146-020-01002-1 
    • The author argues that AI systems do not deserve moral consideration because they fail to meet the ‘consciousness’ criterion. The author defends this argument through a set of definitions of ‘consciousness’ that they believe AI systems will not satisfy.
  • Nyholm, S., & Smids, J. (2020). Can a robot be a good colleague? Science and Engineering Ethics, 26(4), 2169–2188. https://doi.org/10.1007/s11948-019-00172-6
    • In this paper, the authors explore the unique ethical implications of robots working as colleagues, and how this relationship is easier to establish than friendships or romantic partnerships. Given a general disinterest in the inner lives of colleagues, the authors find that robots are likely to live up to many of the conditions associated with being good colleagues capable of establishing meaningful relationships.
  • Pinker, S. (2015). Thinking does not imply subjugating. In J. Brockman (Ed.), What to think about machines that think: Today’s leading thinkers on the age of machine intelligence (pp. 5-8). Harper Perennial. 
    • The author explains how a naturalistic, computational theory of reason opens the door to thinking machines. However, our fear of this prospect is unfounded, insofar as it stems from the projection of a malevolent, domineering psychology onto the very concept of intelligence. 
  • Robertson, G. (2013).* Crimes against humanity: The struggle for global justice (4th ed.). New Press.
    • Includes numerous examples of contemporary crimes against humanity. Relevant here for the distinction between these and war crimes.
  • Saavedra-Rivano, N. (2020). Mankind at a crossroads: The future of our relation with AI entities. International Journal of Software Science and Computational Intelligence, 12(3), 28-37. https://doi.org/10.4018/IJSSCI.2020070103
    • The author examines the impact of artificial sentient systems on mankind and argues that while the short-term prospects may be positive, in the long term this technology will only benefit the ‘privileged minority’ in becoming ‘superhumans.’ The paper also explores policy measures that can be taken to prevent this outcome.
  • Scanlon, T. M. (1998). What we owe to each other. Belknap Press. 
    • This book presents a modern form of contractualism. In its first part, the author argues that reasons are normatively fundamental, and argues against consequentialism and hedonism. In the second part of the book, he provides an account of wrongness as that which one could reasonably reject. The author suggests entities that cannot speak for themselves may nevertheless be accommodated by his system through advocates. Humans could, perhaps, assume the role of trustee to represent the interests of machines.
  • Searle, J. R. (1980).* Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417-457. http://dx.doi.org/10.1017/S0140525X00005756  
    • Includes Searle’s Chinese room argument, the upshot of which is that programs run by digital computers cannot be shown to possess understanding, or consciousness. The argument opposes functionalism and computationalism in philosophy of mind, as well as the possibility of artificial general intelligence. 
  • Shelley, M. (2013).* Frankenstein; or, The modern Prometheus (M. Hindle, Ed.). Penguin Classics. (Original work published 1818)
    • A gothic horror and science fiction classic, Frankenstein depicts a scientist by that same name, who succeeds in creating intelligent life. 
  • Singer, P. (2009).* Animal liberation (Updated edition). HarperCollins Publishers.  
    • A major contribution to the animal liberation movement. Singer’s argument for the equality of animals rests not on some conception of rights, but a preference utilitarian perspective. Exemplifies the theme of our expanding moral circle, and how it may grow to include conscious machines. 
  • Stamos, D. N. (2016). The myth of universal human rights: Its origin, history, and explanation, along with a more humane way. Routledge. 
    • The author argues that the idea of universal human rights is a myth and proposes a new framework for evaluating the moral worth of human beings.
  • Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Alfred A. Knopf. 
    • Covers the topics of intelligence, goal-directedness, and the future of artificial intelligence. The author proposes a theory of consciousness according to which subjective experience is a matter of information being processed in a particular kind of way. He places this in the context of a broadly utilitarian ethic, which ascribes moral standing to conscious machines. 
  • United Nations. (1948, December 10).* Universal declaration of human rights. https://www.un.org/en/universal-declaration-human-rights 
    • A significant 20th century document on the establishment of universal human rights. Its 30 articles were adopted under United Nations Resolution 217 in Paris, on December 10th, 1948.

Chapter 18. Autonomy (Michael Wheeler)

https://www.doi.org/10.1093/oxfordhb/9780190067397.013.22

  • Aguirre, A. (2020). Why those who care about catastrophic and existential risk should care about autonomous weapons. LessWrong. https://www.lesswrong.com/posts/Btrmh6T62tB4g9RMc/why-those-who-care-about-catastrophic-and-existential-risk 
    • This paper makes the case that the risks posed by autonomous weapons systems deserve more attention from the communities that work on studying and avoiding catastrophic and existential risks – such as the Future of Life Institute. After providing a classification of autonomous weapons, the author posits that lethal autonomous weapons systems are an “early test for artificial general intelligence safety, arms race avoidance, value alignment and governance”, and that lethal autonomous weapons could be considered weapons of mass destruction. The paper ends with a set of steps that can be taken to mitigate risks posed by autonomous weapons, such as providing an unambiguous description of autonomous weapons that should be prohibited, and engaging in agreements regarding the “proliferation, tracking, attribution, human control” of those that are not banned.  
  • Allen, C., et al. (2000).* Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12(3), 251-261.
    • This paper surveys ethical disputes, considers the possibility of a ‘moral Turing Test,’ and assesses the computational difficulties accompanying the different types of approach. Human-like performance, which is prone to include immoral actions, may not be acceptable in machines, but moral perfection may be computationally unattainable. The risks posed by autonomous machines ignorantly or deliberately harming people and other sentient beings are great. The development of machines with enough intelligence to assess the effects of their actions on sentient beings and act accordingly may ultimately be the most important task faced by the designers of artificially intelligent automata.
  • Arkin, R. C. (2010).* The case for ethical autonomy in unmanned systems. Journal of Military Ethics, 9(4), 332-341.
    • The underlying thesis of the research in ethical autonomy for lethal autonomous unmanned systems is that they will potentially be capable of performing more ethically on the battlefield than are human soldiers. In this article this hypothesis is supported by ongoing and foreseen technological advances and an assessment of the fundamental ability of human war fighters in today’s battlespace. If this goal of better-than-human performance is achieved, even if still imperfect, it can result in a reduction in non-combatant casualties and property damage consistent with adherence to the Laws of War as prescribed in international treaties and conventions and is thus worth pursuing vigorously.
  • Asaro, P. (2008).* How just could a robot war be? In P. Brey, A. Briggle, & K. Waelbers (Eds.), Current issues in computing and philosophy (pp. 50-64). Ios Press. 
    • This paper considers the fundamental issues of justice involved in the application of autonomous and semi-autonomous robots in warfare, beginning with an analysis of how robots may fit into the framework of just war theory. It considers how robots, “smart” bombs, and other autonomous technologies might challenge the principles of just war theory, and how international law might be designed to regulate them. It concludes that deep contradictions arise between the principles intended to govern warfare and our intuitions regarding the application of autonomous technologies to war fighting.
  • Awad, E., et al. (2018).* The moral machine experiment. Nature, 563(7729), 59–64.
    • To address the challenge of quantifying societal expectations of ethical principles that should guide machine behavior, the authors deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. Here, the authors describe the results of this experiment. The paper summarizes global moral preferences; documents individual variations in preferences, based on respondents’ demographics; and reports cross-cultural ethical variation, uncovering three major clusters of countries. Finally, the authors argue that these differences correlate with modern institutions and deep cultural traits.
  • Aydin, C. (2021). Extimate technology: Self-formation in a technological world. Routledge.
    • This book asks how we ought to understand ourselves in a world where emerging technologies shape our identities and opportunities. To answer this question, the author deconstructs the inside-outside dualism that has often characterized theorizations of the human-technology relationship. Drawing on philosophers such as Nietzsche, Peirce, and Lacan, the author argues that we should understand technological self-formation as a form of sublimation, formalized under the label of Technological Sublimation Theory. 
  • Beer, D. (2017). The social power of algorithms. Information, Communication, & Society, 20(1), 1-13. 
    • This article approaches algorithms from a social science perspective. The author begins by analyzing the social power of algorithms themselves, exploring the functionality of algorithms and how they are deployed. The article then engages the idea of algorithms more broadly, arguing that the notion of algorithms invokes broader rationalities that are an important part of their social power. To that end, investigating the idea of the algorithm and its part in broader social visions can enable researchers to understand the relationship between algorithms and objectivity, as well as the wider governmentalities with which algorithms are involved.
  • Boden, M. A. (1996).* Autonomy and artificiality. In M. A. Boden (Ed.), The philosophy of artificial life (pp. 95-107). Oxford University Press.
    • This new volume in the acclaimed Oxford Readings in Philosophy series offers a selection of the most important philosophical work being done in the new and fast-growing interdisciplinary area of artificial life. Artificial life research seeks to synthesize the characteristics of life by artificial means, particularly employing computer technology. The essays here explore such themes as the nature of life, the relation between life and mind, and the limits of technology.
  • Boden, M. A. (2016). AI: Its nature and future. Oxford University Press.
    • This book describes how research in artificial intelligence has provided fruitful results in robotics and theoretical biology and covers the history of the increasingly specialized field of AI, highlighting its successes and looking towards its future. Finally, it argues that AI has been valuable in helping to understand the mental processes of memory, learning, and language for living creatures. 
  • Bostrom, N. (2014).* Superintelligence: Paths, dangers, strategies. Oxford University Press.  
    • This book argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists, and the outcome could be an existential catastrophe for humans.
  • Calvo, R. A., et al. (2020). Supporting human autonomy in AI systems: A framework for ethical enquiry. In Ethics of digital well-being (pp. 31-54). Springer.
    • This paper is concerned with how the design and development practices of AI systems can serve to protect and foster human autonomy. It proposes a model (named “METUX”) that identifies different “spheres of technology experience” and applies this model to a real-world case study of an AI-enhanced video recommender system. Some of the central take-aways that emerge from applying this model are (a) that third-party interests have autonomy-related consequences that are not impartial, (b) that design for (human) autonomy is “an ethical imperative,” and (c) that any autonomy analysis must at least capture the different spheres of technology experience investigated in the METUX model to be sufficiently comprehensive.
  • Chandler, D., & Fuchs, C. (2019). Digital objects, digital subjects: Interdisciplinary perspectives on capitalism, labor, and politics in the age of Big Data. University of Westminster Press.
    • This dialogic anthology engages critically with Big Data capitalism. It traces points of contention regarding digital technologies and the forms of labor and agency that are facilitated by their capitalistic modalities. Highlighting the potentialities and pitfalls of digital activism, the book asks whether ubiquitous datafication and surrounding data discourses lead to problematic forms of digital positivism or new possibilities in both theory and practice.
  • Cugurullo, F. (2020). Urban artificial intelligence: From automation to autonomy in the smart city. Frontiers in Sustainable Cities, 1(1), 1-14.
    • This article argues that innovations in artificial intelligence are turning cities into ‘autonomous urban creature[s]’ that have remained largely undertheorized. Developing a framework specifically for engaging urban artificial intelligence, the author explores the ways in which AI is overtaking the management of several city services and enabling a broader transition from automation to autonomy. Drawing on various projects underway in Masdar City, the author unpacks the broader politico-economic agenda in which this transition towards autonomous AI is taking place. The article concludes by proposing a research agenda for future investigations into this so-called autonomous city. 
  • Dennett, D. C. (1984).* Elbow room: The varieties of free will worth wanting. MIT Press.
    • This book argues that classical formulations of the free will problem in philosophy depend on misuses of imagination, and the author disentangles the philosophical problems of real interest from the “family of anxieties” they get enmeshed in – imaginary agents, bogeymen, and dire prospects that seem to threaten our freedom. The author examines the problem of how anyone can ever be guilty, and what the rationale is for holding people responsible and even, on occasion, punishing them.
  • Gill, M. L., & Lennox, J. G. (2017). Self-motion: From Aristotle to Newton. Princeton University Press.
    • This book contains a collection of essays on the historical development of the concept of self-motion. The authors’ discussion of the existence of self-movers and the qualities of self-motion includes perspectives from classical, Hellenistic, medieval, and early modern scholars in philosophy and science. The implications of arguments surrounding self-motion are fundamental to many theories on agency and autonomy across relevant disciplines in philosophy, science, technology, law, and society.
  • Gunkel, D. J. (2017). Mind the gap: Responsible robotics and the problem of responsibility. Ethics and Information Technology, 22. https://doi.org/10.1007/s10676-017-9428-2
    • This essay responds to the question concerning robots and responsibility, by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. The essay considers three instances where recent innovations in robotics challenge this standard operating procedure by opening gaps in the usual way of assigning responsibility. Finally, the essay concludes by evaluating the three different responses—instrumentalism 2.0, machine ethics, and hybrid responsibility—that have been made in face of these difficulties in an effort to map out the opportunities and challenges of and for responsible robotics.
  • Guo, X., et al. (2014). Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. Advances in Neural Information Processing Systems, 4, 3338–3346.
    • This paper introduces an algorithm for playing Atari games. The authors’ approach involves an agent estimating the value of a possible action by running several simulations. The algorithm describes how the agent can efficiently use the results of the simulations to adjust the policy for choosing actions. Once trained, these programs can play in real-time with only their learned parameters, providing important lessons for the way we think about the agency and autonomy of human and nonhuman actors. (A minimal illustration of this kind of simulation-based action selection appears after this list.)
  • He, H., et al. (2021). The challenges and opportunities of human-centered AI for trustworthy robots and autonomous systems. IEEE Transactions on Cognitive and Developmental Systems.
    • This paper investigates the “key facts of human centered AI (HAI) for trustworthy Robots and Autonomous Systems”. Specifically, five key properties that trustworthy autonomous systems should possess are discussed, including requirements on cyber-security, effective human-machine interaction, and robust handling of uncertainty and dynamic surroundings. This discussion is followed by an analysis of the challenges in implementing trustworthy autonomous systems, focusing on the five properties discussed above. Lastly, a new acceptance model of autonomous systems is provided to facilitate the implementation of trustworthy systems by design. 
  • Heyns, C. (2017).* Autonomous weapons in armed conflict and the right to a dignified life: An African perspective. South African Journal on Human Rights, 33(1), 46-71.
    • This article argues that the question that will haunt the future debate over autonomous weapons is: what if technology develops to the point where fully autonomous weapons surpass human targeting, and can potentially save many lives? Would human rights considerations in such a case not militate for the use of autonomous weapons, instead of against it? This article argues that the rights to life and dignity demand that even under such circumstances, full autonomy in force delivery should not be allowed. The article emphasizes the importance placed on the concept of a ‘dignified life’ in the African human rights system.
  • Kang, M. (2011). Sublime dreams of living machines: The automaton in the European imagination. Harvard University Press.
    • This book gives a detailed history of Western thought on automation by examining developments in intellectual, cultural, and artistic expressions of automata. The author argues for a distinction from ancient conceptions of animated objects and outlines the development of mechanistic philosophy. The book describes the influence of automata across disciplines through its appearance in works such as Descartes’ model of biological mechanism and Hobbes’s Leviathan to more modern developments and influences. 
  • Kitchin, R., & Fraser, A. (2020). Slow computing: Why we need balanced digital lives. Bristol University Press. 
    • This book draws attention to the ways new and deterministic technologies seem to be accelerating public and private life, both impacting our well-being and allocating resources and opportunities. As the limitations and inequalities of these technologies become more apparent, the authors ask whether it is possible to enjoy the benefits of contemporary computing while protecting individual and collective autonomy. The book makes several recommendations for resisting the dangers of new technologies and taking back control of our digital lives, drawing on the ideas and vocabularies of the existing ‘slow movement.’
  • Lazzarato, M. (2014). Signs and machines: Capitalism and the production of subjectivity. Semiotext(e). 
    • This book argues that language and public space remain fundamentally connected under the conditions of contemporary capitalism. It highlights the semiotic ‘motors’ that fuel capitalism’s social and technical operations, producing networks of subjection and enslavement within social spaces. Suggesting that this production of subjectivities represents capitalism’s most important work, the author explores the necessary conditions for a moment of political rupture and resistance, particularly the types of organizations and collectives that must be constructed to facilitate this work.
  • Lin, P. (2016).* Why ethics matters for autonomous cars. In M. Maurer, C. Gerdes, B. Lenz, & H. Winner (Eds.), Autonomous driving: Technical, legal and social aspects (pp. 69-85). Springer.
    • This chapter explains why ethics matters for autonomous road vehicles, looking at the most urgent area of their programming. The chapter acknowledges that as nearly all of this work is still in front of the industry, the questions raised do not have any definitive answers at such an early stage of the technology.
  • Mindell, D. A. (2015).* Our robots, ourselves: Robotics and the myths of autonomy. Penguin.
    • This book argues that the stark lines we’ve drawn between human and not human, manual and automated, are not helpful for understanding our relationship with robotics. The book clarifies misconceptions about the autonomous robot, offering instead a hopeful message about what the author calls “rich human presence” at the center of the technological landscape we are now creating.
  • Mnih, V., et al. (2015).* Human-level control through deep reinforcement learning. Nature, 518(7540), 529-533.
    • In this paper, the authors use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. The research demonstrates that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture, and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent capable of learning to excel at a diverse array of challenging tasks.
  • Niker, F., et al. (2018).* Updating ourselves: Synthesizing philosophical and neurobiological perspectives on incorporating new information into our worldview. Neuroethics, 11(3), 273-282.
    • This paper argues for the importance, to theories of autonomous agency, of the capacity to appropriately adapt our values and beliefs to changing circumstances in light of relevant experiences and evidence. It presents a plausible philosophical account of this process, which is generally applicable to theories about the nature of autonomy, both internalist and externalist alike. The paper then evaluates this account by providing a model for how the incorporation of values might occur in the brain, one that is inspired by recent theoretical and empirical advances in our understanding of the neural processes by which our beliefs are updated by new information.
  • Rault, R., & Trentesaux, D. (2018). Artificial intelligence, autonomous systems and robotics: Legal innovations. In Service orientation in holonic and multi-agent manufacturing (pp. 1-9). Studies in Computational Intelligence, 762. Springer.
    • This paper focuses on the legal aspects of the development of artificial intelligence systems with applications in “embedded autonomous systems, cyber-physical systems and self-organizing systems”. It specifically focuses on how a lawyer could “apprehend” AI, as well as the existing and future legal innovations to handle issues related to autonomous AI systems. 
  • Rose, G. (2017). Posthuman agency in the digitally mediated city: Exteriorization, individuation, reinvention. Annals of the Association of American Geographers, 107(4), 779-793.
    • This article engages with contemporary accounts of the relationship between urban space and digital technology, tracing the connections between this emergent literature and posthuman philosophy. To the extent that this literature emphasizes the production of urban space by software and hardware, the author suggests that this critical scholarship has left the agency of human beings undertheorized and imagined agency in terms of an agential resistance to the influence of specific technologies. In contrast, the article draws on the philosophy of Bernard Stiegler to imagine a specifically posthuman agency, one that is both co-constituted with technologies and diverse, simultaneously individuated and exteriorized.
  • Rusu, A. A., et al. (2016).* Progressive neural networks. arXiv:1606.04671.
    • Learning to solve complex sequences of tasks, while both leveraging transfer and avoiding catastrophic forgetting, remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. The paper evaluates this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games) and shows that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, the paper asserts that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
  • Santoni de Sio, F., & Van den Hoven, J. (2018).* Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI, 5(15). https://doi.org/10.3389/frobt.2018.00015
    • This paper lays the foundation of a philosophical account of meaningful human control, based on the concept of “guidance control” as elaborated in the philosophical debate on free will and moral responsibility. Following the ideals of “Responsible Innovation” and “Value-sensitive Design,” the paper’s account of meaningful human control is cast in the form of design requirements. It identifies two general necessary conditions to be satisfied for an autonomous system to remain under meaningful human control: first, a “tracking” condition, according to which the system should be able to respond to both the relevant moral reasons of the humans designing and deploying the system and the relevant facts in the environment in which the system operates; second, a “tracing” condition, according to which the system should be designed in such a way as to grant the possibility to always trace back the outcome of its operations to at least one human along the chain of design and operation.
  • Sawyer, B. D., et al. (2021). Human factors and ergonomics in design of A3: Automation, autonomy and artificial intelligence. In G. Salvendy & W. Karwowski (Eds.), Handbook of human factors and ergonomics (pp. 1385-1416).
    • This book chapter focuses on the human factors and ergonomics principles in the design of A3: Automation, Autonomy and Artificial Intelligence. It argues that the “advent of autonomy which has no need for humans is not unlikely, but likely undesirable”, and proceeds to outline tools from the human factors and ergonomics literature to inform the design of A3 systems so as to maximize the benefit they might bring to humans.
  • Sharkey, A. (2019).* Autonomous weapons systems, killer robots and human dignity. Ethics and Information Technology, 21(2), 75-87.
    • This paper critically examines the relationship between human dignity and Autonomous Weapon Systems (AWS). Three main types of objections to AWS are identified: (1) arguments based on technology and the ability of AWS to conform to international humanitarian law; (2) deontological arguments based on the need for human judgment and meaningful human control, including arguments based on human dignity; (3) consequentialist reasons about their effects on global stability and the likelihood of going to war. An account is provided of the claims made about human dignity and AWS, of the criticisms of these claims, and of the several meanings of ‘dignity.’ It is concluded that although there are several ways in which AWS can be said to be against human dignity, they are not unique in this respect.
  • Sharkey, N. (2012).* Killing made easy: From joysticks to politics. In P. Lin, G. Bekey, & K. Abney (Eds.), Robot Ethics: The ethical and social implications of robotics (pp. 111-128). MIT Press. 
    • This chapter provides an overview of novel war technologies, which make killing at a distance easier than ever before. The author argues that the current ethical guidelines the United States government has adopted do not sufficiently address the ethical concerns raised by such technologies. Furthermore, the chapter argues that international ethical guidelines for fully autonomous killer robots are urgently needed. 
  • Sharkey, N. (2009).* Death strikes from the sky: The calculus of proportionality. IEEE Technology and Society Magazine, 28(1), 16-19.
    • The use of unmanned aerial vehicles (UAVs) in the conflict zones of Iraq and Afghanistan for both intelligence gathering and “decapitation” attacks has been heralded as an unprecedented success by U.S. military forces. This article argues that there is a danger of over-trusting and overreaching the technology, particularly with respect to protecting innocents in war zones; there are ethical issues and pitfalls. The article argues that it is time to reassess the meanings of discrimination and proportionality in the deployment of UAVs in 21st century warfare.
  • Silver, D., et al. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354–359. https://doi.org/10.1038/nature24270
    • This paper introduces an algorithm for learning to play Go, called AlphaGo Zero. The method builds on the authors’ previously published AlphaGo program, which was trained by a combination of self-play and supervised learning from expert human gameplay data. The primary distinction is that AlphaGo Zero requires only self-play to learn value and policy model parameters. This reinforcement learning algorithm requires no human gameplay data or strategy to assist model training. AlphaGo Zero, given only the rules of the game, defeated the previously published AlphaGo program 100-0.
  • de Solla Price, D. J. (1964). Automata and the origins of mechanism and mechanistic philosophy. Technology and Culture, 5(1), 9–23. https://doi.org/10.2307/3101119
    • This essay describes the development of mechanistic philosophy and its relationship with automata. The essay discusses whether contemporary development of artificial automata motivated the growth of mechanistic philosophy. The author argues that the technological developments in mechanical devices, scientific theory, and mechanistic philosophy are part of a proposed intellectual tradition concerning automata. The essay describes historical ideas about automata and connects them to the origins of mechanistic philosophy. 
  • Sparrow, R. (2007).* Killer robots. Journal of Applied Philosophy, 24(1), 62-77.
    • This paper considers the ethics of the decision to send artificially intelligent robots into war by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally be described as a war crime, arguing that no current answer to this question is ultimately satisfactory. The paper argues that it is a necessary condition for fighting a just war, under the principle of jus in bello, that someone can be justly held responsible for deaths that occur during the war. As this condition cannot be met in relation to deaths caused by an autonomous weapon system, it would be unethical to deploy such systems in warfare.
  • Szegedy, C., et al. (2013).* Intriguing properties of neural networks. arXiv:1312.6199.
    • This paper reports two counterintuitive properties of deep neural networks. First, the authors find that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis. This suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, the authors find that deep neural networks learn input-output mappings that are discontinuous to a significant extent: imperceptibly small perturbations of an input can change a network’s prediction, a phenomenon now known as adversarial examples.
  • Vamplew, P., et al. (2018). Human-aligned artificial intelligence is a multiobjective problem. Ethics and Information Technology, 20(1), 27-40.
    • This article argues that ethical frameworks for AI which consider multiple potentially conflicting factors can be represented as utility functions, but that the widely used Maximum Expected Utility (MEU) paradigm provides insufficient support for such multi-objective decision-making. The article argues that a multi-objective MEU paradigm based on the combination of vector utilities and non-linear action-selection can overcome many of the issues which limit MEU’s effectiveness in implementing aligned AI. The article examines existing approaches to multi-objective AI and identifies how these can contribute to the development of human-aligned intelligent agents. (A minimal sketch of multi-objective action selection over vector utilities appears after this list.)
  • Watkins, C. J., & Dayan, P. (1992). Q-learning. Machine Learning, 8(3–4), 279–292.
    • This paper introduces a foundational algorithm and core concepts of reinforcement learning. The framework considers an agent that occupies a state and takes actions to transition to other states. The Q-learning algorithm describes how the agent assigns a numerical value to potential actions and learns to take actions that maximize expected reward. Q-learning is called a model-free algorithm because the agent does not require a model of the environment’s dynamics; it only needs to observe states and rewards. (See the minimal sketch after this list.)
  • Yudkowsky, E. (2006).* Artificial intelligence as a positive and negative factor in global risk. In N. Bostrom & M. M. Cirkovic (Eds.), Global catastrophic risks (pp. 308–345). Oxford University Press.
    • This paper argues that the greatest danger of artificial intelligence is that individuals have a false understanding of it. Specifically, the paper argues that our tendency to anthropomorphize AI prevents us from truly understanding it.
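The reinforcement learning entries above (Watkins & Dayan; Mnih et al.; Guo et al.; Silver et al.) build on the same basic idea of learning action values from experience. The following minimal sketch illustrates tabular Q-learning as described in the Watkins and Dayan entry; the toy chain environment and all parameter values are illustrative assumptions, not taken from any cited paper.

```python
# Minimal tabular Q-learning on a toy 5-state chain (illustrative only).
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
N_STATES, ACTIONS = 5, [0, 1]           # actions: 0 = left, 1 = right

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy dynamics: moving right from the last state earns reward 1 and resets."""
    if action == 1 and state == N_STATES - 1:
        return 0, 1.0
    delta = 1 if action == 1 else -1
    return max(0, min(N_STATES - 1, state + delta)), 0.0

for episode in range(500):
    state = 0
    for _ in range(50):
        # Epsilon-greedy choice: the agent only observes the state (model-free).
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        # Core update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy moves right in every state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```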
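The Guo et al. entry describes an agent that estimates an action’s value by running simulations. The sketch below shows the simplest form of that idea, plain Monte-Carlo rollouts under a random policy; the `env` interface (`simulate`, `actions`), rollout depth, and rollout count are assumptions for illustration, not the offline tree-search planner the paper itself develops.

```python
# Monte-Carlo action selection: value each action by averaging random rollouts.
import random

def rollout_value(env, state, action, depth=10, n_rollouts=30):
    """Estimate the value of taking `action` in `state` by simulation."""
    total = 0.0
    for _ in range(n_rollouts):
        s, r = env.simulate(state, action)   # assumed simulator interface
        ret = r
        for _ in range(depth):               # continue with a random policy
            a = random.choice(env.actions(s))
            s, r = env.simulate(s, a)
            ret += r
        total += ret
    return total / n_rollouts

def choose_action(env, state):
    # Pick the action whose simulated returns are highest on average.
    return max(env.actions(state), key=lambda a: rollout_value(env, state, a))
```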
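Finally, the non-linear action selection over vector utilities discussed in the Vamplew et al. entry can be illustrated in a few lines. The actions, objectives, and thresholding rule below are invented for illustration; the point is only that a hard safety floor cannot be expressed as a single weighted-sum (scalar) utility.

```python
# Multi-objective action selection: each action has a (reward, safety) vector.
candidate_actions = {
    "aggressive": (10.0, 0.2),
    "balanced":   (6.0, 0.8),
    "cautious":   (3.0, 0.95),
}

SAFETY_FLOOR = 0.5  # non-linear constraint: never trade safety below this

def select(actions):
    # Filter by the safety threshold first, then maximize reward among the rest.
    safe = {name: u for name, u in actions.items() if u[1] >= SAFETY_FLOOR}
    pool = safe or actions  # fall back to all actions if none clears the floor
    return max(pool, key=lambda name: pool[name][0])

print(select(candidate_actions))  # -> "balanced", although "aggressive" has
                                  #    the highest scalar reward
```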

Chapter 19. Troubleshooting AI and Consent (Meg Leta Jones and Elizabeth Edenberg)

https://www.doi.org/10.1093/oxfordhb/9780190067397.013.23

  • Andreotta, A. J., et al. (2021). AI, big data, and the future of consent. AI & Society, 1–14. https://doi.org/10.1007/s00146-021-01262-5                                            
    • This paper discusses the current issues regarding informed digital consent and proposes the use of ‘soft governance’ on the commercial usage of personal data. In this way, the collection of data by tech companies would be subject to ethical review in the same way that other research, such as in the medical field, is. The authors further suggest the usage of consent forms created following the idea of pictorial legal contracts, which are more informative and accessible than their lengthy and confusing text counterparts.
  • Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. John Wiley & Sons.
    • This book argues that automation, far from being a sinister story of racist programmers scheming on the dark web, has the potential to hide, speed up, and deepen discrimination while appearing neutral and even benevolent when compared to the racism of a previous era. Presenting the concept of the “New Jim Code,” the author examines how a range of discriminatory designs encode inequity by explicitly amplifying racial hierarchies; by ignoring but thereby replicating social divisions; or by aiming to fix racial bias but ultimately doing quite the opposite. The author makes the case for race itself as a kind of technology, designed to stratify and sanctify social injustice in the architecture of everyday life.
  • Bostrom, N. (2014).* Superintelligence: Paths, dangers, strategies. Oxford University Press.  
    • The author argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists, and the outcome could be an existential catastrophe for humans.
  • Brayne, S. (2017). Big data surveillance: The case of policing. American Sociological Review, 82(5), 977-1008. https://doi.org/10.1177/0003122417725865
    • This article examines the intersection of two structural developments: the growth of surveillance and the rise of “big data.” Drawing on observations and interviews conducted within the Los Angeles Police Department, the author offers an empirical account of how the adoption of big data analytics does—and does not—transform police surveillance practices. The author argues that the adoption of big data analytics facilitates both the amplification of prior surveillance practices and fundamental transformations in surveillance activities.
  • Breen, S., et al. (2020). GDPR: Is your consent valid? Business Information Review, 37(1), 19-24. https://doi.org/10.1177/0266382120903254
    • This article explores the philosophical background of consent, examines the circumstances that served as the point of departure for the debate on consent, and attempts to develop an understanding of consent in the context of the growing influence of information systems and the data-driven economy. The authors argue that the General Data Protection Regulation (GDPR) has gone further than any other regulation or law to date in developing an understanding of consent that addresses personal data and privacy concerns.
  • Bridges, K. M. (2017).* The poverty of privacy rights. Stanford University Press.
    • This book argues that poor mothers in America have been deprived of the right to privacy. Presenting a holistic view of how the state intervenes in all facets of poor mothers’ privacy, the author argues that the Constitution has not been interpreted to bestow these women with family, informational, and reproductive privacy rights. The author further argues that until cultural narratives that equate poverty with immorality are disrupted, poor mothers will continue to be denied this right.
  • Broussard, M. (2018).* Artificial unintelligence: How computers misunderstand the world. MIT Press.
    • Making a case against technochauvinism, the belief that technology is always the solution, this book argues that social problems will not inevitably retreat before a digitally enabled Utopia. The author argues that understanding the fundamental limits of technological capabilities will help the public to make better ethical choices concerning its implementation.
  • Browne, S. (2015).* Dark matters: On the surveillance of blackness. Duke University Press.
    • This book argues that contemporary surveillance technologies and practices are informed by the long history of racial formation and by the methods of policing black life under slavery, such as branding, runaway slave notices, and lantern laws. Placing surveillance studies into conversation with the archive of transatlantic slavery and its afterlife, the author draws from black feminist theory, sociology, and cultural studies. The author asserts that surveillance is both a discursive and material practice that reifies boundaries, borders, and bodies around racial lines, so much so that the surveillance of blackness has long been, and continues to be, a social and political norm. 
  • Casonato, C. (2021). AI and constitutionalism: The challenges ahead. In B. Braunschweig & M. Ghallab (Eds.), Reflections on artificial intelligence for humanity (pp. 127-149). Springer. https://doi.org/10.1007/978-3-030-69128-8_9
    • This chapter promotes a human-centered approach to AI through a lens of constitutionalism. The author considers AI decision-making within contexts such as democracy, human rights, big data, and privacy. The author proposes a set of new human rights as a constitutionally based human-centered framework for AI.
  • Cohen, I. G., et al. (2014). The legal and ethical concerns that arise from using complex predictive analytics in health care. Health Affairs, 33(7), 1139–1147.
    • The authors review the legal and ethical challenges associated with implementing predictive analytics in health care and offer suggestions for overcoming those challenges in four distinct phases of a model’s life cycle: acquiring data to build the model (consent and privacy; equitable representation), building and validating the model (patient-centred perspectives; developing standards for validation and transparency; validating model outcomes), testing it in real-world settings (consent, liability, choice architecture), and disseminating and using it more broadly (equitable access; imperfect implementation; the role of the physician).
  • Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press.
    • This book discusses contemporary capitalism and its basis in data colonialism, drawing links between the colonial treatment of land and natural resources, and the current treatment of personal data by corporations. The authors theorize this complex form of data colonialism and turn the conversation to the future, discussing options for resistance.
  • Ferguson, A. G. (2017).* The rise of big data policing: Surveillance, race, and the future of law enforcement. NYU Press.
    • This book provides an overview of new technologies used in policing and argues for increased public awareness of the consequences of big data surveillance as a law enforcement tool. The author argues that technologies may distort constitutional protections but may also improve police accountability and remedy underlying socio-economic risk factors that encourage crime. 
  • Fotopoulou, A. (2020). Conceptualising critical data literacies for civil society organisations: Agency, care, and social responsibility. Information, Communication & Society. https://doi.org/10.1080/1369118X.2020.1716041
    • This article explores data literacy and the debate surrounding its conceptualization, advancing that debate by questioning the usefulness of the concept. The author highlights the necessity of models and frameworks that promote data literacy among the public and civil society organizations.
  • Giannopoulou, A. (2020). Algorithmic systems: The consent is in the detail? Internet Policy Review, 9(1).
    • This article examines the transformation of consent in order to assess how the concept and its applications can be reconciled not only with current data protection frameworks but also with algorithmic processing technologies. The author argues that safeguarding individual control over personal data in the algorithmic era is interlinked with practical implementations of consent in technology usage, with adopted interpretations of the concept of consent, with the scope of application of personal data, and with the obligations these frameworks enshrine.
  • Grigorovich, A., & Kontos, P. (2020). Towards responsible implementation of monitoring technologies in institutional care. The Gerontologist, 60(7), 1194-1201. https://doi.org/10.1093/geront/gnz190
    • This paper discusses the implications of the influx of monitoring technologies in institutional care settings. The positive assumptions about, and sudden push for, the integration of these technologies result in gaps in current knowledge and literature, such as blurred understandings of consent. This review of current scholarship on monitoring technologies notes weak evidence of actual improvements and indications of unforeseen risks. The authors call for a more rigorous understanding of these technologies, with evidence of their risks and benefits in the medical setting.
  • Hibbin, R. A., et al. (2018). From “a fair game” to “a form of covert research”: Research ethics committee members’ differing notions of consent and potential risk to participants within social media research. Journal of Empirical Research on Human Research Ethics, 13(2), 149-159. https://doi.org/10.1177/1556264617751510
    • This document looks at research ethics committees’ (REC) approaches to social media in terms of balancing ethical principles and public availability. Focusing on REC members from the United Kingdom, the authors investigate the challenges surrounding risk and consent that social media poses. The authors conclude that these challenges are actively considered by REC members and that their approaches to social media vary based on level of experience. 
  • Human, S., & Cech, F. (2020). A human-centric perspective on digital consenting: The case of GAFAM. In Human Centred Intelligent Systems (pp. 139–159). Springer Singapore. https://doi.org/10.1007/978-981-15-5784-2_12
    • This paper uses the cognitive science idea of enactivism to develop a human-centred framework for digital consent. The framework considers how consent has three dimensions — cognitive, collective, and contextual — and how all three of these dimensions must be taken into consideration by data-collection systems in order to properly obtain user consent. The authors apply this framework to evaluate the consent practices of Google, Amazon, Facebook, Apple, and Microsoft (GAFAM), finding that they do not meet its standards.
  • Jesus, V. (2020). Towards an accountable web of personal information: The web-of-receipts. Institute of Electrical and Electronics Engineers Access, 8, 25383-25394.
    • This paper reviews the current state of consent and ties it to a problem of accountability. The author argues for a different approach to how the Web of Personal Information operates: the need for an accountable Web in the form of Personal Data Receipts which are able to protect both individuals and organisations.
  • Kaissis, G. A., et al. (2020). Secure, privacy-preserving and federated machine learning in medical imaging. Nature Machine Intelligence, 2(6), 305–311. https://doi.org/10.1038/s42256-020-0186-1
    • This article explores the relationship between AI and medical imaging, the potential for algorithm training in this field, and the obstacles of accessibility and patient privacy. The authors advocate for secure and privacy-preserving AI as a way to balance the protection of patient privacy with the revolutionizing possibilities of AI in medical imaging and clinical routine. A minimal sketch of the federated learning idea follows this entry.
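The following is a minimal, illustrative sketch of federated averaging, one privacy-preserving technique of the kind the authors survey: several sites jointly fit a shared model by exchanging only model weights, never raw patient data. The synthetic data, site count, and learning rates are our assumptions, not taken from the paper.

```python
import numpy as np

# Federated averaging sketch: three hypothetical sites jointly fit a linear
# model without pooling their raw data (all data here is synthetic).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def local_data(n):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [local_data(100) for _ in range(3)]
w = np.zeros(2)  # global model weights held by the server

for _ in range(50):  # communication rounds
    local_ws = []
    for X, y in clients:
        w_local = w.copy()
        for _ in range(5):  # a few local gradient steps per round
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.05 * grad
        local_ws.append(w_local)
    w = np.mean(local_ws, axis=0)  # server averages the site models

print(w)  # approaches true_w although no site ever shared its raw data
```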
  • Kim, N. S. (2019).* Consentability: Consent and its limits. Cambridge University Press.
    • This book analyzes the meaning of consent, introduces a consentability framework, and suggests ways to improve the conditions of consent and reduce opportunism. The author considers activities in three categories: first, self-directed activities; second, activities that bear on a person’s bodily integrity; and third, novel procedures or cutting-edge experiments, asking whether people should be allowed to consent to something that has never been done before and about which there is little information on potential consequences.
  • Kotsenas, A. L., et al. (2021). Rethinking patient consent in the era of artificial intelligence and big data. Journal of the American College of Radiology, 18(1), 180–184. https://doi.org/10.1016/j.jacr.2020.09.022
    • This paper discusses concerns about data collection in healthcare settings and how current data consent forms do not make it clear that patient data may be used for secondary purposes, such as developing AI systems. This is particularly concerning as some of these data, like radiologic images, are difficult to fully anonymize and so can be used to identify patients. The authors propose a new method of patient data collection which includes educating patients on the full implications of their consent and providing different tiers of data collection to which patients can consent. A toy sketch of such tiered consent follows this entry.
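As a concrete (and entirely hypothetical) illustration of the tiered-consent idea, the sketch below encodes consent tiers as an ordered enumeration and checks data-use requests against the tier a patient selected. The tier names and ordering are ours, not the authors’ proposal.

```python
from enum import IntEnum

class ConsentTier(IntEnum):
    # Ordered from most to least restrictive (illustrative names only).
    CLINICAL_CARE_ONLY = 0   # data used only for the patient's own care
    INTERNAL_RESEARCH = 1    # de-identified use within the institution
    EXTERNAL_RESEARCH = 2    # sharing with outside academic researchers
    COMMERCIAL_AI = 3        # use in commercial AI development

def is_permitted(patient_tier: ConsentTier, requested_use: ConsentTier) -> bool:
    """A use is allowed only if the patient consented at or above that tier."""
    return patient_tier >= requested_use

patient = ConsentTier.INTERNAL_RESEARCH
print(is_permitted(patient, ConsentTier.INTERNAL_RESEARCH))  # True
print(is_permitted(patient, ConsentTier.COMMERCIAL_AI))      # False
```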
  • Miller, F. G., & Wertheimer, A. (2010).* The ethics of consent: Theory and practice. Oxford University Press.
    • This book assembles the contributions of a distinguished group of scholars concerning the ethics of consent in theory and practice. Part One addresses theoretical perspectives on the nature and moral force of consent, and its relationship to key ethical concepts such as autonomy and paternalism. Part Two examines consent in a broad range of contexts, including sexual relations, contracts, selling organs, political legitimacy, medicine, and research.
  • Müller, A., & Schaber, P. (2018).* The Routledge handbook of the ethics of consent. Routledge.
    • This handbook is divided into five main parts: general questions, normative ethics, legal theory, medical ethics, and political philosophy. The authors examine debates and problems in these fields including: the nature and normative importance of consent, paternalism, exploitation and coercion, privacy, sexual consent, consent and criminal law, informed consent, organ donation, clinical research, and consent theory of political obligation and authority.
  • Norval, C., & Henderson, T. (2019). Automating dynamic consent decisions for the processing of social media data in health research. Journal of Empirical Research on Human Research Ethics. https://doi.org/10.1177/1556264619883715
    • This article presents an exploratory user study (n = 67) in which the authors find that they can predict the appropriate flow of health-related social media data with reasonable accuracy, while minimizing undesired data leaks. The authors then deconstruct the findings of this study, identifying and discussing a number of real-world implications if such a technique were put into practice. A toy sketch of such a consent-prediction loop follows this entry.
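To make the idea concrete, here is a toy consent-prediction loop built on assumptions of our own: a classifier is trained on a user’s past sharing decisions, auto-decides only at high confidence, and otherwise falls back to asking the user, which is one simple way to keep undesired disclosures rare. The features, labels, and threshold are invented for illustration and do not reproduce the study’s method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical post features: [mentions_health, is_public, has_location]
X_past = rng.integers(0, 2, size=(60, 3))
y_past = (X_past[:, 0] == 0).astype(int)  # simulated user: never shares health posts

model = LogisticRegression().fit(X_past, y_past)

def decide(post_features, threshold=0.9):
    """Auto-share or auto-withhold only at high confidence; else ask the user."""
    p_consent = model.predict_proba([post_features])[0, 1]
    if p_consent >= threshold:
        return "share"
    if p_consent <= 1 - threshold:
        return "withhold"
    return "ask user"  # dynamic consent: fall back to an explicit prompt

print(decide([1, 0, 1]))  # a health-related post is withheld or escalated
```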
  • O’Connor, Y., et al. (2021). Implementing electronic consent aimed at people living with dementia and their caregivers: Did we forget those who forget? Proceedings of the 54th Hawaii International Conference on System Sciences, 3893-3902. http://hdl.handle.net/10125/71088
    • The authors question the universal applicability of informed electronic consent (eConsent) by investigating the use of eConsent in the context of people living with dementia and their caregivers. Combining both political and technological perspectives, this study conducts a market review of mobile health applications. The authors note that the requirements for eConsent do not properly determine the capacity of the individual to understand the information presented to them and give informed consent, and they argue that these issues are exacerbated for people with dementia. Overall, their critiques of eConsent in the context of people living with dementia can be applied to eConsent as a whole, and serve as a starting point for its future improvement. 
  • Pagallo, U. (2020). On the principle of privacy by design and its limits: Technology, ethics, and the rule of law. In S. Chiodo & V. Schiaffonati (Eds.), Italian Philosophy of Technology: Socio-Cultural, Legal, Scientific and Aesthetic Perspectives on Technology (pp. 111-127). Springer. https://doi.org/10.1007/978-3-030-54522-2_8
    • This chapter critically examines the principle of privacy by design. The author looks at technological limits as well as ethical and legal considerations of the current debate surrounding privacy by design. In locating three distinct limits, the author proposes a more ethically sound version of privacy by design.
  • Papadimitriou, S., et al. (2019). Smart educational games and consent under the scope of General Data Protection Regulation. In 2019 10th International Conference on Information, Intelligence, Systems and Applications (IISA) (pp. 1-8). Institute of Electrical and Electronics Engineers.
    • This article focuses on the General Data Protection Regulation’s principle of consent to personal data processing and seeks a balance between gaming amusement, educational benefits, and regulatory compliance. The authors combine legal theory and computer science in order to propose applicable solutions in the form of guidelines for gaming stakeholders in general and educational gaming stakeholders in particular.
  • Pasquale, F. (2018).* The black box society. Harvard University Press.
    • This book exposes how powerful interests abuse secrecy for profit and explains ways to rein them in. The author argues that demanding transparency is only the first step toward individuals having control over how big data affects their lives, and that an intelligible society would assure that the key decisions of its most important firms are fair, non-discriminatory, and open to criticism.
  • Price, W. N., & Cohen, I. G. (2019). Privacy in the age of medical big data. Nature Medicine, 25(1), 37–43. https://doi.org/10.1038/s41591-018-0272-7
    • This paper discusses the legal and ethical issues of big data in the medical field, specifically in terms of patient privacy. The authors outline the limits of current policy such as the US federal Health Insurance Portability and Accountability Act (HIPAA) and its Privacy Rule. The authors argue that going forward, a balance must be struck to avoid excessive under- or over-protection of privacy.
  • Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20(1), 5-14. https://doi.org/10.1007/s10676-017-9430-8
    • This paper proposes a framework, an algorithmic social contract, with which AI can be regulated. In this “society-in-the-loop” paradigm, the author combines the concepts of human-in-the-loop and social contract theory to envision a governing paradigm for the relationship between humans and algorithmic systems.
  • Rule, J. B. (2007).* Privacy in peril: How we are sacrificing a fundamental right in exchange for security and convenience. Oxford University Press.
    • This book examines how personal data made available to virtually any organization for virtually any purpose is apt to surface elsewhere, applied to utterly different purposes. The author argues that as long as individuals willingly accept the pursuit of profit or cutting government costs as sufficient reason for intensified scrutiny over their lives, then privacy will remain endangered.
  • Sawchuk, K. (2019). Private parts: Aging, AI, and the ethics of consent in subscription-based economies. Innovation in Aging, 3(1). https://doi.org/10.1093/geroni/igz038.082
    • This paper explores artificial intelligence (AI) as a technological design offered to assist elder care, based on tracking individual behavioral data amassed in databases and given predictive value through algorithm-identified normative patterns. Drawing examples from ethnographic research conducted at the 2019 Consumer Electronics Show, the author focuses on the ethical dilemmas of privacy, security, consent, and identity in home surveillance systems and the financialization of personal data in AI subscription-based services. The author argues that the subscription-based economy exploits older individuals by sharing their lifestyle profiles, health information, economic status, and consumer preferences within powerful corporate networks such as Google and Amazon.
  • Thorstensen, E. (2018, July). Privacy and future consent in smart homes as assisted living technologies. In International Conference on Human Aspects of IT for the Aged Population (pp. 415-433). Springer.
    • With the advent of the General Data Protection Regulation (GDPR), there are clear regulations demanding consent to automated decision-making regarding health. This article opens up some of the possible dilemmas at the intersection of the smart home ambition and the GDPR, with specific attention to possible trade-offs between privacy and well-being, explored through a future case of a smart home with health detection systems, and presents different approaches to advancing consent.
  • Varon, J., & Peña, P. (2021). Artificial intelligence and consent: A feminist anti-colonial critique. Internet Policy Review, 10(4). https://doi.org/10.14763/2021.4.1602
    • This paper uses feminist and anti-colonial theories to discuss the power dynamics involved in data consent, and how these power dynamics have allowed for the perpetuation of colonialism in technology. The authors critique the ‘all or nothing’ form of consent that is prevalent in current data collection systems and they propose collective action not only to change the power dynamics that currently shape these technologies, but to question whether they should be built in the first place.
  • Yeung, K. (2017). ‘Hypernudge’: Big Data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136.
    • Informed by a liberal rights-based approach, and perspectives from science and technology studies, the author argues that big data and predictive analytics should be approached as a mode of ‘design-based regulation’. While data are often collected surreptitiously, and automated decisions often happen without individuals’ knowledge, ‘notice and consent’ models remain unsatisfactory solutions to these challenges. In particular, the author argues that they cannot offer individuals meaningful consent to data sharing or data processing, that the volume of detail needed to meet the threshold of consent would be overwhelming to individuals, and that data practices themselves are “volatile and indeterminate”. The author suggests that we must move beyond liberal notions of consent to recognize big data’s regulatory power – a ‘soft’ form of control that erodes our capacity for democratic self-government.
  • Ytre-Arne, B., & Das, R. (2019). An agenda in the interest of audiences: Facing the challenges of intrusive media technologies. Television & New Media, 20(2), 184-198.
    • This article formulates a five-point agenda for audience research, drawing on the implications of a systematic foresight analysis exercise conducted between 2014 and 2017 by the research network Consortium on Emerging Directions in Audience Research (CEDAR). The agenda comprises substantive and intellectual priorities concerning intrusive technologies, critical data literacies, labour, co-option, and resistance, and argues for the need for research on these matters in the interest of audiences.

Chapter 20. Is Human Judgment Necessary? Artificial Intelligence, Algorithmic Governance, and the Law (Norman W. Spaulding)

https://www.doi.org/10.1093/oxfordhb/9780190067397.013.25

  • Araujo, T., et al. (2020). In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & Society, 35(3), 611-623.
    • This paper presents an analysis of a nationwide survey evaluating perceptions of AI and human judgement across a variety of scenarios. The authors find that, while there is broad consensus of concern about the risks of AI decision makers, people’s perceptions of the fairness and usefulness of algorithmic judgement are not uniform and, in fact, vary by scenario. For some scenarios, respondents even prefer automated decisions to human judgement.
  • Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1-13. https://doi.org/10.1080/1369118X.2016.1216147
    • This article aims to discuss algorithms from a social science perspective. First, the author analyzes the issue of social power as it relates to algorithms. Second, they focus on how the notion of an algorithm is conceived in order to enable researchers to better understand how algorithms play a role in social ordering processes. 
  • Binns, R. (2020). Human judgment in algorithmic loops: Individual justice and automated decision-making. Regulation & Governance, 16(1). https://doi.org/10.1111/rego.12358      
    • This article argues that individual justice can only be meaningfully served through human judgement rather than artificial intelligence. Binns contends that individual justice should be distinguished from other forms of justice. Additionally, the author points to two main challenges that result from algorithmic judgements: first, that individual justice will often conflict with algorithm-driven consistency and fairness, and second, that algorithmic systems are incapable of respecting individual justice. 
  • Danaher, J. (2019). The rise of the robots and the crisis of moral patiency. AI & Society, 34(1), 129–136. https://doi.org/10.1007/s00146-017-0773-9    
    • This paper asserts that the rise of robots and artificial intelligence is likely to create a crisis of moral patiency, making humans less willing and able to act in the world as moral agents. The consequences of this have dangerous implications for politics and the social world.  
  • Diakopoulos, N. (2015). Algorithmic accountability. Digital Journalism, 3(3), 398-415. https://doi.org/10.1080/21670811.2014.976411      
    • This article examines algorithmic accountability reporting as a mechanism that has the potential to amplify power structures and biases that computational artifacts perpetuate in society. It uses five cases of algorithmic accountability performance using journalistic reverse engineering strategies to provide insight into method and application in the field of journalism. It also assesses transparency models on a broader scale.
  • Epstein, R., et al. (Eds.). (2008).* Parsing the Turing test: Philosophical and methodological issues in the quest for the thinking computer. Springer.
    • This edited volume features psychologists, computer scientists, philosophers, and programmers who examine the philosophical and methodological issues surrounding the search for true artificial intelligence. Questions authors explore include “Will computers and robots ever think and communicate the way humans do?” and “When a computer crosses the threshold into self-consciousness, will it immediately jump into the Internet and create a World Mind?”
  • Finn, E. (2017).* What algorithms want: Imagination in the age of computing. The MIT Press.
    • This book explores how the algorithm has roots in mathematical logic, cybernetics, philosophy, and magical thinking. Finn argues that algorithms take concepts from idealized computation and apply them to a non-ideal reality, yielding unpredictable responses. To address the gap between abstraction and reality, Finn advocates for the creation of a model of “algorithmic reading” and scholarship which considers process.
  • Grgic-Hlaca, N., et al. (2018). Human perceptions of fairness in algorithmic decision making. In P.-A. Champion, F. Gandon, & L. Medini (Eds.), Proceedings of the 2018 World Wide Web Conference (pp. 903-912). International World Wide Web Conferences Steering Committee. https://doi.org/10.1145/3178876.3186138
    • This research article examines AI and the concept of distributive fairness (the fairness of decision outcomes). The authors propose methods for procedural fairness that consider the input features used in the decision process and evaluate the moral judgments of humans regarding the use of these features. The authors use two real-world datasets of human survey responses collected on the Amazon Mechanical Turk (AMT) platform and apply submodular optimization to manage the trade-off between procedural fairness and prediction accuracy, finding that procedural fairness may be achieved with little cost to outcome fairness. A toy greedy analogue of this fairness–accuracy trade-off follows this entry.
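The sketch below is a toy greedy analogue of this trade-off under invented numbers: each candidate feature carries a hypothetical accuracy gain and a hypothetical fairness score (standing in for survey judgments of how fair it is to use that feature), and features are added greedily while the selected set stays fair on average. The paper itself formalizes this with submodular optimization; nothing here reproduces its actual method or data.

```python
features = {
    # name: (hypothetical accuracy gain, hypothetical fairness score in [0, 1])
    "prior_convictions": (0.20, 0.9),
    "zip_code":          (0.12, 0.2),
    "age":               (0.10, 0.6),
    "employment":        (0.08, 0.8),
}

def greedy_select(features, min_avg_fairness=0.7):
    """Greedily add features by accuracy gain while keeping the average
    fairness score of the selected set above a threshold."""
    chosen = []
    for name, (gain, fairness) in sorted(
        features.items(), key=lambda kv: kv[1][0], reverse=True
    ):
        candidate = chosen + [name]
        avg = sum(features[f][1] for f in candidate) / len(candidate)
        if avg >= min_avg_fairness:
            chosen = candidate
    return chosen

print(greedy_select(features))
# ['prior_convictions', 'age', 'employment'] -- zip_code is judged too unfair
```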
  • Gunkel, D. (2012).* The machine question: Critical perspectives on AI, robots, and ethics. The MIT Press.
    • The author examines the “machine question” in moral philosophy, which aims to determine whether, and to what degree, human-made intelligent and autonomous machines can have moral responsibilities and moral consideration. Traditional philosophical notions are challenged by the machine question, as they posit technology as a tool for human uses rather than moral agents.
  • Gunkel, D. (2014). A vindication of the rights of machines. Philosophy & Technology, 27(1), 113–132. https://doi.org/10.1007/s13347-013-0121-z      
    • This article argues that artificial intelligences cannot be excluded from moral consideration, which calls not only for an extension of rights to machines, but an examination into the configuration of moral standing.
  • Haraway, D. J. (1991).* A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In D. J. Haraway (Ed.), Simians, cyborgs and women: The reinvention of nature. Routledge.      
    • This essay gives a post-structuralist account of the term “cyborg” as a concept that resists strict categorization, not simply a distinction of “human” from “machine” or “human” from “animal,” but a combination of these concepts.
  • Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577–586. https://doi.org/10.1016/j.bushor.2018.03.007    
    • This article argues that rather than having the goal of replacing humans with AI, developers of the technology should work toward complementing the independent strengths of both humans and robots. The author holds that the holistic and intuitive nature of humans in organizational decision-making should be maintained, while computational processing capacities are expanded with the use of AI. 
  • Jakesch, M., et al. (2019). AI-mediated communication: How the perception that profile text was written by AI affects trustworthiness. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-13).
    • This work considers how users’ trust in a piece of online text content is affected by their perception of whether the text was generated by an AI. In an experiment where participants evaluated Airbnb host profiles, the authors found that users distrust AI-written profiles only when they are shown alongside genuine (human-written) profiles. They call this the replicant effect and explore its further implications for AI-mediated communication.
  • Kitchin, R. (2017).* Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14-29. https://doi.org/10.1080/1369118X.2016.1154087
    • This paper synthesizes current literature on algorithms and develops new arguments about their study. This includes the need to focus critical attention on algorithms in light of their increased role in society, how best to understand algorithms conceptually, challenges for researching algorithms, and the differing ways algorithms can be empirically studied.
  • Kraemer, F., et al. (2010).* Is there an ethics of algorithms? Ethics and Information Technology, 13(3), 251-260. https://doi.org/10.1007/s10676-010-9233-7
    • The authors argue that algorithms can be value-laden, meaning that designers may have justified reasons for designing algorithms differently. To illustrate this claim, the authors use the example of algorithms used in medical analysis, which can be designed differently depending on the priorities of the software designers, such as avoiding false negatives. They go on to contribute guidelines for ethical issues in algorithm design.
  • Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1). https://doi.org/10.1177/2053951718756684      
    • This article outlines an online experiment exploring perceptions of algorithmic management using managerial decisions, which required mechanical or human skills to measure perceived fairness, trust, and emotional response. The author finds that with mechanical tasks, algorithmic and human-made decisions were perceived as equally fair and trustworthy; however, human managers’ fairness and trustworthiness were attributed to the manager’s authority, whereas algorithms’ fairness and trustworthiness were attributed to their perceived efficiency and objectivity. With human tasks, algorithmic decisions were perceived as less fair and trustworthy and evoked more negative emotional responses. These findings suggest that task characteristics matter in people’s experiences with these technologies.
  • Lepri, B., et al. (2017). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31(4), 611–627. https://doi.org/10.1007/s13347-017-0279-x  
    • This article argues that while there are some potential benefits to algorithmic decision-making, the potential of increased discrimination and opacity raises concerns, especially when addressing complex social problems. The authors propose various technical solutions designed to improve fairness and transparency in algorithmic decision-making, highlighting the Open Algorithms (OPAL) project as an example of advanced AI supporting the advancement of democracy and development.
  • Lumbreras, S. (2017). The limits of machine ethics. Religions, 8(5). https://doi.org/10.3390/rel8050100        
    • The author provides a framework to classify the methodology employed in the field of machine ethics. The limits of machine ethics are discussed in light of design techniques that only express values imported by the programmer.
  • Lustig, C., & Nardi, B. (2015). Algorithmic authority: The case of Bitcoin. In 2015 48th Hawaii International Conference on System Sciences (pp. 743-752). Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/hicss.2015.95
    • The authors propose a new concept for understanding the role of algorithms in daily life: algorithmic authority. Algorithmic authority is the power of algorithms to direct human action and to impact which information is considered true. The authors apply their theory to the culture of Bitcoin users, assessing their trust in the algorithm. They found that Bitcoin users prefer algorithmic authority to conventional institutions, which they see as untrustworthy, while acknowledging the need for algorithmic authority to be mediated by human judgment.
  • Malle, B. F. (2016). Integrating robot ethics and machine morality: The study and design of moral competence in robots. Ethics and Information Technology, 18(4), 243-256. https://doi.org/10.1007/s10676-015-9367-8    
    • The author discusses the overlap between robot ethics (how humans should design and treat robots) and machine morality (how robots can have morality), arguing that robots can be designed with human moral characteristics. They suggest that morally competent robots can effectively contribute to society in the same way humans can.
  • Mittelstadt, B. D., et al. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679
    • Gaps between the design and the actual functioning of algorithms can have serious consequences for individuals and societies. This article provides an outline of the debate on the ethics of algorithms and evaluates the current literature to identify topics that need further consideration.
  • Moor, J. H. (Ed.). (2003).* The Turing test: The elusive standard of artificial intelligence. Springer.    
    • This book discusses the influence of Alan Turing, including “Computing Machinery and Intelligence,” his pre-eminent article on the philosophy of artificial intelligence, which included a presentation of his famous imitation game. Turing predicted that by the year 2000, the average interrogator would not have a greater than 70% chance of making the correct identification in the imitation game. Using the results of the Loebner 2000 contest, as well as breakthroughs in the field of AI, the author argues that although there has been much progress, Turing’s prediction has not been borne out.
  • Newell, S., & Marabelli, M. (2015). Strategic opportunities (and challenges) of algorithmic decision-making: A call for action on the long-term societal effects of ‘datification.’ The Journal of Strategic Information Systems, 24(1), 3–14. https://doi.org/10.1016/j.jsis.2015.02.001  
    • This article draws attention to the tension between businesses—which increasingly profile customers and personalize products and services—and individuals, who are often unaware of how the data they produce are being used, by whom they are being used, and with what consequences. The authors highlight how issues associated with privacy, control, and dependence arise and suggest that the social and ethical concerns related to the strategic exploitation of digitized technologies by businesses should be more thoughtfully discussed. 
  • Raghu, M., et al. (2019). The algorithmic automation problem: Prediction, triage, and human effort. arXiv:1903.12220
    • This article argues that automation goes beyond comparison of human and algorithmic performance of tasks; it also involves the decision of which instances of tasks should be assigned to an algorithm in the first place. The authors develop a general framework as an optimization problem to show how basic heuristics can lead to performance gains, while also showing how effective automation depends on estimating both algorithmic and human error on a case-by-case basis. A toy sketch of such error-based triage follows this entry.
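As a toy illustration of the triage idea, the sketch below assumes we already have per-case estimates of algorithmic and human error (both invented here) and routes to a limited pool of human reviewers only the cases where a human is expected to improve most on the algorithm. This is a simple heuristic in the spirit of the paper, not its actual framework.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cases, human_budget = 10, 3
model_err = rng.uniform(0.0, 0.5, size=n_cases)  # estimated per-case model error
human_err = rng.uniform(0.0, 0.5, size=n_cases)  # estimated per-case human error

gain = model_err - human_err          # expected benefit of a human review
ranked = np.argsort(gain)[::-1]       # cases ordered by that benefit
to_human = [i for i in ranked[:human_budget] if gain[i] > 0]

expected_error = model_err.sum() - gain[to_human].sum()
print("cases triaged to humans:", sorted(to_human))
print("expected total error:", round(float(expected_error), 3))
```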
  • Selbst, A. D., & Barocas, S. (2018). The intuitive appeal of explainable machines. Fordham Law Review, 87(3), 1085-1139. https://ir.lawnet.fordham.edu/flr/vol87/iss3/11/
    • This article seeks to distinguish machine learning from other forms of decision-making. The authors argue that machine learning models can be both inscrutable and non-intuitive, and that these are related but distinct properties. Addressing non-intuitiveness requires providing a satisfying explanation for why the rules are what they are, and the article accordingly argues for other mechanisms for the normative evaluation of machine learning. The authors conclude that to understand why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.
  • Shank, D. B., et al. (2019). When are artificial intelligence versus human agents faulted for wrongdoing? Moral attributions after individual and joint decisions. Information, Communication & Society, 22(5), 648-663.
    • This work explores how people differently attribute moral wrongdoing to human and algorithmic decision makers. The authors conduct a survey where participants respond to scenario descriptions where a human, AI, or joint human-AI decision maker makes an error. Their analysis focuses on how respondents’ perception of wrongdoing varies in each case. While they find little variation between cases in the perception of AI wrongdoing, human decision makers were judged less harshly when they were working jointly with AI systems (either taking recommendations from an AI or overseeing an AI decision maker). This suggests the dangerous possibility of AI scapegoating. The use of AI tools in human decision-making may lessen human accountability.
  • Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, 102551.
    • This paper explores the relationship between causability, explainability, and human trust in algorithmic decision makers. A primary criticism of AI decision making is that it is opaque and therefore difficult to understand, trust, and monitor. The field of explainable AI aims to solve this issue by making the chain of reasoning leading up to a given decision more visible. The author examines users’ experiences of explainability and causability through a survey and finds that, while explainability is closely linked to trust, causability is connected to users’ emotional confidence.
  • Trausan-Matu, S. (2017). Is it possible to grow an I–Thou relation with an artificial agent? A dialogistic perspective. AI & Society, 34(1), 9-17. https://doi.org/10.1007/s00146-017-0696-5      
    • This paper aims to analyze the question of whether it is possible to develop an I-Thou relationship with an artificial conversational agent, discussing possibilities and limitations. Novel perspectives from various disciplines are discussed.
  • Van de Voort, M., et al. (2015).* Refining the ethics of computer-made decisions: A classification of moral mediation by ubiquitous machines. Ethics Information Technology, 17(1), 41–56. https://doi.org/10.1007/s10676-015-9360-2                              
    • This article investigates computer-made ethical decisions and argues that machines have morality not only when they mediate the actions of humans, but also when they mediate morality itself via decisions within their relationships to human actors. The authors accordingly define four types of moral relations.
  • van Wynsberghe, A., & Robbins, S. (2019). Critiquing the reasons for making artificial moral agents. Science and Engineering Ethics, 25(3), 719–735. https://doi.org/10.1007/s11948-018-0030-8      
    • This article offers a deeper look into the reasons given for developing artificial moral agents (AMAs), arguing that machine ethicists must provide good reasons to build such entities. Until such work is complete, the authors contend, development of AMAs should not continue.
  • Wallach, W., & Allen, C. (2009).* Moral machines: Teaching robots right from wrong. Oxford University Press.            
    • The authors argue that machines do not use explicit moral reasoning in their decision-making, and thus there is a need to create embedded morality as these machines continue to make important decisions. This new field of machine morality or machine ethics will be crucial for designers.
  • Winograd, T. (1990).* Thinking machines: Can there be? Are we? In D. Partridge & Y. Wilks (Eds.), The foundations of artificial intelligence: A sourcebook (pp. 167-189). Cambridge University Press.          
    • The author explores a view attributed to futurologists, who believe that a new species of thinking machines, machina sapiens, will emerge and become dominant by applying their extreme intelligence to human problems. A critique of this view is that computers cannot possibly accurately replicate human intelligence because their cold logical programming deprives them of vital features such as creativity, judgement, and genuine intentionality. The author argues that although it is true that artificial intelligence has yet to achieve things such as creativity and judgement, it has far more basic shortcomings in this vein, as current machines are unable to display common sense, or basic conversational language skills.   
  • Zarsky, T. (2015). The trouble with algorithmic decisions. Science, Technology, & Human Values, 41(1), 118–132. https://doi.org/10.1177/0162243915605575      
    • This article seeks to outline policy-making concerns that have arisen due to the rise in the use of algorithmic decision-making tools. The author provides policy makers and scholars with a comprehensive framework for approaching these issues, calling for an analytical framework that reduces the discussion to two dimensions: (1) the specific and novel problems the process assumedly generates, and (2) the specific attributes which exacerbate them. The problems fall into two broad categories: efficiency-based and fairness-based concerns. The author contends that such problems are usually linked to two salient attributes of algorithmic processes: their opaque and automated nature.
  • Ziewitz, M. (2016). Governing algorithms: Myth, mess, and methods. Science, Technology, & Human Values, 41(1), 3–16. https://doi.org/10.1177%2F0162243915608948                  
    • This article aims to provide critical background on the issue of algorithms being viewed as both extremely powerful and difficult to understand. It considers the algorithm not only as a computational object but also as a sensitizing concept, and challenges assumptions about agency, transparency, and normativity.

Chapter 21. Sexuality (John Danaher)

https://www.doi.org/10.1093/oxfordhb/9780190067397.013.26

  • Bloom, P. (2020). Identity, institutions and governance in an AI world: Transhuman relations. Springer.
    • This book explores transhuman relations, and the potential for radical change to identity, institutions, and governance created by interactions with AI. The author proposes that the future of transhuman relations will emphasize infusing AI programming with values of social justice. They theorize that transhuman relations will be marked with a concern for protecting the rights and views of all forms of “consciousness,” thus creating the structures and practices necessary for encouraging a culture of “mutual intelligent design.”
  • Carvalho Nascimento, E., et al. (2018). The “use” of sex robots: A bioethical issue. Asian Bioethics Review, 10(3), 231–240. https://doi.org/10.1007/s41649-018-0061-0
    • This article presents the current state of the use of female sex robots, reviewing the emerging themes in bioethics discourse on the topic, including sexuality and its deviations, the objectification of women, the relational problems of contemporary life, loneliness, and the reproductive future of the human species. The authors also discuss problems that arise from the use of sex robots and how bioethics could serve as a medium for thinking about and resolving these challenges.
  • Carvalho Nascimento, E., et al. (2021). In love with machines: The bioethical debate about sexual automation. Revista de Bioética y Derecho, 181-202.
    • This article explores the use of sex robots from a bioethical perspective and presents questions raised by those for and against their use. The authors argue that the use of sex robots mechanizes intimate relationships and reveals issues regarding gender, inequality, and health. Furthermore, sex robots become problematic for bioethical analysis when the objectification of female bodies encourages disrespectful rhetoric through the reinforcement of different types of prejudice and violence towards women. Ultimately, the question of whether robot sex should be permissible remains inconclusive in the sources discussed.
  • Danaher, J., & McArthur, N. (Eds.). (2017).* Robot sex: Social and ethical implications. MIT Press.
    • This edited volume gathers perspectives from ethics and sociology on the emerging issue of sex with robots. Contributions to the volume define what robot sex is, explore ways in which it can be defended or challenged on ethical grounds, take the perspective of the robot in considering the matter, and reflect on the possibility of robot love. Finally, some contributors articulate visions for the future of robot sex, underlining the importance of evaluating love and intimacy in robot encounters (as opposed to just sex) and emphasizing the impact robot sex will have on society. 
  • Danaher, J., et al. (2018).* The quantified relationship. The American Journal of Bioethics, 18(2), 3–19.
    • This article provides a detailed ethical analysis of the Quantified Relationship (QR). The Quantified Self movement pursues self-improvement through the tracking and gamification of personal data; the QR applies this approach to interpersonal, romantic relationships. This article identifies eight core objections to the QR and counters them by arguing that there are ways in which tracking technologies can be used to support and facilitate good relationships.
  • de Fren, A. (2009). Technofetishism and the uncanny desires of A.S.F.R. (Alt Sex Fetish Robots). Science Fiction Studies, 36(3), 404–440.
    • This article presents a feminist, art-historical analysis of virtual communities that fetishize artificial women. Central to this fetish is the pleasure of ‘hacking’ the system or denaturalizing common understandings of subjecthood and desire. By drawing analogies between the uncanny artificial bodies at the heart of “alt sex fetish robots,” fantasies, and various historical and artistic antecedents, the author contributes to the critical understanding of mechanical bodies as objects of desire.
  • Devlin, K. (2018).* Turned on: Science, sex and robots. Bloomsbury Publishing.
    • This popular non-fiction book traces the emerging technology of sex robots from robots in Greek myth and the fantastical automata of the Middle Ages through to the sentient machines of the future that inhabit the prominent AI debate. The author compares the ‘modern’ robot to the robot servants in twentieth-century science fiction and offers a historical perspective on the psychological effects of the technology as well as the issues it raises around gender politics, diversity, surveillance, and violence.
  • Döring, N., et al. (2020). Design, use, and effects of sex dolls and sex robots: Scoping review. Journal of Medical Internet Research, 22(7), e18551.
    • This literature review investigates the uses of human-like, full-body sex dolls and sex robots, considering the risks as well as opportunities for sexual and social well-being. The authors find that the majority of currently available publications are theoretical papers and that no observational or experimental research exists that uses actual sex dolls or sex robots as stimulus material.
  • Draude, C. (2011). Intermediaries: Reflections on virtual humans, gender, and the uncanny valley. AI & Society, 26, 319–327.
    • This article provides an analysis of the uncanny valley effect from a cultural and gender studies perspective. The uncanny valley effect describes the eeriness and lack of believability of anthropomorphic artefacts that resemble the ‘real’ thing too strongly. The author offers a gender-critical reading of computer theory by analyzing a classic story of user and artifact (E.T.A. Hoffman’s narration of Olimpia), ultimately arguing for more diverse artefact production.
  • Evans, D. (2010). Wanting the impossible: The dilemma at the heart of intimate human-robot relationships. In Y. Wilks (Ed.), Close engagements with artificial companions: Key social, psychological, ethical and design issues (pp. 75–88). John Benjamins Publishing.
    • This chapter makes a philosophical case against the claim that romantic relationships with robots will be more satisfying because robots can be made to conform to the human’s wishes. The author’s dismissal of this thesis does not rest on any technical limitation in robot building but is instead rooted in a thought experiment comparing two different kinds of partner robots: one capable of rejecting its owner and one which is not.
  • Fosch-Villaronga, E., & Poulsen, A. (2020). Sex care robots. Paladyn, Journal of Behavioral Robotics, 11(1), 1-18.
    • This paper explores the potential uses of sexual robot technologies towards the improvement of sexual satisfaction among elderly and disabled people. The authors identify the potential need to incorporate sex within the concept of care and propose that sex robots can serve to enhance this concept of care in marginalized communities. By investigating the use of robot technology to aid in sexual gratification, the authors hope to inform the policy debate around the regulation of robots and set the scene for further research.
  • Franceschi, V. (2012). “Are you alive?” Issues in self-awareness and personhood of organic artificial intelligence. Polemos (Roma), 6(2), 225–247.
    • This journal article examines the social and legal position of some uses of artificial intelligence (AI), such as cyborgs, robots, and androids. The author argues that AI technologies might advance to the point of overcoming their programming by developing their self-awareness and personalities. The author points to the social and legal inequalities that could occur if these systems significantly shape human experience and choices.  
  • Frank, L., & Nyholm, S. (2017).* Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable? Artificial Intelligence and Law, 25, 305–323.
    • This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. The authors present and analyze reasons to answer “yes” or “no” to these questions by examining the concept of consent in general, as well as critiques of its adequacy in the domain of sexual ethics, the relationship between consent and free will, and the relationship between consent and consciousness.
  • Gersen, J. S. (2019). Sex machina: Intimacy and artificial intelligence. Columbia Law Review, 119(7), 1793–1810.
    • This paper emphasizes the legal implications flowing from the existence of sex robots and argues that lawmakers will have to acknowledge the rising importance of digisexuality, i.e., the robot-to-human relationship. The author explores the positive and negative societal consequences of sex robots and their machine learning systems, especially for intimate human-to-human and robot-to-human relationships, and considers the legal and ethical questions arising from the proliferation of sex robots.
  • Grunsven, J. V. (2022). Anticipating sex robots: A critique of the sociotechnical vanguard vision of sex robots as ‘good companions’. In Being and value in technology (pp. 63-91). Palgrave Macmillan.
    • The development of humanoid sex robots that bear some physical resemblance to human beings and possess artificially intelligent (AI) functionalities enabling quasi-intelligence suggests that such entities could be “good companions” that transform the romantic lives of human persons. Such robots could be especially useful to persons who have trouble forming traditional love relationships with other humans. However, the author argues that sex robots are better understood as distributed systems than as beings with human agency, and that the tendency to frame sex robots as (quasi-)autonomous beings who may or may not serve as good companions has not itself been thematized and critically reflected upon.
  • Gutiu, S. (2016). The robotization of consent. In R. Calo, M. Froomkin, & I. Kerr (Eds.), Robot law (pp. 186–212). Edward Elgar Publishing.
    • This chapter explains how sex robots can impact existing gender inequalities and the understanding of consent in sexual interactions between humans. Sex robots are defined by the irrelevancy of consent, replicating existing gender imbalances by emulating and eroticizing female sexual slavery. The author discusses the documented harms of extreme pornography and the expected harms of sexbots, connecting these to the legal concepts of harm under the Canadian and U.S. legal systems.
  • Halberstam, J. (2008). Animating revolt/revolting animation: Penguin love, doll sex and the spectacle of the queer nonhuman. In M. Hird & N. Giffney (Eds.), Queering the non/human. Taylor & Francis.
    • This chapter applies a queer theory approach to sex robots, suggesting that new forms of animation – from transgenic mice to female cyborgs and Tamagotchi toys – productively shift the terms and the meaning of the artificial boundaries between humans, animals, machines, states of life and death, animation and reanimation, living, evolving, becoming, and transforming. The author discusses the interdependence of reproductive and non-reproductive communities. 
  • Hancock, E. (2020). Should society accept sex robots? Paladyn, Journal of Behavioral Robotics, 11(1), 428-442.
    • The Campaign Against Sex Robots (CASR) argues against the production of sex robots and promotes abolitionist narratives in favour of the criminalisation of sex robots due to their perceived harm to the livelihoods of sex workers. This article analyzes whether sex workers and sex robots can be analogously compared and whether they exhibit similar credibility and viability in the digitalised sex industry. In order to disentangle radical arguments surrounding sex robots and the contemporary sex industry, the author formulates solutions based on ethical and social contentions. 
  • Harper, C. A., & Lievesley, R. (2020). Sex doll ownership: An agenda for research. Current Psychiatry Reports, 22(10), 1-8.
    • This literature review examines existing psychological, sexological, and legal literature in relation to sex doll ownership. A variety of opinions about the potential socio-legal positions on sex doll ownership are represented in the literature. However, there is a lack of empirical analysis of the psychological characteristics and behavioral implications of sex doll ownership, which highlights the need for additional research in this field.
  • Hauskeller, M. (2014). Sex and the posthuman condition. Palgrave McMillan. 
  • Hauskeller, M. (2014). Sex and the posthuman condition. Palgrave Macmillan.
  • Kaufman, E. (2020). Reprogramming consent: Implications of sexual relationships with artificially intelligent partners. Psychology and Sexuality, 11(4), 372–383.
    • This journal article focuses on discussions around sexual consent, and the potential implications of sexual norms and standards for AI technologies. The author bases their argument on data from “Club RealDoll,” and explores how AI systems have identified normative values in different users’ attitudes towards sexual consent. 
  • Keyes, O., et al. (2012). Truth from the machine: Artificial intelligence and the materialization of identity. Interdisciplinary Science Reviews, 46(1-2), 158-175.
    • This article examines the intersection of two criticisms of artificial intelligence (AI): first, that it will lead to identity-based discrimination and second, that it will disrupt the growth of scientific research. The authors use case studies to demonstrate that when AI is deployed in scientific research about identity and personality, it can naturalise and reinforce biases. The authors argue that the concerns about scientific knowledge and identity are related, as positioning AI as a source of truth and scientific knowledge can have the effect of lending public legitimacy to harmful ideas about identity.
  • Kikerpill, K. (2020). Choose your stars and studs: The rise of deepfake designer porn. Porn Studies, 7(4), 352-356.
    • Deepfakes using AI-based audio- and image-swapping technology have broadened the horizons of porn production. The emergence of deepfake designer pornography involves the likeness of individuals being taken from internet searches and used in the production of porn without their consent. This sub-strand of involuntary pornography has many legal and privacy ramifications with far-reaching social implications. The development of applicable privacy and data protection laws is a step towards deterring the use of deepfakes in the creation of involuntary pornography. However, the author argues it would be naive to assume such laws would prompt fundamental changes in the use of deepfakes in cases of involuntary porn.
  • Kubes, T. (2019). New materialist perspectives on sex robots: A feminist dystopia/utopia? Social Sciences, 8(8), 224.
    • This article re-evaluates feminist critiques of sex robots from a new materialist perspective, suggesting that sex robots may not be an exponentiation of hegemonic masculinity to the extent that the technology can be queered. When the beaten tracks of pornographic mimicry are left behind, sex robots may in fact enable new liberated forms of sexual pleasure beyond fixed normalizations, thus contributing to a sex-positive utopian future.
  • Lee, J. (2017). Sex robots: The future of desire. Palgrave Macmillan.
    • This book thinks through the sex robot beyond the human/non-human binary, arguing that non-human sexuality has been at the heart of culture throughout history. Taking a philosophical approach to what the sex robot represents and signifies, the author discusses the roots, possibilities, and implications of the not-so-new desire for sex robots.
  • Levy, D. (2009).* Love and sex with robots: The evolution of human-robot relationships. Gerald Duckworth & Company.
    • This popular non-fiction book consists of two parts, one concerning love with robots and the other concerning sex with robots. Using a range of examples, the author argues that the ability to feel affection for animate creations is long underway, making physical intimacy a logical next step. Moving from love to sex rather than the other way, the author makes the case that even entities that were once deemed cold and mechanical can soon become the objects of real, human desire.
  • Levy, K. (2014).* Intimate surveillance. Idaho Law Review, 51(3), 679–693.
    • This article considers how new technical capabilities, social norms, and cultural frameworks are beginning to change the nature of intimate monitoring practices. Focused on practices occurring on an interpersonal level, i.e., in an intimate relationship between two partners, the author examines the relations between data collection, values, and privacy from dating and sex to fertility, fidelity, and finally, abuse. The author closes with reflections on the role of law and policy in the emerging domain of intimate (self-)surveillance.
  • Lieberman, H. (2017).* Buzz: The stimulating history of the sex toy. Pegasus Books.
    • This popular non-fiction book focuses on the history of sex toys from the 1950s to the present, tracing how once taboo devices reached the cultural mainstream. This historical account moves from sex toys as symbols of female emancipation and tools in the fight against HIV/AIDS to consumerist marital aids and, finally, to mainstays in popular culture.
  • Lupton, D. (2014).* Quantified sex: A critical analysis of sexual and reproductive self-tracking using apps. Culture, Health & Sexuality, 17(4), 440–453.
    • This article presents a critical analysis of computer apps used to self-track features of users’ sexual and reproductive activities and functions. The analysis reveals that such apps represent sexuality and reproduction in defined and limited ways that perpetuate normative stereotypes and assumptions about women and men as sexual and reproductive subjects, and it exposes issues concerning privacy, data security, and the use of the data these apps collect. The author suggests ways to ‘queer’ self-tracking technologies in response to these issues.
  • Ma, J., et al. (2022). Sex robots: Are we ready for them? An exploration of the psychological mechanisms underlying people’s receptiveness of sex robots. Journal of Business Ethics, 1–17.
    • This paper investigates public receptiveness towards the application of artificial intelligence (AI) and robotic technology for human sexual gratification. The authors find that broad anxiety about emerging technologies and religiosity reduce the perceived substitutability of sex robots for human-to-human sexual interaction. By examining the psychology underlying the public’s receptiveness towards sex robots, the authors aim to raise awareness of the significant social and ethical implications should sex robots become widely accepted and adopted.
  • McArthur, N., & Twist, M. (2017).* The rise of digisexuality: Therapeutic challenges and possibilities. Sexual and Relationship Therapy, 32(3–4), 334–344.
    • This article argues that clinicians in therapeutic settings should be prepared to work with ‘digisexuals’: people whose primary sexual identity comes through the use of radical new sexual technologies. Guidelines for helping individuals and relational systems make informed choices regarding participation in technology-based activities of any kind, let alone ones of a sexual nature, are few and far between. The authors articulate a framework for understanding the nature of digisexuality and how to approach it.
  • Mindell, D. (2015). Our robots, ourselves: Robotics and the myths of autonomy. Viking.
    • Departing from the future tense that is common in conversations about robots, this book investigates the most advanced robotics that currently exist. Deployed in the high atmosphere, the deep ocean, and outer space, these robotic applications show that stark lines between human and non-human, or manual and automated, are not helpful. The author clarifies misconceptions about the autonomous robot in order to foreground the human presence at the center of the technological landscape.
  • Nørskov, M. (2016). Social robots: Boundaries, potential, challenges. Routledge.
    • This book introduces cutting-edge research on social robotics, referring to robots used for entertainment, partnership, caregiving, and similar roles. The author critiques the development of these AI technologies based on the challenges they pose to society, arguing that social robots will eventually become their own category of people, developing minds equal to those of human beings in terms of cognitive behaviour and interaction.
  • Nyholm, S., & Frank, L. (2017). From sex robots to love robots: Is mutual love with a robot possible? In J. Danaher & N. McArthur (Eds.), Robot sex: Social and ethical implications. MIT Press.
    • This paper provides a framework for approaching the question of mutual love and outlines criteria regarding the characteristics that sex robots must exhibit in order to participate in relationships with humans that can be recognized as mutual love. Such criteria relate to “being a good match,” the idea of lovers as valuing each other in their distinctive particularity, and associations with commitment.
  • Oleksy, T., & Wnuk, A. (2021). Do women perceive sex robots as threatening? The role of political views and presenting the robot as a female- vs male-friendly product. Computers in Human Behavior, 117, 106664.
    • The development of sex robots has provoked debate about the social consequences of their adoption. This experimental study examined whether heterosexual women perceive sex robots as a sexual threat and whether that perceived threat can be reduced by depicting the robots as products suitable for women. Presenting sex robots as products suitable for women decreased perceived sexual threat relative to presenting them as products designed for men, but only among participants who held more liberal political views; women with more conservative views perceived sex robots as a threat regardless of how the robots were framed. The study highlights the importance of accounting for political views when examining the social perception of controversial technologies such as sex robots.
  • Richardson, K. (2020). Sex robots: The end of love. Polity Press.
    • This book is an anthropological critique of sex robots, here taken up as points of insight into how women and girls are imagined and how porn, prostitution, and the sexual exploitation of children drive the desire for them. The author argues that sex robots are produced within a framework of ‘property relations,’ in which egocentric Man (and his disconnection from Woman) shapes the building of robots and AI. This makes sex robots a major threat to the possibility of love and connection.
  • Robinson, S. (2020). Trust, transparency, and openness: How inclusion of cultural values shapes Nordic national public policy strategies for artificial intelligence (AI). Technology in Society, 63, 101421. https://doi.org/10.1016/j.techsoc.2020.101421
    • This paper focuses on the openness of Nordic AI policy. The author argues that institutionalizing cultural values within AI policies promotes greater trust in AI technologies and machines. The analysis considers three values present in Nordic culture: ethics, privacy, and autonomy.
  • van Oost, E. (2003). Materialized gender: How shavers configure the users’ femininity and masculinity. In N. Oudshoorn & T. Pinch (Eds.), How users matter: The co-construction of users and technologies. MIT Press.
    • This chapter is part of an edited volume that examines how users shape technology from design to implementation. The author uses the case study of shaving devices marketed to men or women to show how design trajectories make use of “gender scripts”: particular representations of the male and female consumer that become inscribed in the design of artefacts. The analysis suggests that technical competence is inscribed in artefacts marketed to men, while products targeting women inscribe a disinterest in technology on their users.
  • Verbeek, P.-P. (2005). Artifacts and attachment: A post-script philosophy of mediation. In H. Harbers (Ed.), Inside the politics of technology: Agency and normativity in the co-production of technology and society (pp. 125–146). Amsterdam University Press.
    • This chapter uses Bruno Latour’s theory of technological mediation to explain how technologies foster attachment on the part of their users. For attachment to occur, artefacts should be present in an engaging way, stimulating users to participate in their functioning. Attachment always involves the materiality of the artefact, not merely its functioning, meaning that users also develop a bond with the machinery and material operation of artefacts.
  • Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
    • This book defines ‘surveillance capitalism’ as a novel market form and a specific logic of capitalist accumulation. If industrial capitalism exploits nature, surveillance capitalism exploits human nature through the installation of a global architecture of computer mediation that the author calls “Big Other.” Through this architecture’s hidden mechanisms of extraction, commodification, and control, surveillance capitalism erodes the human potential for self-determination, threatening core values such as freedom, democracy, and privacy.

An asterisk (*) after a reference indicates that it was included among the Further Readings listed at the end of the Handbook chapter by its author.