Chapter 22. Perspectives on Ethics of AI: Computer Science (Benjamin Kuipers)
- Abebe, R., et al. (2020). Roles for computing in social change. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 252-260). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372871
- This publication presents four roles that computing research can play in addressing social problems: (1) serving as a diagnostic, (2) helping formalize how social problems are defined, (3) clarifying what is possible via technical tools, and (4) illuminating long-standing social problems to the public. The framework describes both the potential of computational research to effect positive social change and the limits of its ability to solve societal problems on its own.
- Ali, M., et al. (2021). Ad delivery algorithms: The hidden arbiters of political messaging. In L. Lewin-Eytan, D. Carmel, & E. Yom-Tov (Eds.), Proceedings of the 14th ACM International Conference on Web Search and Data Mining (WSDM) (pp. 13-21). Association for Computing Machinery. https://doi.org/10.1145/3437963.3441801
- This study investigates the impact of Facebook’s ad delivery algorithms on political ads, specifically analyzing political polarization as one of their effects. Ali and colleagues demonstrate that the ad delivery algorithms inhibit campaigns from reaching diverse groups of voters. The investigation further demonstrates that current reform efforts aimed at improving the reach of campaigns to diverse groups and reducing polarization are insufficient. Thus, the authors suggest requiring more public transparency for the algorithms used to deliver political campaign ads.
- Awad, E., et al. (2018). The moral machine experiment. Nature, 563(7729), 59-64.
- This article aims to address concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide these machines. The authors utilize the Moral Machine, an online experimental platform, to gather data which is analyzed to come to a recommendation as to how machine decision making should be determined.
- Bonnefon, J. F., et al. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573-1576.
- This paper examines whether utilitarian objectives (where the vehicle sacrifices its passengers for the greater good) can inform ethical decision making for autonomous vehicles. Drawing from six Amazon Mechanical Turk studies, the authors found that participants approved of utilitarian objectives in principle but would prefer to ride in, and purchase, vehicles that prioritize passenger safety over the safety of others.
- Cowgill, B., et al. (2020). Biased programmers? Or biased data? A field experiment in operationalizing AI ethics. In P. Biro & J. Hartline (Eds.), Proceedings of the 21st ACM Conference on Economics and Computation (pp. 679-681). Association for Computing Machinery. https://doi.org/10.1145/3391403.3399545
- This paper covers the findings from a large-scale experiment with 400 machine learning engineers designed to identify actions throughout the machine learning development pipeline that lead to biased predictors. The study found that the majority of biased predictions are a function of biased training data. It further found that reminding engineers about the possibility of bias can be almost as effective as de-biasing algorithms.
- Flanagan, O. (2016). The geography of morals: Varieties of moral possibility. Oxford University Press.
- This book uses comprehensive dialogue between cultural and psychological anthropology, empirical moral psychology, and behavioral economics with the aim of presenting and exploring cross-cultural and world philosophy.
- Floridi, L., et al. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689-707.
- This article introduces the main opportunities and risks of artificial intelligence for society and presents five ethical principles that serve as a basis for the development and adoption of such technologies. The authors also offer 20 concrete recommendations to assess, develop, incentivize, and support “good” artificial intelligence, which may be used by national and international policy makers or by stakeholders.
- Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349-379.
- This paper first presents a concept of agency for artificial agents, and then explores the concerns this raises about the morality and responsibility of such agents. The authors argue that there is substantial and important scope for the concept of an artificial moral agent that does not necessarily exhibit free will, mental states, or responsibility.
- Gibbs, J. C. (2019). Moral development and reality: Beyond the theories of Kohlberg, Hoffman, and Haidt. Oxford University Press.
- This text presents and argues for a new view of lifespan socio-moral development based on an exploration of moral identity and other variables that account for prosocial behavior.
- Giroux, M., et al. (2022). Artificial intelligence and declined guilt: Retailing morality comparison between human and AI. Journal of Business Ethics, 1-15.
- The article demonstrates that consumers’ moral concerns and behaviours differ when they interact with AI technologies rather than with humans. The authors show that moral intentions (e.g., reporting an error or mistake) are less likely to be acted on at AI-powered checkouts and self-checkout machines than at human-staffed checkouts. They argue that this decline in morality is primarily caused by reduced guilt toward newly emerging technologies.
- Goodall, N. J. (2014). Machine ethics and automated vehicles. In Road Vehicle Automation (pp. 93-102). Springer.
- This chapter introduces the concept of moral behaviour for an automated vehicle. The author claims that automated vehicles must continuously decide how to act reasonably even with failure-free hardware and perfect sensing. The author then argues that there needs to be research in this area that responds to anticipated critiques and discusses relevant applications from machine ethics and moral modelling research.
- Gulati, S., et al. (2019). Design, development and evaluation of a human-computer trust scale. Behaviour & Information Technology, 38(10), 1004-1015.
- This paper argues that as more tasks are delegated to intelligent systems and user interactions with these systems become increasingly complex, there must be a metric by which to quantify the amount of trust that a user is willing to place on such systems. The authors then present their own multi-dimensional scale to assess user trust in HCI.
- Green, B., & Hu, L. (2018). The myth in the methodology: Towards a recontextualization of fairness in machine learning. In Machine Learning: The Debates Workshop at the 35th International Conference on Machine Learning (ICML). https://www.benzevgreen.com/wp-content/uploads/2019/02/18-icmldebates.pdf
- Many definitions of fairness in machine learning technologies are statistical and do not incorporate critical social and normative analyses. This work provides arguments for why these definitions fail to capture important fairness concerns that are situated in social, political, and moral debates. Finally, the paper argues that without change in how machine learning researchers work on fairness, there will be little impact on eventual justice.
- Greene, J. D. (2013).* Moral tribes: Emotion, reason, and the gap between us and them. Penguin.
- This book explores how our evolved tendency to cooperate with a select group of others (Us) while fighting off everyone else (Them) can coexist with modern conditions of shared space, in which the moral lines that divide us become both more salient and more puzzling.
- Haidt, J. (2012).* The righteous mind: Why good people are divided by politics and religion. Vintage.
- In this text, the author draws on research on moral psychology to argue that moral judgments arise not from reason but from gut feelings. Thus, given that different groups have different intuitions about right and wrong, this creates polarization within a population.
- Holstein, K., et al. (2019). Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-16).
- This article reports on systematically conducted interviews and surveys of machine learning practitioners, studying the challenges and needs that commercial product teams face when developing fair ML systems. The authors also highlight areas where the teams’ needs match existing fair ML solutions, and areas of disconnect, in order to propose directions for future research that meets practitioners’ needs.
- Imana, B., et al. (2021). Auditing for discrimination in algorithms delivering job ads. In Proceedings of the Web Conference 2021 (pp. 3767-3778).
- The article provides a new method to identify discrimination in the delivery of job advertisements. It identifies the distinction between skew due to protected categories (e.g. gender and race) and skew due to people’s qualification differences in the targeted audience. The authors then develop a method to systematically distinguish these differences and confirm that some job ads are skewed by gender in ad delivery on Facebook.
- Kleinberg, J., & Raghavan, M. (2021). Algorithmic monoculture and social welfare. Proceedings of the National Academy of Sciences, 118(22). https://doi.org/10.1073/pnas.2018340118
- As algorithms are deployed more broadly, there are concerns that decisions become homogeneous as multiple entities adopt the same algorithms. This study provides a theoretical analysis demonstrating why multiple entities using the same algorithm, even one that is more accurate overall, can lead to worse outcomes for society than not using the algorithm at all. The authors characterize this as algorithmic monoculture, analogous to monoculture in agriculture.
- Liu, L. T., et al. (2018). Delayed impact of fair machine learning. Proceedings of Machine Learning Research, 80, 3150-3158. http://proceedings.mlr.press/v80/liu18c.html
- This study empirically and theoretically explores the impact on long-term fairness of optimizing machine learning models for static fairness measures. These analyses demonstrate that optimizing static fairness measures does not guarantee fairness over time. In fact, it can negatively impact the long-term fairness of the system whereas optimizing without these objectives would not have this effect.
- Martin, D. Jr., et al. (2020). Extending the machine learning abstraction boundary: A complex systems approach to incorporate societal context. arXiv:2006.09663
- This study examines three new tools for providing an in-depth understanding of the societal context underlying the development and deployment of machine learning algorithms: (1) a complex adaptive systems model that helps researchers and engineers incorporate societal context into their understanding of machine learning fairness, (2) collaborative causal theory formation (CCTF), a sociotechnical framework for combining different mental models and causal models of the problem at hand, and (3) community-based system dynamics as a way to practice CCTF throughout the machine learning pipeline.
- Mitchell, S., et al. (2021). Algorithmic fairness: Choices, assumptions, and definitions. Annual Review of Statistics and Its Application, 8, 141-163.
- This article explores what fairness means in the context of decisions based on the predictions of statistical and machine learning models. Due to the rapid growth of this field, motivations, terminologies, and notations are often inconsistent, creating a need for order. The authors explicate the choices and assumptions made to justify the use of prediction-based decision making and show how they can raise fairness concerns. They also present a notationally consistent catalog of fairness definitions from the literature.
- Othman, K. (2021). Public acceptance and perception of autonomous vehicles: A comprehensive review. AI and Ethics, 1(3), 355-387.
- The article points out that while both research and industry have put much effort into developing autonomous vehicles, laws and regulations are not yet ready for adoption in real-world scenarios. The author reviews previous studies testing public acceptance and perception of autonomous vehicles and provides an overview of the main trends in autonomous vehicle research, offering directions and recommendations for further development in safety, ethics, liability, and regulation.
- Lin, P., et al. (Eds.). (2012).* Robot ethics: The ethical and social implications of robotics. MIT Press.
- Starting with an overview of the issues and relevant ethical theories, the topics flow naturally from the possibility of programming robot ethics to the ethical use of military robots in war to legal and policy questions, including liability and privacy concerns. The book ends by examining the question of whether robots should be given moral consideration.
- Pinker, S. (2018).* Enlightenment now: The case for reason, science, humanism, and progress. Penguin.
- Citing data that tracks social progress, Pinker argues that reason and science can enhance human flourishing, and that reliance on these logical and scientific principles is required to continue the trajectory of increasing health, prosperity, safety, peace, knowledge, and happiness.
- Selbst, A. D., et al. (2019). Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 59-68).
- This article identifies the need for machine-learning-based systems to achieve social and legal outcomes such as fairness, justice, and due process when introduced into a social context. It highlights the mismatch between these concepts and the computer science concepts, such as abstraction and modular design, that are used to define fairness and produce fair machine learning algorithms. The authors contend that technical interventions may be ineffective, inaccurate, or misguided when used to make decisions in a societal context, explaining why such pitfalls occur and how to avoid them.
- Sharkey, A., & Sharkey, N. (2012). Granny and the robots: Ethical issues in robot care for the elderly. Ethics and Information Technology, 14(1), 27-40.
- This article addresses the growing proportion of the elderly in society and the increasing ubiquity of robotics. It outlines developments in robotic applications that assist the elderly and their caretakers, from health and safety monitoring to providing companionship. The authors discuss six main ethical concerns arising from growing interaction between robots and the elderly and conclude by weighing the benefits of robot care against its ethical costs.
- Singer, P. (2011).* The expanding circle: Ethics, evolution, and moral progress. Princeton University Press.
- Drawing from the fields of philosophy and evolutionary psychology, this book argues that although altruism began as a genetically based drive to protect one’s kin and community members, it is not solely dictated by biology. Rather, altruism and by extension human ethics has developed as a result of our capacity for reasoning that leads to conscious ethical choices with an expanding circle of moral concern.
- Thieme, A., et al. (2020). Machine learning in mental health: A systematic review of the HCI literature to support the development of effective and implementable ML systems. ACM Transactions on Computer-Human Interaction (TOCHI), 27(5), 1-53.
- The article summarizes the current state-of-the-art AI work on mental health and provides concrete suggestions for the stronger integration of human-centred and multi-disciplinary approaches in research and development. The paper combines analysis from ML literature and HCI literature on psycho-socially based mental health conditions. The authors argue that there needs to be more consideration of the social and ethical implications in developing AI models for successful adoption in real-world mental health treatment.
- Tomasello, M. (2016).* A natural history of human morality. Harvard University Press.
- This book presents an account of the evolution of human moral psychology based on analysis and comparison of experimental data comparing great apes and human children. The author presents an argument for our development based on two key evolutionary steps: the move towards collaboration, and the emergence of distinct cultural groups.
- Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press.
- This book aims to apply classical philosophical traditions of virtue ethics to challenges of a global technological society. The author argues that a moral framework based in virtue ethics represents the ideal guiding principles for contemporary society.
- van der Woerdt, S., & Haselager, P. (2016). Lack of effort or lack of ability? Robot failures and human perception of agency and responsibility. In Benelux Conference on Artificial Intelligence (pp. 155-168).
- This study explores how attributing an agent’s failures to either lack of effort or lack of ability can have important consequences for attributions of responsibility. The study concludes that a robot displaying lack of effort significantly increases human attributions of agency and, to some extent, moral responsibility to the robot.
- Vanderelst, D., & Winfield, A. (2018). The dark side of ethical robots. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 317-322).
- This paper argues that the recent focus on building ethical robots inevitably also enables the construction of unethical robots, since the cognitive machinery used to make a robot ethical can easily be corrupted. In the face of these risks, the authors advocate caution in embedding ethical decision making in real-world safety-critical robots.
- Wallach, W., & Allen, C. (2008).* Moral machines: Teaching robots right from wrong. Oxford University Press.
- This book explores the problem of software governing autonomous systems being “ethically blind,” in the sense that the decision-making capabilities of such systems do not involve any explicit moral reasoning. The authors explore the necessity for robots to become capable of factoring ethical and moral considerations into their decision making, as well as potential routes to achieve this.
- Wright, R. (2000).* Nonzero: The logic of human destiny. Pantheon.
- The author employs game theory and the logic of “zero-sum” and “non-zero-sum” games to argue against the conventional understanding that evolution and human history were aimless, presenting the view that evolution pushed humanity towards social and cultural complexity.
- Zhuang, S., & Hadfield-Menell, D. (2020). Consequences of misaligned AI. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, & H. Lin (Eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020. NeurIPS. https://proceedings.neurips.cc/paper/2020/hash/b607ba543ad05417b8507ee86c54fcb7-Abstract.html
- AI systems often operate on a partial understanding of the end user’s objectives, which is used to formalize a utility function and an optimization algorithm for learning the behavior that best achieves those objectives. This study analyzes the effect of incomplete information about the end user’s objectives on overall utility. The authors theoretically demonstrate that allowing interactivity between the user and the agent yields greater benefits when designing the reward function.
- Zuboff, S. (2019).* The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
- This book defines surveillance capitalism as the quest by powerful corporations to predict and control our behavior. It then argues that the total certainty for maximum profit promised by surveillance capitalism comes at the expense of democracy, freedom, and our human future.
Chapter 23. Social Failure Modes in Technology and the Ethics of AI: An Engineering Perspective (Jason Millar)
- Akrich, M. (1992). The De-scription of technical objects. In W. E. Bijker and J. Law (Eds.), Shaping technology/building society (1st ed., pp. 205-224). MIT Press.
- The author outlines how technical objects simultaneously embody and measure a set of relations between humans and non-humans, and how they may generate both forms of knowledge and moral judgments. The author argues that technical objects have the ability to script or prescribe behavior.
- Bicchieri, C., et al. (2018). Social norms. In E. N. Zalta (Ed.) The Stanford encyclopedia of philosophy (Winter 2018 ed.). Stanford University. https://plato.stanford.edu/archives/win2018/entries/social-norms.
- This encyclopedia entry outlines a philosophical view on social norms, which are described as the endogenous product of individuals’ interactions. The authors highlight how central the concepts of beliefs, expectations, group knowledge, and common knowledge are to the idea of social norms. Specifically, focusing on expectations allows the differentiation between social norms, conventions, and descriptive norms, the lines between which are often blurred in other social science literature.
- Bicchieri, C. (2006). The grammar of society: The nature and dynamics of social norms. Cambridge University Press.
- The author examines social norms, such as fairness, cooperation, and reciprocity, in an effort to understand their nature and dynamics, the expectations that they generate, and how they evolve and change. The author provides a definition of social norms which in turn enables an investigation of what it means for a social norm to be designed into an artifact.
- Bijker, W. E., et al. (Eds.). (1987). The social construction of technological systems: New directions in the sociology and history of technology. MIT Press.
- The authors introduce a new method of inquiry—social construction of technology, or SCOT—that became a key part of the wider discipline of science and technology studies. Essays in this book tell stories about such varied technologies as thirteenth-century galleys, eighteenth-century cooking stoves, and twentieth-century missile systems. This book approaches the study of technology by giving equal weight to technical, social, economic, and political questions, and demonstrates the effects of the integration of empirics and theory.
- Brandtzaeg, P. B., & Følstad, A. (2018). Chatbots: Changing user needs and motivations. Interactions, 25(5), 38-43. https://interactions.acm.org/archive/view/september-october-2018/chatbots#comments
- This article discusses how a recent uptake in chatbots has revealed some of the current pitfalls of chatbot technology and its needs going forward. The authors argue that chatbots are not designed well for their intended use cases and need improved designs that incorporate user needs and experiences. They also discuss a key challenge for the human-computer interaction (HCI) community: the unpredictable and highly variable nature of user inputs.
- Calo, R., et al. (Eds.).* (2016). Robot law. Edward Elgar Publishing.
- Robot Law collects papers by a diverse group of scholars focused on the larger consequences of the increasingly discernible future of robotics. It explores the increasing sophistication of robots and their widespread deployment into hospitals, public spaces, and battlefields. The book also explores how this requires rethinking of a wide variety of philosophical and public policy issues, including how this technology interacts with existing legal regimes.
- Cech, E. (2013). Culture of disengagement in engineering education? Science, Technology, & Human Values, 39(1), 42-72. https://doi.org/10.1177/0162243913504305
- The author uses a longitudinal survey to measure engineering students’ public welfare beliefs over time, whether engineering programs emphasize social engagement, and how this relates to students’ welfare beliefs. The author finds evidence for a “culture of disengagement”, suggesting that students may place little importance on ethical responsibilities, understanding the consequences of technology, understanding how people use machines, and social consciousness. The author suggests that engineering programs can change this culture in order to foster more engaged engineers.
- Chiu, M., et al. (2018, November). Applying artificial intelligence for social good. McKinsey Global Institute. https://www.mckinsey.com/featured-insights/artificial-intelligence/applying-artificial-intelligence-for-social-good
- This paper offers a detailed analysis of how AI, while not a silver bullet, could help tackle some of the world’s most challenging social problems. Topics include: mapping AI use cases to domains of social good; AI capabilities that can be used for social good; overcoming bottlenecks and identifying risks to be managed; and scaling up the use of AI for social good.
- Consultation Group – Engineering Instruction and Accreditation. (2016). Graduate attributes. Canadian Engineering Accreditation Board. https://engineerscanada.ca/sites/default/files/Graduate-Attributes.pdf
- This document outlines the graduate attributes required for an accredited engineering program in Canada. This document highlights the importance of engineers who understand the impact of engineering on society, engineering ethics, and equity.
- D’Aquin, M., et al. (2018). Towards an “ethics by design” methodology for AI research projects. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New Orleans, USA. https://doi.org/10.1145/3278721.3278765
- This work aims to address a core challenge of AI in both industry and academia: the need to address the ethical challenges that arise in the design of AI systems. The authors argue that AI researchers themselves are not equipped with all of the skills necessary to identify and rectify these issues, and so they propose a design methodology that incorporates these skills from the start. They explore two case studies in which such ethical considerations have been examined in specific research projects.
- Eadicicco, L., et al. (2017, April 3). The 20 most successful technology failures of all time. Time Magazine. http://time.com/4704250/most-successful-technology-tech-failures-gadgets-flops-bombs-fails/
- This is a list of failures that have led to success or may yet lead to something world-changing, hence the labeling of the items on the list as technology’s most successful failed products. Like an experiment gone awry, they can still teach us something about technology and how people want to use it. This article covers both technical and social failures, and products including Napster, BlackBerry, and AOL.
- Evans, R., & Collins, H. M. (2007).* Rethinking expertise. University of Chicago Press.
- The authors offer a new perspective on the role of expertise in science and evaluation of technology. They ask whether the public can make use of science and technology before the scientific community comes to a consensus. The authors develop a Periodic Table of Expertises based on the idea of tacit knowledge—knowledge that we have but cannot explain. They use this to explain how different forms of expertise are used, how some expertise is used to judge others, how lay people judge between experts, and how credentials are used to evaluate them.
- Felzmann, H., et al. (2019). Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society, 6(1). https://doi.org/10.1177/2053951719860542
- The authors discuss the importance of transparency under the General Data Protection Regulation (GDPR). They present the pitfalls of the legal transparency requirements of GDPR and the lack of clarity on the benefits of increased transparency for end users. Finally, they propose a relational understanding of transparency focused on communication between the technology providers and users.
- Friedman, B., et al. (2002). Value sensitive design: Theory and methods. Washington State University Dept. of Computer Science & Engineering. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.11.8020&rep=rep1&type=pdf
- This work builds on the Value Sensitive Design framework, a technology design approach that accounts for human values throughout the design process. The authors draw on three projects (cookies in a web browser, projection technology in an office space, and an interface for integrated land use) to demonstrate the value of this framework.
- Friedman, B., & Kahn, P. H., Jr. (2003).* Human values, ethics, and design. In The human-computer interaction handbook (pp. 1177–1201). CRC Press. https://depts.washington.edu/hints/publications/Human_Values_Ethics_Design.pdf
- This article reviews how the field of human-computer interaction (HCI) has addressed the following topics: how values become implicated in technological design; distinguishing usability from human values with ethical import; the major HCI approaches to key human values relevant for design; and the special ethical responsibilities of HCI professionals.
- Greene, D., et al. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In T. X. Bui (Ed.), Proceedings of the 52nd Hawaii International Conference on System Sciences (pp. 2122-2131). https://aisel.aisnet.org/hicss-52/dsm/critical_and_ethical_studies/2/
- The authors examine several high-profile value statements on the ethical use of artificial intelligence and machine learning under the lens of design theory and the sociology of business ethics. They demonstrate that while these statements share framing from critical methodologies in science and technology studies, they are missing a focus on social justice and equity.
- Hvistendahl, M. (2017, December 14). Inside China’s vast new experiment in social ranking. WIRED. https://www.wired.com/story/age-of-social-credit/
- This article delves into how China is taking the idea of a credit score to the extreme. By using big data to track and rank what its citizens do—including purchases, pastimes, and mistakes—China is able to take its practice of social engineering to a new level in the 21st century. In order to illustrate the impact of China’s use of technology on individual lives, the author provides a detailed account of her and her friend’s experiences of living within this system over a period of several years.
- Kudina, O., & Verbeek, P. P. (2019). Ethics from within: Google Glass, the Collingridge dilemma, and the mediated value of privacy. Science, Technology, & Human Values, 44(2), 291-314. https://doi.org/10.1177/0162243918793711
- This study investigates how people characterize the value of privacy for Google Glass based on online discussions. The authors focus on how the meaning of this value changed once the Google Glass was deployed, even in its limited fashion, compared to before the product was deployed. This is inspired by the “control dilemma” of Collingridge, a characterization of situations where it is easy to influence technological developments before they are deployed, but is much harder afterward.
- LaCroix, T., & Bengio, Y. (2019). Learning from learning machines: Optimisation, rules, and social norms. arXiv:2001.00006
- The authors present and explore the analogy between AI systems and economic entities. They demonstrate how findings in economics research may provide solutions for AI safety, and how findings in AI research can help inform economic policy. The authors suggest that the behaviors both fields aim to explain may be better understood through the lens of social norms.
- Latour, B. (1992). Where are the missing masses? The sociology of a few mundane artifacts. In W. E. Bijker & J. Law (Eds.), Shaping technology/building society (pp. 225-258). MIT Press.
- The author explores how artifacts can be deliberately designed to both replace human action and constrain and shape the actions of other humans. The study demonstrates how people can “act at a distance” through the technologies they create and implement, and how, from a user’s perspective, technology can appear to determine or compel certain actions. The author argues that we cannot understand how societies work without an understanding of how technologies shape our everyday lives.
- Latour, B. (1999).* Pandora’s hope: Essays on the reality of science studies. Harvard University Press.
- This collection of essays investigates the relationship between humans and natural or artifactual objects. The author offers an argument for understanding the reality of science in practical terms. Through case studies in the world of technology, the author shows how the material and human world come together and are reciprocally transformed into items of scientific knowledge.
- Lin, P., et al. (Eds.). (2017).* Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford University Press.
- Robot Ethics 2.0 examines the ethical, legal, and policy impacts of robots that are taking on morally important human tasks and decisions while creating new risks. The book focuses on autonomous cars as an important case study that cuts across diverse issues, including psychology, law, trust, and physical safety. The contributors review relevant considerations such as responsibility, trust, and ethics.
- Metz, R. (2014, November 26). Google Glass is dead; Long live smart glasses. MIT Technology Review. https://www.technologyreview.com/2014/11/26/169918/google-glass-is-dead-long-live-smart-glasses/
- This article argues that although Google’s head-worn computer is going nowhere, the technology is sure to march on as intriguing possibilities remain. The author evaluates the reasons for Google Glass’s failure and investigates some potential uses for a smart glass device including serving as a memory aid and productivity enhancer.
- Millar, J., et al. (2020). A framework for addressing ethical considerations in the engineering of automated vehicles (and other technologies). Proceedings of the Design Society: DESIGN Conference, Online. https://doi.org/10.1017/dsd.2020.78
- This work proposes a framework that will allow engineers and designers of automated technology to identify and reason with ethical implications within the design task. The authors report on a demonstration of the feasibility of this framework by having engineers apply it during a workshop.
- Millar, J. (2015). Technology as moral proxy: Autonomy and paternalism by design. IEEE Technology and Society Magazine, 34(2), 47-55. 10.1109/ETHICS.2014.689338
- The author argues that technology is not morally neutral, as many claim. The author states that technological artifacts can act as moral proxies for their user when they are answering moral questions. As part of this argument, the moral link between designers, artifacts, and users is discussed, particularly in the areas of healthcare, bioethics, and design.
- Norman, D. (2013). The design of everyday things revised and expanded edition. Basic Books.
- This book, originally published in 1988, argues that being unable to use a device or object is caused by a design that ignores the needs and psychology of people. Focused on the design of physical products, the author outlines failures in the design of doors, sinks, and coffee pots, and proposes strategies for designing with the user in mind.
- Pearson, C., & Delatte, N. (2006). Collapse of the Quebec bridge, 1907. Journal of Performance of Constructed Facilities, 20(1), 84-91.
- The authors describe the grave implications of the failure of man-made artifacts as a result of physical defects not fully accounted for in their design. They examine the collapse of the Quebec Bridge over the St. Lawrence River in 1907 where seventy-five workers were killed. The authors discuss the investigation of the disaster and the finding that the main cause of the bridge’s failure was improper design by the consulting engineer.
- Peters, D., et al. (2020). Responsible AI – two frameworks for ethical design in practice. IEEE Transactions on Technology and Society, 1(1), 34-47. 10.1109/TTS.2020.297499
- Prompted by the development of new standards for the design of autonomous and intelligent systems, the authors describe two complementary frameworks that help engineers move from principles to practice. They evaluate these frameworks using an ethical analysis on an internet-delivered therapy product. They suggest that these frameworks can be integrated into the engineering design process for the development of intelligent technology.
- Pogue, D. (2013, June 1). Why Google Glass is creepy. Scientific American. https://www.scientificamerican.com/article/why-google-glass-is-creepy/
- This Scientific American article argues that the biggest obstacle to social acceptance of the new technology is “the smugness of people who wear Google Glass and the deep discomfort of everyone who does not.” The author discusses how wearing the glasses can make people you want to talk to uncomfortable, as you may be recording them at any time.
- Rismani, S., & Moon, A. (2021, October 28-31). How do AI systems fail socially?: An engineering risk analysis approach. IEEE International Symposium on Ethics in Engineering, Science and Technology (ETHICS), Waterloo, Ontario, Canada. 10.1109/ETHICS53270.2021.9632769
- This paper explores how a common tool for engineering risk analysis, Failure Mode and Effect Analysis (FMEA), may be used by machine learning practitioners. Specifically, the authors argue that FMEAs can identify a broad range of failures, including social and ethical failures. They propose a process for developing a social FMEA based on the definition of Social Failures provided in this chapter.
- Russell, W., et al. (2012). Technology assessment in social context: The case for a new framework for assessing and shaping technological developments. Impact Assessment and Project Appraisal, 28(2), 109-116. https://doi.org/10.3152/146155110X498843
- The authors argue that because social impacts are core dimensions of technology development, and technological developments are driven by normative visions for society, the traditional expert assessments of technology are not enough. The authors also argue that participatory design on its own is not enough, and they propose a new method called the Technology Assessment in Social Context, which takes a social systems approach to technology assessment.
- Van den Hoven, J., et al. (Eds.). (2014).* Responsible innovation 1: Innovative solutions for global issues. Springer.
- The authors address the methodological issues involved in responsible innovation and provide an overview of recent applications of multidisciplinary research involving close collaboration between researchers in diverse fields such as ethics, social sciences, law, economics, and applied science and engineering. This book delves into the ethical and societal aspects of new technologies and changes in technological systems.
- Verbeek, P. P. (2006). Materializing morality: Design ethics and technological mediation. Science, Technology, & Human Values, 31(3), 361-380. https://doi.org/10.1177/0162243905285847
- This article deploys the “script” concept, indicating how technologies prescribe human actions, in a normative setting. The author explores the implications of the insight that engineers materialize morality by designing technologies that co-shape human actions. The author augments the script concept by developing the notion of technological mediation and its impact on the design process and design ethics.
- Vincent, J. (2016, March 24). Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day. The Verge. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
- The author outlines how it took less than 24 hours for Twitter to corrupt an innocent AI chatbot named Tay. Being, in effect, a robot parrot with an internet connection, Tay began repeating users’ misogynistic, racist, and Donald Trump-like remarks back to them. The article raises serious questions about AI embodying the prejudices of society.
- Wolf, M. J., et al. (2017). Why we should have seen that coming: Comments on Microsoft’s Tay “experiment,” and wider implications. ACM SIGCAS Computers and Society, 47(3), 54-64. https://doi.org/10.1145/3144592.3144598
- The authors analyze Tay, the Microsoft chatbot, as a case study for a larger problem with AI software that interacts with the public. The authors focus on how developers are responsible for these interactions, advocating for additional ethical responsibilities for developers when their AI software will interact with the public or social media.
- Zeeberg, A. (2020, January). What we can learn about robots from Japan. BBC. https://www.bbc.com/future/article/20191220-what-we-can-learn-about-robots-from-japan
- This article contrasts the philosophical traditions of the West with the Japanese Shinto-based view, which makes no categorical distinction between humans, animals, and objects such as robots. While the West tends to see robots and artificial intelligence as a threat, Japan has developed a complex relationship with machines, including a largely positive view of technology rooted in its socioeconomic, historical, religious, and philosophical perspectives.
- Zuidhof, N., et al. (2019). A theoretical framework to study long-term use of smart eyewear. In R. Harle, K. Farrahi, & N. Lane (Eds.), Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers (pp. 667-670). Association for Computing Machinery. https://doi.org/10.1145/3341162.3348382
- The authors provide a theoretical framework that combines perspectives from philosophy, psychology, science and technology studies, and information systems, to study the benefits and harms of using smart eyewear. Their framework is made up of four phases: (1) adoption, (2) influence, (3) re-applying, and (4) behavioral change. Together these phases help explain whether an individual will use the technology and how they will interact with it, with others, and with the larger world around them.
Chapter 24. A Human-Centred Approach to AI Ethics: A Perspective from Cognitive Science (Ron Chrisley)
- Alaieri, F., & Vellino, A. (2016). Ethical decision making in robots: Autonomy, trust, and responsibility. In International conference on social robotics (pp. 159-168). Springer. https://doi.org/10.1007/978-3-319-47437-3_16
- The authors argue that in order to get people to trust autonomous robots, the ethical principles employed by these autonomous robots must be made transparent.
- Aroyo, A. M., et al. (2018). Trust and social engineering in human-robot interaction: Will a robot make you disclose sensitive information, conform to its recommendations or gamble? IEEE Robotics and Automation Letters, 3(4), 3701-3708. https://doi.org/10.1109/LRA.2018.2856272
- This research study examines how robots could be used for social engineering. The researchers found that people do build trust with robots, which can lead to the voluntary disclosure of private information.
- Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21-34. https://doi.org/10.1016/j.cognition.2018.08.003
- This article examines nine studies that suggest that humans do not want autonomous machines to make moral decisions. The authors argue that this aversion to machine moral decision-making will prove challenging to eliminate as designers seek to employ machines in medicine, law, transportation, and defence.
- Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
- This book is a comprehensive philosophical analysis of the future of AI, with a focus on the possibility of a superintelligence becoming the dominant life form on Earth. It includes arguments against the view that “robots cannot be harmed,” and is notable to this chapter on perspectives from cognitive science for its discussion of mind crimes against sentient, conscious machines.
- Bourgin, D. D., et al. (2019). Cognitive model priors for predicting human decisions. In Proceedings of the 36th International Conference on Machine Learning, 5133–5141. https://proceedings.mlr.press/v97/peterson19a.html
- This paper investigates robust, high-precision, predictive models of human decision-making with machine learning and argues that the difficulty of this problem is the limited scale and quality of datasets on human behavior. This paper then constructs “cognitive model priors” to mitigate this problem, by pretraining neural networks with synthetic data. This paper also contributed the first large-scale dataset for human decision-making.
- Broadbent, E. (2017). Interactions with robots: The truths we reveal about ourselves. Annual Review of Psychology, 68, 627-652. https://doi.org/10.1146/annurev-psych-010416-043958
- This article examines human-robot relations from the perspective of cognitive science. The author argues that there is a need to study human feelings towards robots, and that this study will reveal insights into human psychology, such as the human tendency to experience an uncanny feeling towards robotic machines.
- Darling, K. (2015). “Who’s Johnny?” Anthropomorphic framing in human-robot interaction, integration, and policy. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford University Press.
- This paper considers the benefits and drawbacks of anthropomorphized robots. It argues that in some cases, anthropomorphic framing is helpful as it increases the functionality of the technology. However, the paper argues that emotional relationships between humans and robots could make people vulnerable to emotional manipulation.
- Datteri, E. (2013). Predicting the long-term effects of human-robot interaction: A reflection on responsibility in medical robotics. Science and Engineering Ethics, 19(1), 139-160.
- This paper considers two existing robots: one named Da Vinci, which is used for medical surgery, and another named Lokomat, which is used for walking rehabilitation. The author claims that issues of responsibility for injury are mostly problems that can be overcome by better engineering and more training, raising questions about what kinds of harm thresholds can be tolerated as ethical dilemmas expand beyond assigning blame.
- de Graaf, M. M. A. (2016). An ethical evaluation of human–robot relationships. International Journal of Social Robotics, 8(4), 589-598. https://doi.org/10.1007/s12369-016-0368-5
- The author discusses the ethical considerations of human-robot relationships, considering if and how these relationships could contribute to the good life. She argues that research of human social interaction with robots is needed to flesh out ethical, societal, and legal perspectives, and to design and introduce responsible robots.
- de Graaf, M. M. A., et al. (2019). Why would I use this in my home? A model of domestic social robot acceptance. Human–Computer Interaction, 34(2), 115-173.
- This article presents a conceptual model of social robot acceptance tested among the general Dutch population using structural equation modeling. The authors indicate that the results demonstrate the strong role of normative beliefs that directly and indirectly affect the anticipated acceptance of social robots for domestic purposes and that, generally, people seem reluctant to accept social behaviors from robots.
- Fossa, F. (2018). Artificial moral agents: Moral mentors or sensible tools? Ethics and Information Technology, 20(2), 115-126. https://doi.org/10.1007/s10676-018-9451-y
- This paper analyzes how the concept of an artificial moral agent (AMA) impacts humans’ self-understanding as moral agents. The author contrasts the Continuity Approach with the Discontinuity Approach. The Continuity Approach holds that AMAs and humans should be considered homogeneous moral entities; the Discontinuity Approach holds that there is an essential difference between humans and AMAs. The author argues that the Discontinuity Approach better captures what AMAs are, how we should deal with the moral tensions they cause, and the difference between machine ethics and moral philosophy.
- Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411-437.
- This paper argues that moral beliefs vary across populations and therefore aligning AI values with human values would vary depending on context. It analyzes what alignment means in a deep sense and proposes ways fair principles could be achieved by considering existing moral frameworks such as the veil of ignorance and social choice theory.
- Gaudiello, I., et al. (2016). Trust as an indicator of robot functional and social acceptance. An experimental study on user conformation to iCub answers. Computers in Human Behavior, 61, 633-655. https://doi.org/10.1016/j.chb.2016.03.057
- The authors present an experiment in which 56 participants interacted with a robot called iCub, investigating whether trust in a robot’s functional abilities is a prerequisite for social acceptance, and to what extent social features, such as participants’ desire for control, affected trust in iCub. The study found that participants were more likely to agree with iCub’s answers in functional tasks than in social ones. The authors conclude that trust in functional ability is not a prerequisite for trust in social ability.
- Kahn, P. H., et al. (2006). What is a human? Toward psychological benchmarks in the field of human-robot interaction. In ROMAN 2006-The 15th IEEE International Symposium on Robot and Human Interactive Communication (pp. 364-371). Institute of Electrical and Electronics Engineers.
- This paper introduces benchmarks for capturing fundamental aspects of human life, with the goal of transferring these characteristics to robots. Some of the principles considered, such as moral accountability and reciprocity, can facilitate ethical behavior in AI systems.
- Kahn, P. H., et al. (2012). Do people hold a humanoid robot morally accountable for the harm it causes? In Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction (pp. 33-40).
- This article discusses the increasingly significant roles that robots will take in the social lives of humans and the potential corresponding harms. The authors study whether people hold robots morally accountable by having a humanoid robot talk to 40 undergraduate students and prevent the participants from winning a small prize after incorrectly assessing their performance. The results demonstrated that all the participants engaged socially with the robot and the majority of them attributed a level of moral accountability to the robot. They considered the robot less accountable than they would a human, but more accountable than a vending machine.
- Laidlaw, C., & Russell, S. (2021). Uncertain decisions facilitate better preference learning. Advances in Neural Information Processing Systems, 34, 15070–15083.
- This paper investigates human preference learning in the framework of inverse decision theory (IDT), in which humans make decisions under uncertainty. The authors show that, counterintuitively, it is easier to learn human preferences when the decision problem is more uncertain.
- Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute. https://doi.org/10.5281/zenodo.3240529
- This report suggests that the pace at which AI advances, as well as the difficulty in understanding increasingly complex intelligent agents, heightens the need for anticipating and creating response plans to address potentially harmful effects of this technology. It gives practical advice for the British public sector regarding the need for AI interpretability, evidence-based reasoning, and moral justifiability in promoting safe and ethical AI.
- Malle, B. F., et al. (2015).* Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In 10th ACM/IEEE International Conference on Human-Robot Interaction (pp. 117-124).
- The authors report survey experiments showing that people apply different moral norms to human and robot agents facing a sacrificial dilemma: participants found it more permissible for a robot than for a human to sacrifice one person for the good of many, and assigned blame to humans and robots differently depending on the choice the agent made.
- Malle, B. F. (2016). Integrating robot ethics and machine morality: The study and design of moral competence in robots. Ethics and Information Technology, 18(4), 243-256. https://doi.org/10.1007/s10676-015-9367-8
- This article examines the connection between robot ethics and machine morality, arguing that robots can be designed with moral characteristics similar to those of humans. Consequently, these robots can contribute to society as ethically competent humans do.
- Malle, B. F., & Scheutz, M. (2019). Learning how to behave. In O. Bendel (Ed.), Handbuch maschinenethik (pp. 255-278). Springer. https://doi.org/10.1007/978-3-658-17483-5_17
- The authors present a framework for developing robotic moral competence, composed of five features: two constituents (moral norms and moral vocabulary), and three activities (moral judgement, moral action, and moral communication).
- Malle, B. F., et al. (2019). AI in the sky: How people morally evaluate human and machine decisions in a lethal strike dilemma. In Robotics and well-being (pp. 111-133). Springer.
- This chapter discusses three survey studies that presented participants with an artificial intelligence agent, an autonomous drone, or a human drone pilot facing a moral dilemma in a military context. The dilemma provided was the choice to either launch a missile strike on a terrorist compound but risk the life of a child, or to cancel the strike to protect the child but risk a terrorist attack. People ascribed different patterns of blame to humans and machines as a function of the agent’s decision of how to solve the dilemma.
- Milli, S., et al. (2017, August). Should robots be obedient? In Proceedings of the 26th International Joint Conference on Artificial Intelligence (pp. 4754-4760).
- This paper argues that when humans are not perfectly rational, so that their literal orders may not reflect their underlying preferences, robots should not be strictly obedient. Instead, robots should infer and act according to the humans’ true preferences. The authors show that in this situation strict obedience hurts performance, and that the trade-off is influenced by how well the robot can infer the humans’ preferences.
- Moor, J. (2009).* Four kinds of ethical robots. Philosophy Now, 72(12), 12-14.
- The author argues that there are at least four distinct types of ethical robots. First, ethical impact agents, which perform actions that have ethical consequences regardless of the machine’s intention. Second, implicit ethical agents, which are designed to have built-in ethical actions. Third, explicit ethical agents, which can make ethical determinations themselves. Fourth, full ethical agents, which can make ethical determinations but also have features associated with human ethical agents, including consciousness, intentionality, and free will.
- Norman, D. (2014). Things that make us smart: Defending human attributes in the age of the machine. Diversion Books.
- This book discusses the complex interactions between humans and machines and how machines, as cognitive artefacts, can make humans smarter. However, they can also shape how humans think and influence what humans value. The author makes the case for human-centered design.
- Pino, M., et al. (2015). “Are we ready for robots that care for us?” Attitudes and opinions of older adults toward socially assistive robots. Frontiers in Aging Neuroscience, 7, 141.
- This article discusses how socially assistive robots may help improve care delivery at home for older adults with cognitive impairment and reduce the burden on caregivers. Questions regarding the robot and user characteristics, potential applications, feelings about technology, ethical issues, and barriers and facilitators for adoption are addressed. The authors note the importance of customizing robot appearance, services, and social capabilities to avoid barriers to adoption such as need and solution mismatch, usability factors, and lack of technological experience.
- Plonsky, O., et al. (2019). Predicting human decisions with behavioral theories and machine learning. arXiv:1904.06866.
- In this paper, the authors study the problem of predicting human choices, analyzing the literature and the results of an open tournament for choice prediction. Their key insight is that behavioral theories are most useful not as standalone predictors but as sources of features that machine learning systems can use to make better predictions of human decisions.
- Riek, L., & Howard, D. (2014). A code of ethics for the human-robot interaction profession. In Proceedings of We Robot 2014. https://robots.law.miami.edu/2014/wp-content/uploads/2014/03/a-code-of-ethics-for-the-human-robot-interaction-profession-riek-howard.pdf
- This article argues that the rights and protections present in human-to-human interaction should also exist for human-to-robot interaction. It outlines a set of principles, headed by a prime directive, that ensure human dignity, respect for human frailty, and predictability in robot behavior, while accounting for diverse robot morphologies.
- Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Penguin.
- This book is a comprehensive introduction to the problems of control and value alignment of AI. The author considers the perspective that it is not necessary to equip machines with “ethics” or “moral values,” but that AI should be an assistant to help humans achieve their goals, values, and preferences.
- Russell, S., et al. (2015). Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(4), 105-114.
- This paper outlines the many potential benefits of AI, while also making the reader aware of the dangers presented by this technology, whose bounds we still do not understand. It provides guidance on how to build safe and robust AI models.
- Sarathy, V., et al. (2017). Learning behavioral norms in uncertain and changing contexts. In 8th IEEE International Conference on Cognitive Infocommunications (pp. 301-306).
- This article addresses the problem of teaching norms to algorithms, taking into consideration that humans are often uncertain and vague about moral norms. Using deontic logic, Dempster-Shafer theory, and a machine learning algorithm that learns norms from uncertain human data, the authors demonstrate a novel capacity for AI systems to learn about morality, using context clues to provide nuance.
- Scheutz, M., & Malle, B. F. (2014).* “Think and do the right thing”—A plea for morally competent autonomous robots. In 2014 IEEE International Symposium on Ethics in Science, Technology and Engineering (pp. 1-4).
- The authors argue that it is vital to incorporate explicit ethical mechanisms that enable moral virtue in autonomous robots in light of their frequent use in ethically charged scenarios.
- Scheutz, M., et al. (2015). Towards morally sensitive action selection for autonomous social robots. In 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (pp. 492-497). https://doi.org/10.1109/ROMAN.2015.7333661
- The authors argue that autonomous social robots must be taught to anticipate norm violations and seek to prevent them. If such situations cannot be prevented in a given context, robots must be able to justify their actions. The authors present an action execution system as a potential solution to this problem.
- Scheutz, M. (2017). The case for explicit ethical agents. AI Magazine, 38(4), 57-64. https://doi.org/10.1609/aimag.v38i4.2746
- Scheutz presents his case for the development of what Moor calls explicit ethical agents. He argues that although machine ethics is a growing field, more work needs to be done to create cognitive architectures that can judge situations on moral grounds, for both humans and robots.
- Stange, S., & Kopp, S. (2020). Effects of a social robot’s self-explanations on how humans understand and evaluate its behavior. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 619-627). https://doi.org/10.1145/3319502.3374802
- This paper investigates whether a robot’s ability to explain its own behaviour affects user perception of that behaviour. The authors find that all types of explanation strategies increased understanding and acceptance of robot behaviour.
- Tavani, H. T. (2018). Can social robots qualify for moral consideration? Reframing the question about robot rights. Information, 9(4), 73. https://doi.org/10.3390/info9040073
- The author suggests that current debates on whether robots can have rights are limited because they do not explicitly define which robots would qualify and what specific rights are at stake. She suggests that the question of whether robots should have rights should be framed as asking whether some social robots qualify for moral consideration as moral patients. Tavani argues that they should.
- Torrance, S., & Chrisley, R. (2015).* Modelling consciousness-dependent expertise in machine medical moral agents. In P. van Rysewyk & M. Pontier (Eds.), Machine medical ethics (pp. 291-316). Springer.
- This article examines the limitations of current AI designs, stating that current models for medical AI systems fail to account for machine consciousness, thereby limiting their ethical functionality. The authors argue machine consciousness plays a vital role in moral decision-making, and thus it would be prudent for AI designers to think about consciousness when creating these machines.
- van Wynsberghe, A., & Robbins, S. (2019). Critiquing the reasons for making artificial moral agents. Science and Engineering Ethics, 25(3), 719-735. https://doi.org/10.1007/s11948-018-0030-8
- This article examines issues relating to the development of artificial moral agents (AMAs) and argues that ethicists have yet to provide good arguments for the development of such machines. The authors argue that the development of AMAs should not continue until such arguments are given.
- Vanderelst, D., & Winfield, A. (2018). An architecture for ethical robots inspired by the simulation theory of cognition. Cognitive Systems Research, 48, 56-66.
- This article puts forward a method to implement ethical behavior in robots inspired by the simulation theory of cognition. The authors argue that, unlike other existing frameworks, the proposed approach does not rely on the verification of logic statements, but instead uses internal simulations which allow the robot to simulate actions and predict the corresponding consequences in a form of robotic imagery. The authors demonstrate the results through a humanoid robot that behaved according to Asimov’s laws of robotics and demonstrate that their method enables the robot to prevent humans from coming to harm in simple test scenarios.
- Wiese, E., et al. (2017). Robots as intentional agents: Using neuroscientific methods to make robots appear more social. Frontiers in Psychology, 8, 1663.
- This article discusses the limitations of the ability of robots to interact with humans in an intuitive, social manner. The authors argue that the best way to achieve better interaction is to use a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion-tracking, eye-tracking, electroencephalography, and functional near-infrared spectroscopy embedded into interactive paradigms. This approach consists of understanding human interaction, collaboration, and connections over time. The authors put forward the argument that artificial agents can be better seen as social companions if they are designed in a way that activates the areas of the human brain that are involved in social-cognitive processing.
- Yampolskiy, R. V. (2013). Artificial intelligence safety engineering: Why machine ethics is a wrong approach. In V. C. Müller (Ed.), Philosophy and theory of artificial intelligence (pp. 389-396). Springer.
- The author argues that giving machines rights or allowing them to make ethical decisions should not be the top priority. Instead, the scientific community should focus on formally verifiable systems that are demonstrably safe in the presence of self-improvement because AI is a dynamic technology.
- Yu, H., et al. (2018). Building ethics into artificial intelligence. In J. Lang (Ed.), Proceedings of the 27th International Joint Conference on Artificial Intelligence (pp. 5527-5533). AAAI Press.
- This article conducts a thorough analysis of existing discussions about ethical decision-making by AI. Four main topics are investigated, including ethical dilemmas such as trolley problems involving autonomous vehicles and cases where AI can influence human behavior and potentially decrease autonomy. This analysis paves the way for a discussion about how to integrate AI systems into society.
- Ziemke, T. (2008). On the role of emotion in biological and robotic autonomy. BioSystems, 91(2), 401-408. https://doi.org/10.1016/j.biosystems.2007.05.015
- This article discusses the difference between the autonomy of biological beings and the autonomy of robots from the perspective of cognitive science. The author argues that, in the narrow sense of constitutive biological autonomy, robots are not autonomous, nor will they become autonomous any time soon.
Chapter 25. Integrating Ethical Values and Economic Value to Steer Progress in Artificial Intelligence (Anton Korinek)
- Acemoglu, D., & Restrepo, P. (2019).* The wrong kind of AI? Artificial intelligence and the future of labor demand (NBER Working Paper 25682). National Bureau of Economic Research. https://www.nber.org/papers/w25682
- This paper argues that recent technological change has been biased towards automation, with insufficient focus on creating new tasks where labor can be productively employed. The authors suggest that the consequences of this choice have been stagnating labor demand, declining labor share in national income, rising inequality, and lower productivity growth. The authors argue that the current tendency to develop AI in the direction of further automation could lead to missing out on the promise of the “right” kind of AI with better economic and social outcomes.
- Adomavicius, G., et al. (2018). Effects of online recommendations on consumers’ willingness to pay. Information Systems Research, 29(1), 84-102.
- The authors investigate how recommendation systems can affect consumer judgments. They show that the ratings displayed by a recommendation system change consumers’ willingness to pay for songs. They test this hypothesis in three different settings, demonstrating the influence of these systems on consumer decision-making.
- Agrawal, A., et al. (2019). Economic policy for artificial intelligence. Innovation Policy and the Economy, 19(1), 139-159.
- The authors argue that policy will influence the impact of artificial intelligence on society in two key dimensions: diffusion and consequences. First, in addition to subsidies and intellectual property (IP) policy that will influence the diffusion of AI in ways similar to their effect on other technologies, the article presents three policy categories—privacy, trade, and liability—as uniquely salient in their influence on the diffusion patterns of AI. Second, the authors suggest labor and antitrust policies will influence the consequences of AI in terms of employment, inequality, and competition.
- Astobiza, A. M., et al. (2021). AI ethics for sustainable development goals. IEEE Technology and Society Magazine, 40(2), 66–71. https://doi.org/10.1109/MTS.2021.3056294
- The authors consider how AI development can further the 2030 Sustainable Development Goals (SDGs), which are based on interdependent ecological, social, and economic dimensions. They propose that AI projects employ “professional philosopher[s]” or ethicists to support the advancement of SDGs within a project. They also advocate global governance of AI in the context of SDGs, with this governance led by supranational bodies such as the United Nations.
- Autor, D. H., et al. (2003). The skill content of recent technological change: An empirical exploration. The Quarterly Journal of Economics, 118(4), 1279-1333. https://doi.org/10.1162/003355303322552801
- The authors perform an empirical measurement of how the rapid adoption of computers in the workplace impacted labor between 1960 and 1998. They argue that human performance of analytic routine tasks, such as calculation, and manual routine tasks, such as part assembly, can be significantly substituted by computers. Computers also strongly complement the human performance of nonroutine analytic tasks, such as medical diagnosis. The authors use econometric models to demonstrate that substitution and complementarity have driven changes in labor demand as computer capital became more affordable.
- Autor, D. (2015).* Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3–30.
- This article argues that the polarization of the labor market is unlikely to continue very far into the future, reflecting on how recent and future advances in artificial intelligence and robotics should shape our thinking about the likely trajectory of occupational change and employment growth. The author argues that the interplay between machine and human comparative advantage allows computers to substitute for workers in performing routine, codifiable tasks while amplifying the comparative advantage of workers in supplying problem-solving skills, adaptability, and creativity.
- Bolton, C., et al. (2018). The power of human-machine collaboration: Artificial intelligence, business automation, and the smart economy. Economics, Management, and Financial Markets, 13(4), 51-56.
- This article reviews and advances existing literature concerning the power of human–machine collaboration. Using and replicating data from Accenture, BBC, CellStrat, eMarketer, Frontier Economics, MIT Research Report, Morar Consulting, PwC, and Squiz, the authors perform analyses and make estimates regarding the impact of artificial intelligence (AI) on industry growth, including real annual GVA growth by 2035, estimated net job creation by industry sector (2017–2037), the reasons global companies give for AI adoption, and the leading advantages of AI for international organizations.
- Bostrom, N. (2014).* Superintelligence: Paths, dangers, strategies. Oxford University Press.
- This book argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. The author argues that sufficiently intelligent machines could improve their own capabilities faster than human computer scientists, and the outcome could be an existential catastrophe for humans.
- Brynjolfsson, E., & McAfee, A. (2015).* The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W. W. Norton.
- This book identifies the best strategies for survival and offers a new path to prosperity in the midst of unprecedented technological and economic change. The authors’ suggestions include revamping education so that it prepares people for the next economy instead of the last one, designing new collaborations that pair brute processing power with human ingenuity, and embracing policies that make sense in a radically transformed landscape.
- Brynjolfsson, E., et al. (2019). Does machine translation affect international trade? Evidence from a large digital platform. Management Science, 65(12), 5449-5460.
- Using data from a digital platform, the authors study machine translation and find that the introduction of a new machine translation system has significantly increased international trade on this platform, increasing exports by 10.9%. Furthermore, their study found that heterogeneous treatment effects are consistent with a substantial reduction in translation costs. The authors argue that the results of this study provide causal evidence that language barriers significantly hinder trade and that AI has already begun to improve economic efficiency in at least one domain.
- Ernst, E., et al. (2019). Economics of artificial intelligence: Implications for the future of work. IZA Journal of Labor Policy, 9(1), 7-72.
- This paper discusses the rationales for fears of widespread job loss due to artificial intelligence, comparing this technology to previous waves of automation. The authors argue that large opportunities in terms of increases in productivity can ensue, including for developing countries, given the vastly reduced costs of capital that some applications have demonstrated and the potential for productivity increases, especially among the low-skilled. To address the risk of increasing inequality, the authors call for new forms of regulation for the digital economy.
- Floridi, L. (2016). Should we be afraid of AI? Aeon Essays. https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible
- The author argues against fears that AI will achieve superintelligence. The author claims that AI is unable to generalize beyond mundane and trivial tasks, and points to its continued inability to pass simple Turing Tests as evidence against the AI singularity. They further caution that placing too much emphasis on superintelligence distracts from concrete social issues both exacerbated and alleviated by AI, such as economic inequality.
- Frey, C. B. (2019). The technology trap: Capital, labor, and power in the age of automation. Princeton University Press.
- From the Industrial Revolution to the age of artificial intelligence, this book examines the history of technological progress and how it has radically shifted the distribution of economic and political power among society’s members. Just as the Industrial Revolution eventually brought about extraordinary benefits for society, the author argues that artificial intelligence systems have the potential to do the same.
- Giroux, M., et al. (2022). Artificial intelligence and declined guilt: Retailing morality comparison between human and AI. Journal of Business Ethics. https://doi.org/10.1007/s10551-022-05056-7
- The authors study consumers’ moral behavior in retail purchases when interacting with a machine compared to a human. Their study finds that people feel less guilt and express weaker moral intentions when interacting with machines.
- Grace, K., et al. (2018). When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research, 62, 729-754. https://doi.org/10.1613/jair.1.11222
- This article conducts a large-scale survey of AI and machine-learning experts to develop estimates of when AI development will reach key milestones, such as the replacement of humans in jobs demanding higher levels of skill and expertise. The authors find that researchers believe AI will be capable of writing bestselling books and working as surgeons by 2053, and even potentially automating all human jobs within 120 years. However, individual estimates vary substantially, and few experts believe that superintelligence will be achieved in the near future.
- Graetz, G., & Michaels, G. (2018). Robots at work. Review of Economics and Statistics, 100(5), 753-768. https://doi.org/10.1162/rest_a_00754
- In this study, the authors investigate the economic implications of the widespread adoption of industrial robots by analyzing data from 17 developed countries between 1993 and 2007. They find that the increased use of robotics accounts for 15% of the productivity growth in these economies over the time period. Furthermore, there is evidence that robot densification is associated with higher average wages and no significant changes in working hours on aggregate. However, when separately analyzing workers of different skill levels, the negative impact on low-skilled workers was offset by the gains received by medium- and high-skilled workers.
- Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Eamon Dolan Books.
- This book explores the origins and ramifications of the “ghost work” employed by Big Tech corporations. To support the operation of their vast online platforms and services, these corporations use a hidden labor force to perform crowdsourced microtasks such as data labeling, content moderation, and service fine-tuning. Employment through ghost work, the authors argue, arises paradoxically out of the development of AI-based automation that otherwise threatens traditional labor. In turn, growing concerns about this new underclass of workers need to be addressed, such as accountability, trust, and insufficient regulation of on-demand work.
- Hermann, E. (2021). Leveraging artificial intelligence in marketing for social good—An ethical perspective. Journal of Business Ethics. https://doi.org/10.1007/s10551-021-04843-y
- This paper reviews principles introduced in the literature on AI ethics and on AI in marketing, and analyzes the connections among these principles in the marketing context. Based on this analysis, the author also discusses how AI can be used in marketing in a way that respects social well-being.
- Korinek, A. (2019).* The rise of artificially intelligent agents. University of Virginia.
- This paper develops an economic framework that describes humans and Artificially Intelligent Agents (AIAs) symmetrically as goal-oriented entities that each (i) absorb scarce resources, (ii) supply their factor services to the economy, (iii) exhibit defined behavior, and (iv) are subject to specified laws of motion. After introducing a resource allocation frontier that captures the distribution of resources between humans and machines, the author describes several mechanisms that may provide AIAs with autonomous control over resources, both within and outside of our human system of property rights. The author argues that in the limiting case of an AIA-only economy, AIAs both produce and absorb large quantities of output without any role for humans, rejecting the fallacy that human demand is necessary to support economic activity.
- Korinek, A., & Stiglitz, J. (2019).* Artificial intelligence and its implications for income distribution and unemployment. In A. Agrawal, J. Gans, & A. Goldfarb (Eds.), The economics of artificial intelligence, (pp. 349–390). NBER and University of Chicago Press.
- This paper provides an overview of economic issues associated with artificial intelligence by discussing the general conditions under which these technologies may lead to a Pareto improvement, delineating the two main channels through which inequality is affected, and providing several simple economic models to describe how policy can counter these effects. Finally, the authors describe the two main channels through which technological progress may lead to technological unemployment and speculate on how technologies to create super-human levels of intelligence may affect inequality.
- Korinek, A., & Stiglitz, J. E. (2021). Covid-19 driven advances in automation and artificial intelligence risk exacerbating economic inequality. BMJ, 372, n367. https://doi.org/10.1136/bmj.n367
- This paper argues for the need for decision-makers to ensure that technological choices reflect human or ethical values, filling the gaps between human and market values. The authors suggest that the COVID-19 pandemic has increased the economic cost of physical contact between humans, thus accelerating advancements in and adoption of AI, particularly in healthcare. The authors highlight that automation will increase economic inequality, and propose that AI be developed to preserve and complement human roles of all educational levels, such as by acting as decision supports.
- Lauer, D. (2021). Facebook’s ethical failures are not accidental; they are part of the business model. AI and Ethics, 1(4), 395–403. https://doi.org/10.1007/s43681-021-00068-x
- This opinion paper argues that the negative social impacts of Facebook stem from its core business model. The author believes Facebook’s algorithms are designed to maximize user engagement for the company’s financial profit, and that the network’s detrimental effects are side effects of this objective. The author challenges the company’s defenses of its efforts to address these issues, arguing that the efforts are not pursued seriously because fixing the problems would reduce user engagement and thus revenue.
- Makridakis, S. (2017). The forthcoming artificial intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46-60. https://doi.org/10.1016/j.futures.2017.03.006
- This article describes the rapid advances made in the development of AI technology and draws parallels to the industrial and digital revolutions over the preceding two centuries. The author analyzes potential outcomes characterized by four viewpoints of AI research: optimism, pessimism, pragmatism, and skepticism. Based on these comparisons, the author provides predictions for whether individual Big Tech firms will succeed, and on how the labor and economic landscape will be changed by increasing automation.
- Naidu, S., et al. (2019).* Economics for inclusive prosperity: An introduction. Economists for Inclusive Prosperity. http://www.econfip.org
- This article argues that political institutions in the United States favor higher-income individuals over lower-income individuals and ethnic majorities over ethnic minorities. The authors describe how this is accomplished through a myriad of policies that affect who votes, allow differential influence and access for the wealthy, structure voting districts to dilute the impact of under-represented voters, and permit the outsized influence of pro-business ideas through media and membership organizations.
- Raghavan, M., et al. (2020). Mitigating bias in algorithmic hiring: Evaluating claims and practices. In M. Hildebrandt & C. Castillo (Eds.), Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, (pp. 469-481). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372828
- In response to the increasing amount of public scrutiny on algorithmic hiring in the private sector, this paper conducts a qualitative survey of vendors providing AI-enhanced solutions for employee assessment. It takes note of the features analyzed by the vendors, how the vendors claim to have validated their results, and whether fairness is considered. The authors conclude with policy and technical recommendations for ensuring more effective, appropriate, and fair algorithmic hiring practices.
- Sen, A. (1987).* On ethics and economics. Blackwell Publishing.
- This book argues that welfare economics can be enriched by paying more explicit attention to ethics, and that modern ethical studies can also benefit from closer contact with economics. The author further argues that even predictive and descriptive economics can be helped by making more room for welfare-economic considerations in the explanation of behavior.
- Sunstein, C. R. (2015). The ethics of nudging. Yale Journal on Regulation, 32(2), 413-450. https://digitalcommons.law.yale.edu/yjreg/vol32/iss2/6
- The author defends behavioral nudge theory against criticism based on its apparent threats to human agency. The author argues first that nudges are inevitable and cannot be avoided, and second, that they are highly context-sensitive and cannot be considered universally unethical. Arguing they are a form of “libertarian paternalism,” the author claims that nudges promote better welfare, for example by guiding people towards healthier life choices. They suggest that nudges also preserve autonomy by enabling people to make better, informed decisions without explicitly constraining the decision-making process.
- Tegmark, M. (2017).* Life 3.0: Being human in the age of artificial intelligence. Knopf.
- This book discusses Artificial Intelligence (AI) and its impact on the future of life on Earth and beyond. The author discusses a variety of societal implications, what can be done to maximize the chances of a positive outcome, and potential futures for humanity, technology, and combinations thereof.
- Walton, N., & Nayak, B. S. (2021). Rethinking of Marxist perspectives on big data, artificial intelligence (AI) and capitalist economic development. Technological Forecasting and Social Change, 166. https://doi.org/10.1016/j.techfore.2021.120576
- This article considers the rise of AI within a Marxist framework of economic value, arguing that AI is a tool of capitalism but that AI simultaneously poses serious challenges to existing Marxist theory on concepts of labor, value, property, and product relations, in particular to the labor theory of value. The authors propose policies that regulate the use of AI and big data to protect labor and combat labor precarity, enhance social welfare, reduce risk, and advance human development.
- Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
- The author argues that the growing financial dominance of Big Tech encompasses a new form of capitalism founded on surveillance. While industrial capitalism focuses on the exploitation of human labor and natural resources, “surveillance capitalism” benefits from the monetization of behavioral data. This data is captured, analyzed, and optimized in an “instrumentarian” fashion for profit using a global, computational infrastructure. The author develops their argument through a historical analysis of the use of this infrastructure, or the “Big Other”, by both government agencies and Silicon Valley giants like Google and Facebook. The author argues that surveillance capitalism poses a fundamental threat to democratic values and institutions.
Chapter 26. Fairness Criteria through the Lens of Directed Acyclic Graphs: A Statistical Modeling Perspective (Benjamin R. Baer, Daniel E. Gilbert, and Martin T. Wells)
- Angwin, J., et al. (2016).* Machine bias: There’s software used across the country to predict future criminals: And it’s biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
- This investigation by ProPublica revealed that risk scores for reoffending created by artificial intelligence algorithms and used in bail decisions in the United States are often unreliable and inaccurate. The investigation further found that these scores disproportionately rate Black Americans as higher risk, alleging that the algorithms used to produce the scores are racially biased.
- Baeza-Yates, R., & Goel, S. (2019). Designing equitable algorithms for the web. In Companion Proceedings of The 2019 World Wide Web Conference (p. 1296).
- This paper provides an introduction to fair machine learning, beginning with a general overview of algorithmic fairness and then discussing these issues specifically in the context of the Web. To illustrate the complications of current definitions of fairness, the article draws on a variety of classical and modern ideas from statistics, economics, and legal theory. The authors expose different sources of bias and how they affect fairness, including data bias, biases produced by data sampling, the algorithms per se, user interaction, and feedback loops that result from user personalization and content creation.
- Bareinboim, E., et al. (2014). Recovering from selection bias in causal and statistical inference. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (pp. 2410-2416).
- This paper provides complete graphical and algorithmic conditions for recovering conditional probabilities from selection biased data. The paper also provides graphical conditions for recoverability when unbiased data is available over a subset of the variables. Finally, the paper provides a graphical condition that generalizes the backdoor criterion and serves to recover causal effects when the data is collected under preferential selection.
- Barocas, S., et al. (2018).* Fairness and machine learning. http://www.fairmlbook.org
- This online textbook reviews the practice of machine learning, highlighting ethical challenges and presenting approaches to mitigate them. Specifically, the book focuses on the issue of fairness considering both technical interventions and deeper questions concerning power and accountability in machine learning.
- Bellamy, R. K., et al. (2019). AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. IBM Journal of Research and Development, 63(4/5), 1-15. https://doi.org/10.1147/JRD.2019.2942287
- This paper introduces an open-source Python toolkit for algorithmic fairness, AI Fairness 360 (AIF360), released under an Apache v2.0 license. The main objectives of this toolkit are to help facilitate the transition of fairness research algorithms to use in an industrial setting and to provide a common framework for fairness researchers to share and evaluate algorithms.
- Chouldechova, A. (2017).* Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047
- This paper discusses a fairness criterion originating in the field of educational and psychological testing that has recently been applied to assess the fairness of recidivism prediction instruments. The authors demonstrate how adherence to the criterion may lead to considerable disparate impact when recidivism prevalence differs across groups.
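The tension described above can be illustrated with a toy calculation; this is a minimal sketch in which the group base rates, flag rates, and predictive values are invented for illustration and are not data from the paper:

```python
# Toy illustration (invented numbers): a score with the same positive
# predictive value (PPV) in two groups can still yield different false
# positive rates when the groups' base rates differ.

def error_rates(prevalence, flag_rate, ppv):
    """Derive a group's false positive and false negative rates from its
    base rate, the fraction flagged high-risk, and the PPV of the flag."""
    true_positives = flag_rate * ppv
    false_positives = flag_rate * (1 - ppv)
    fpr = false_positives / (1 - prevalence)       # FP / actual negatives
    fnr = (prevalence - true_positives) / prevalence  # FN / actual positives
    return fpr, fnr

# Both groups see the same PPV of 0.6, but group B has a higher base rate
# (and is therefore flagged high-risk more often).
fpr_a, fnr_a = error_rates(prevalence=0.3, flag_rate=0.3, ppv=0.6)
fpr_b, fnr_b = error_rates(prevalence=0.5, flag_rate=0.5, ppv=0.6)

print(fpr_a, fpr_b)  # the false positive rates diverge
```

Here group B’s false positive rate (0.4) exceeds group A’s (about 0.17) even though the score is equally predictive in both groups, which is the form of disparate impact the paper analyzes.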
- Corbett-Davies, S., et al. (2017). Algorithmic decision-making and the cost of fairness. In S. Matwin, S. Yu, & F. Farooq (Eds.), Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 797–806). Association for Computing Machinery. https://doi.org/10.1145/3097983.3098095
- This paper frames algorithmic fairness as a constrained optimization problem: maximizing model utility while satisfying a formal fairness criterion. The authors focus on algorithmic decision-making in pretrial release determinations. They show that the optimal unconstrained model treats all defendants equally and compare this to optimal models constrained by statistical parity, predictive parity, and conditional statistical parity, discussing the trade-off in model utility under each constraint. The paper examines data from Broward County, Florida, and discusses the practical tension between optimizing for public safety, which yields models with significant racial disparities, and optimizing for fairness, which means releasing higher-risk defendants.
- Corbett-Davies, S., & Goel, S. (2018).* The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv:1808.00023
- This paper argues that three prominent definitions of fairness used in machine learning (anti-classification, classification parity, and calibration) each suffer from significant statistical issues. In contrast to these strategies, the authors argue that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce.
- Dwork, C., et al. (2012). Fairness through awareness. In S. Goldwasser (Ed.), Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (pp. 214–226). Association for Computing Machinery. https://doi.org/10.1145/2090236.2090255
- This paper studies fairness in classification and discusses the goal of preventing classifier discrimination against individuals based on membership in a sensitive group while maintaining classifier utility. The framework proposes a metric for individual similarity under a classification task. The paper presents a learning algorithm for maximizing classifier utility under various fairness constraints. The authors adapt this algorithm to a fairness model that guarantees statistical parity. They relate their proposed fairness framework to tools developed for differential privacy.
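The framework’s core Lipschitz-style condition, that similar individuals should receive similar outcomes, can be sketched in a few lines; the metric and classifiers below are toy stand-ins, not the paper’s constructions:

```python
# Toy sketch (invented metric and classifiers) of the individual-fairness
# condition: for every pair of individuals x, y, the difference in their
# outcome probabilities must not exceed their task-specific distance d(x, y).

def is_individually_fair(classifier, metric, individuals):
    """Check |f(x) - f(y)| <= d(x, y) over all pairs, reading f's output
    as the probability of a positive outcome."""
    return all(
        abs(classifier(x) - classifier(y)) <= metric(x, y)
        for x in individuals for y in individuals
    )

# One-dimensional toy setting: each individual is a single score in [0, 1].
metric = lambda x, y: abs(x - y)               # assumed similarity metric
smooth = lambda x: 0.5 * x                     # 0.5-Lipschitz, so it passes
cutoff = lambda x: 1.0 if x > 0.5 else 0.0     # hard threshold, which fails

people = [0.1, 0.4, 0.5, 0.6, 0.9]
print(is_individually_fair(smooth, metric, people))  # True
print(is_individually_fair(cutoff, metric, people))  # False
```

The hard cutoff fails because the individuals scoring 0.5 and 0.6 are close under the metric yet receive opposite outcomes, exactly the kind of treatment the framework rules out.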
- Fazelpour, S., & Lipton, Z. C. (2020). Algorithmic fairness from a non-ideal perspective. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 57-63. Association for Computing Machinery. https://doi.org/10.48550/arXiv.2001.09773
- This paper draws a connection between current approaches to fairness in the machine learning community and the ideal and non-ideal modes of theorizing justice developed in political philosophy. The authors argue that the fair machine learning literature mostly adopts the ideal methodology and thereby inherits its known problems. Through this perspective, they discuss several challenges of fair machine learning research, including the impossibility results.
- Flores, A. W., et al. (2016).* False positives, false negatives, and false analyses: A rejoinder to “Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks”. Federal Probation, 80(2), 38-46.
- This article argues that a ProPublica report exposing racial bias in COMPAS, a risk assessment tool used in the criminal justice system, was based on faulty statistics and data analysis. The authors provide their own analysis of the data used in the ProPublica piece to argue that the COMPAS tool is not racially biased.
- Glymour, B., & Herington, J. (2019). Measuring the biases that matter: The ethical and causal foundations for measures of fairness in algorithms. In Proceedings of the Conference on Fairness, Accountability, and Transparency, 269-278. Association for Computing Machinery. https://doi.org/10.1145/3287560.3287573
- This paper categorizes and describes the fairness measures in the literature by considering their underlying causal structures. The possible structures are enumerated and modeled with causal graphical models over sensitive and insensitive features, the algorithm’s prediction of behavior, true behavior, and possibly unmeasured variables. The paper argues that there are conflicting biases that cannot be jointly minimized; specifically, there is a trade-off between core-relative error bias and procedural bias, and the authors suggest non-condition-relative measures as an alternative.
- Hardt, M., et al. (2016).* Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems 29 (pp. 3315–3323).
- This article proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, this paper shows how to optimally adjust any learned predictor to remove discrimination according to the authors’ definition. The authors argue that this framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision-maker, who can respond by improving the classification accuracy.
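The post-processing idea can be sketched as follows; the scores, labels, and thresholds are invented for illustration and do not reproduce the paper’s optimal derivation:

```python
# Toy sketch (invented data): "equality of opportunity" asks that the true
# positive rate be equal across groups; adjusting a learned predictor with
# group-specific thresholds is one way to achieve it after training.

def true_positive_rate(scores, labels, threshold):
    """Fraction of actual positives the predictor flags at this threshold."""
    positives = [s for s, y in zip(scores, labels) if y == 1]
    return sum(s >= threshold for s in positives) / len(positives)

group_a = ([0.9, 0.8, 0.4, 0.2], [1, 1, 1, 0])
group_b = ([0.7, 0.6, 0.3, 0.1], [1, 1, 1, 0])

# A single global threshold yields unequal opportunity...
print(true_positive_rate(*group_a, threshold=0.75))  # 2/3 of group A's positives
print(true_positive_rate(*group_b, threshold=0.75))  # none of group B's

# ...while a group-specific threshold for group B restores equality.
print(true_positive_rate(*group_b, threshold=0.55))  # back to 2/3
```

The thresholds here were chosen by hand for the toy data; the paper shows how to derive such adjustments optimally from the learned predictor.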
- Herington, J. (2020). Measuring fairness in an unfair world. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 286-292).
- This paper argues that the three most popular families of measures – unconditional independence, target-conditional independence, and classification-conditional independence – make assumptions that are unsustainable in the context of an unjust world. The paper argues that implicit idealizations in these measures fall apart in the context of historical injustice, ongoing unmodeled oppression, and the permissibility of using sensitive attributes to rectify injustice. The paper puts forward an alternative framework for measuring fairness in the context of existing injustice: distributive fairness.
- Joseph, M., et al. (2016). Fairness in learning: Classic and contextual bandits. In Advances in Neural Information Processing Systems, 29. Neural Information Processing Systems. https://doi.org/10.48550/arXiv.1605.07139
- This paper tackles the problem of fair decision-making in classic and contextual bandits. In this setting, fairness requires that an arm with a smaller expected reward is never preferred over an arm with a larger expected reward at any point during learning. The authors provide an optimal algorithm for the classic bandit case and a polynomial-time algorithm for the contextual bandit case.
- Kilbertus, N., et al. (2017).* Avoiding discrimination through causal reasoning. In Advances in Neural Information Processing Systems, 30, 656–666.
- Going beyond observational criteria, this article frames the problem of discrimination based on protected attributes in the language of causal reasoning. Through the lens of causality, this article articulates why and when observational criteria fail, exposes previously ignored subtleties and why they are fundamental to the problem, puts forward natural causal non-discrimination criteria, and develops algorithms that satisfy them.
- Kilbertus, N., et al. (2020). Fair decisions despite imperfect predictions. In International Conference on Artificial Intelligence and Statistics, 108, 277-287. PMLR. https://doi.org/10.48550/arXiv.1902.02979
- This paper tackles the challenge of fair decision-making with selective labels, which arises when the inclusion of cases in the dataset depends on the decisions made about them. The authors show that deterministic decision-making based on predictions is suboptimal in this setting and therefore suggest learning “how to decide” instead of “how to predict.”
- Kleinberg, J., et al. (2017). Inherent trade-offs in the fair determination of risk scores. In C. H. Papadimitriou (Ed.), Proceedings of the 8th Innovations in Theoretical Computer Science Conference (ITCS 2017) (pp. 1-23). Schloss Dagstuhl. https://doi.org/10.4230/LIPIcs.ITCS.2017.43
- This paper formalizes three fairness conditions that lie at the heart of recent debates and argues that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. The paper’s results suggest some of the ways key notions of fairness are incompatible with each other and provide a framework for thinking about the trade-offs between them.
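The flavor of such an impossibility can be conveyed with a toy example (the base rates, threshold, and group names below are invented, not drawn from the paper): a predictor that is trivially calibrated within each group still yields unequal error rates when the groups' base rates differ.

```python
# Toy illustration: calibration vs. equal error rates under unequal base rates.
# All numbers are invented for this sketch.

base_rate = {"a": 0.5, "b": 0.2}   # fraction of true positives per group
n = 1000                           # people per group

# A trivially calibrated predictor: score everyone in a group with that
# group's base rate, then threshold the score to make a binary decision.
threshold = 0.3
decision = {g: base_rate[g] >= threshold for g in base_rate}

# False-positive rate per group: negatives wrongly flagged positive.
fpr = {}
for g, rate in base_rate.items():
    negatives = n * (1 - rate)
    flagged = negatives if decision[g] else 0.0
    fpr[g] = flagged / negatives

print(fpr)  # calibrated within each group, yet error rates diverge sharply
```

The paper's theorem is far more general, but the sketch shows why satisfying calibration and balanced error rates simultaneously forces highly constrained special cases.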
- Kleinberg, J., et al. (2018). Algorithmic fairness. In AEA Papers and Proceedings, 108, 22-27. American Economic Association. https://doi.org/10.1257/pandp.20181018
- This article studies how fair decisions can be made using a dataset that includes sensitive demographic attributes such as race. In the setting considered, a predictor function learned from the dataset is used to make binary decisions. The authors show that equity considerations should change not the learned function itself but how its output is used for decision-making; accordingly, they argue that the sensitive feature should be included in the learning process.
- Kusner, M. J., et al. (2017). Counterfactual fairness. In I. Guyon & U. V. Luxburg (Eds.), Proceedings of the 31st International Conference on Neural Information Processing Systems (pp. 4069-4079). https://papers.nips.cc/paper/2017/hash/a486cd07e4ac3d270571622f4f316ec5-Abstract.html
- This paper formulates a fairness criterion using notation and methods from causal inference. The authors’ framework considers the model outcome for each individual, which may be correlated with sensitive attributes, and deems a model counterfactually fair if no individual’s outcome is caused by their sensitive attribute. The Total Effect criterion in Pearl’s causal-inference notation is a special case of the proposed approach.
- Liu, L. T., et al. (2018). Delayed impact of fair machine learning. In J. Dy & A. Krause (Eds.), Proceedings of the 35th International Conference on Machine Learning (pp. 3150-3158). PMLR.
- This article presents a study of how static fairness criteria interact with temporal indicators of well-being, such as long-term improvement, stagnation, and decline in a variable of interest. The results highlight the importance of measurement and temporal modeling in the evaluation of fairness criteria, suggesting a range of new challenges and trade-offs.
- Mehrabi, N., et al. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1-35. https://doi.org/10.1145/3457607
- Acknowledging the widespread proliferation of artificial intelligence (AI) systems, this survey identifies different sources of bias within the design and engineering of such systems. In doing so, the authors create a taxonomy of fairness definitions drawn from the broader machine learning literature, spanning various domains and subdomains. Specific examples of unfair outcomes are cataloged alongside current solutions and future directions attempting to mitigate the biases of AI systems.
- Mhasawade, V., et al. (2021). Machine learning and algorithmic fairness in public and population health. Nature Machine Intelligence, 3(8), 659-666. https://doi.org/10.1038/s42256-021-00373-4
- This article identifies potential opportunities for machine learning to be leveraged within a more holistic appraisal of healthcare, moving beyond the context of the hospital or clinic. With a general focus on integrating social determinants of health within the machine learning subfield of algorithmic fairness, the authors highlight opportunities for technology and data to be leveraged to achieve public health equity. Current challenges to achieving these goals, ranging from data privacy to validity, are acknowledged.
- Mitchell, S., et al. (2021). Prediction-based decisions and fairness: A catalogue of choices, assumptions, and definitions. Annual Review of Statistics and Its Application, 8, 141-163. https://doi.org/10.1146/annurev-statistics-042720-125902
- This paper explicates the various choices and assumptions made—often implicitly—to justify the use of prediction-based decisions. The paper demonstrates how such choices and assumptions can raise concerns about fairness and presents a notationally consistent catalog of fairness definitions from the ML literature. The paper offers a concise reference for thinking through the choices, assumptions, and fairness considerations of prediction-based decision systems.
- Ntoutsi, E., et al. (2020). Bias in data-driven artificial intelligence systems – An introductory survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(3). https://doi.org/10.1002/widm.1356
- This article advocates for the necessity of moving beyond traditional algorithms optimized for predictive performance and towards embedding ethical and legal principles in their design. With a focus on artificial intelligence systems driven by Big Data, this survey provides a multidisciplinary overview of the technical challenge of developing unbiased AI and of how inattentiveness can yield prejudiced decision-making based on demographic attributes. The authors also chart new research directions predicated on a legal framework.
- Pearl, J. (1993). Graphical models, causality, and intervention. Statistical Science, 8, 266–269.
- This paper provides an early connection between Directed Acyclic Graphical models and causality. The paper gives a bias free estimation of causal effects and introduces the back-door criterion for reasoning about the confounding relationships in graphical models.
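The back-door adjustment introduced in this paper can be sketched on a discrete toy model (all variable names and probabilities below are invented for illustration): for a confounder Z satisfying the back-door criterion relative to (X, Y), the interventional distribution is P(Y=1 | do(X=x)) = Σ_z P(Y=1 | X=x, Z=z) P(Z=z).

```python
# Minimal sketch of back-door adjustment on an invented discrete model.

p_z = {0: 0.6, 1: 0.4}              # marginal distribution of confounder Z
p_y_given_xz = {                    # P(Y=1 | X=x, Z=z), invented numbers
    (0, 0): 0.10, (0, 1): 0.30,
    (1, 0): 0.40, (1, 1): 0.70,
}

def p_y_do_x(x):
    """Interventional probability P(Y=1 | do(X=x)) via back-door adjustment."""
    return sum(p_y_given_xz[(x, z)] * p_z[z] for z in p_z)

effect = p_y_do_x(1) - p_y_do_x(0)  # average causal effect of X on Y
print(p_y_do_x(0), p_y_do_x(1), effect)
```

Simply conditioning on X in the observational data would mix in the confounding path through Z; averaging over the marginal of Z instead yields the bias-free causal estimate the paper describes.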
- Pearl, J. (2000). Causality: Models, reasoning, and inference. Cambridge University Press.
- This book details a framework for reasoning about causal models. This work consolidates various theoretical results into a rigorous mathematical treatment, providing the foundation for later developments in the field of causal reasoning.
- Pleiss, G., et al. (2017). On fairness and calibration. In Advances in Neural Information Processing Systems 30 (pp. 5680–5689).
- This paper investigates the tension between minimizing error disparity across different population groups and maintaining calibrated probability estimates. The authors show that calibration is compatible with only a single error constraint (i.e., equal false-negative rates across groups), and that any algorithm satisfying this relaxation does no better than randomizing a percentage of an existing classifier’s predictions. These findings, which extend and generalize existing results, are confirmed empirically on several datasets.
- Rzepka, R., & Araki, K. (2005). What statistics could do for ethics? The idea of common sense processing based safety valve. In AAAI Fall Symposium on Machine Ethics, Technical Report FS-05-06, 85-87.
- This paper introduces an approach to the ethical issue of machine intelligence developed through experiments with automatic common-sense retrieval and affective computing for open-domain talking systems. The authors use automatic common-sense knowledge retrieval, which allows the calculation of common consequences of actions and the average emotional load of those consequences.
- Schuilenburg, M., & Peeters, R. (Eds.). (2020). The algorithmic society: Technology, power, and knowledge. Routledge.
- This anthology brings together scholars from the fields of public administration, criminal justice, and urban governance to critically address algorithmic decision-making as a social concept. Part One examines algorithmic governance and machine learning as an administrative tool; Part Two, predictive policing and automated decision-making in a legal setting; and Part Three, artificial intelligence as a ubiquitous technology within the smart city. Prescient critical and ethical questions regarding our algorithmic society are posed throughout.
- Warner, R., & Sloan, R. H. (2021). Making artificial intelligence transparent: Fairness and the problem of proxy variables. Criminal Justice Ethics, 40(1), 23-39. https://doi.org/10.1080/0731129X.2021.1893932
- This article argues that artificial intelligence systems must be transparent to enable the effective regulation and assured fairness of these technologies. Adopting the definition provided by computer science, an explainable AI system is one that provides a human-understandable justification of any decision or prediction made. The concept of explainability is paired with a quantifiable variable for regulatory transparency (r-transparent) to propose four core requirements for just AI systems.
- Zemel, R., et al. (2013). Learning fair representations. Proceedings of the 30th International Conference on Machine Learning, 28(3), 325-333.
- This paper proposes a learning algorithm for classification subject to both group and individual fairness criteria. The authors formulate the problem as an optimization with two competing goals: encoding the data well while simultaneously obfuscating information about individuals’ membership in protected groups.
- Zhang, J., & Bareinboim, E. (2018). Fairness in decision-making—The causal explanation formula. In Thirty-Second AAAI Conference on Artificial Intelligence. AAAI Publications.
- This paper introduces three new fine-grained measures of the transmission of change from stimulus to effect, which the authors call the counterfactual direct (Ctf-DE), indirect (Ctf-IE), and spurious (Ctf-SE) effects. The authors apply these measures to various discrimination-analysis tasks and run extensive simulations, including detection, evaluation, and optimization of decision-making under fairness constraints. The paper concludes by studying the trade-off between different types of fairness criteria (outcome and procedural) and provides a quantitative approach to policy implementation and the design of fair AI systems.
Chapter 27. Automating Origination: Perspectives from the Humanities (Avery Slater)
- Andersson, A. E. (2009). Economics of creativity. In C. Karlsson, P. Cheshire, & A. E. Andersson (Eds.), New directions in regional economic development (pp. 79-95). Springer.
- This paper explores the past effects of the division of labor system as posited by Adam Smith and the recent rise in creativity that goes against this system. The author argues that as specialization progressed, people were confined to a few very simple operations, and this should have limited creativity. However, in recent times there has been a growth in creative industries such as research and development, scientific research, and the arts.
- Ariza, C. (2009). The interrogator as critic: The Turing test and the evaluation of generative music systems. Computer Music Journal, 33(2), 48-70.
- This article explores the relationship between algorithmically generated music systems and the human ability to detect their generated nature. The author argues that listening tests to detect this distinction do not constitute true Turing Tests.
- Basalla, M., et al. (2022). Creativity of deep learning: Conceptualization and assessment. In Proceedings of the 14th International Conference on Agents and Artificial Intelligence.
- This paper leverages insights from computational creativity to conceptualize and assess applications of deep learning in the creative domains. The authors argue that the creativity of deep learning is limited due to a variety of reasons, including the confinement of their conceptual space defined by training data, lack of flexibility for changes in the internal problem representation, and the lack of capability to identify connections across different domains.
- Boden, M. A. (1990).* The creative mind: Myths and mechanisms. Abacus & Basic Books.
- This book explores human creativity and presents a scientific framework for understanding how creativity arose and how it is defined.
- Boden, M. (Ed.). (1994).* Dimensions of Creativity. M.I.T. Press.
- The authors explore how creative ideas arise, and whether creativity can be objectively defined and measured.
- Cardoso, A., & Bento, C. (Eds.). (2006).* Computational creativity [Special issue]. Journal of Knowledge-Based Systems, 19(7).
- This special issue focuses on characterizing and establishing computational models of creativity. The papers encompass four topics: models of creativity, analogy and metaphor in creative systems, multiagent systems, and formal approaches to creativity.
- Carnovalini, F., & Rodà, A. (2020). Computational creativity and music generation systems: An introduction to the state of the art. Frontiers in Artificial Intelligence, 3(14). https://doi.org/10.3389/frai.2020.00014
- This article surveys the landscape of Music Generation, a subfield of computational creativity that focuses on algorithmically produced music. Providing a substantial introduction to the topic, the authors outline creativity in computational and human terms and review past challenges surrounding music generation systems. They provide current research on improvements to these challenges and suggest future possibilities.
- Clancey, W. J. (1997). Situated cognition: On human knowledge and computer representations. Cambridge University Press.
- This book explores and explains the ‘situated cognition’ movement in cognitive science: a new metaphysics of mind, in the form of a dynamical-systems-based, ecologically oriented model. The author suggests that a full understanding of the mind will require systematic study of the dynamics of interaction among mind, body, and world.
- Colton, S., & Wiggins, G. A. (2012). Computational creativity: The final frontier? In ECAI 2012 (pp. 21-26). https://doi.org/10.3233/978-1-61499-098-7-21
- This paper argues Computational Creativity constitutes a frontier for AI research beyond all others. The authors do so through an exploration of the field of computational creativity via a working definition; a brief history of seminal work; an exploration of the main issues, technologies, and ideas; and a look towards future directions.
- Csikszentmihalyi, M. (1988). Motivation and creativity: Toward a synthesis of structural and energistic approaches to cognition. New Ideas in Psychology, 6(2), 159-176.
- This paper argues against the idea that the ability of automated computer systems to display creativity when solving problems (such as discovering scientific laws) implies that human creativity shares similar computational processes. The author identifies this claim as resting on a misunderstanding of what creativity is and of the conditions under which the human creative process happens, and on a confusion of rationality with the complexity of human thought. While contemporary AI systems look very different from those available when this paper was written, its arguments remain relevant to today’s learning-driven approaches.
- Delacroix, S. (2021). Computing machinery, surprise and originality. Philosophy & Technology, 34(4), 1195-1211.
- This paper examines the distinction between the ability of machines to “originate” something (which Lady Lovelace claimed would be impossible) and their ability to “surprise” human beings (Turing’s translation of Lady Lovelace’s claim about origination). The author argues that only a portion of the cases in which machines surprise humans are products of originality; genuine origination requires much more – acts of creation that make humans question their understanding of themselves and the world, and that possess qualities interpretable by others immersed in a socio-cultural environment. Hence, the author flips Turing’s translation of Lady Lovelace’s insight, asking instead whether it is possible to build machines that can be surprised by humans.
- Dodgson, M., et al. (2005). Think, play, do: Technology, innovation, and organization. Oxford University Press.
- The authors argue that the innovation process is changing profoundly, partly due to innovation technologies. In response, the authors propose a new schema for the innovation process: Think, Play, Do.
- Edwards, S. M. (2001). The technology paradox: Efficiency versus creativity. Creativity Research Journal, 13(2), 221-228.
- This article highlights the impact of technology on the ability of individuals to be creative within society. First, the author reviews the barriers that individuals must overcome to function creatively in the information age, along with the process by which creativity occurs. These factors are then set alongside the consequences of technological and computational development. Finally, the author offers suggestions on the coexistence of creativity and technology in the future.
- Gizzi, E., et al. (2020). From computational creativity to creative problem solving agents. International Conference on Computational Creativity (ICCC).
- This article introduces creative problem solving (CPS) as a skill for AI that builds on computational creativity. In defining CPS, the authors adopt an interdisciplinary model using problem-solving concepts from AI and aspects of computational creativity.
- Grace, K., & Maher, M. L. (2019). Expectation-based models of novelty for evaluating computational creativity. In T. Veale & F. A. Cardoso (Eds.), Computational creativity: The philosophy and engineering of autonomously creative systems. Springer.
- This chapter argues that to measure novelty, instead of measuring the difference between artifacts, we should focus on the violation of observers’ expectations. The authors then articulate the reasons for proposing such a shift of focus.
- Guzman, A. L., & Lewis, S. C. (2020). Artificial intelligence and communication: A human–machine communication research agenda. New Media & Society, 22(1), 70-86. https://doi.org/10.1177/1461444819858691
- This paper addresses the gap between communication theory and AI. With new and growing interactions between humans and technologies, communication theory faces the challenge of understanding these relations that do not fit into existing paradigms. This paper discusses these challenges through a human-machine communication (HMC) framework, focusing on the functional, relational, and metaphysical aspects of AI.
- Jordanous, A. (2012). A standardised procedure for evaluating creative systems: Computational creativity evaluation based on what it is to be creative. Cognitive Computation, 4(3), 246-279.
- The authors address the issue of defining what it means for a computer to be creative; given that there is no consensus on this for human creativity, its computational equivalent is equally nebulous. Thus, this paper proposes a Standardised Procedure for Evaluating Creative Systems (SPECS) to measure and define computational creativity. SPECS methodology is then demonstrated through a comparative case study evaluating computational creativity systems that improvise music.
- Kantosalo, A., & Jordanous, A. (2020). Role-based perceptions of computer participants in human-computer co-creativity [Paper presentation]. 7th Computational Creativity Symposium at AISB 2020, London, UK. https://kar.kent.ac.uk/id/eprint/80484
- This paper explores the place of the computer in creative collaborations between humans and computers, and the past definitions of these positions. In looking at both the positive and negative aspects of these roles, the authors seek to understand the potential for computers in human-computer co-creativity. Through analysis and a comparative review, the authors consider both the current roles of co-creative computer systems and future possibilities.
- Langley, P., et al. (Eds.) (1986).* Scientific discovery: Computational explorations of the creative process. MIT Press.
- The authors examine the nature of scientific research and review the arguments for and against a normative theory of discovery. The authors use a series of artificial intelligence programs they developed that can simulate the human thought processes used to discover scientific laws.
- Lehman, J., et al. (2020). The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities. Artificial Life, 26(2), 274-306.
- This paper collects anecdotes from artificial life and evolutionary computation research that show the surprising creativity of artificial agents. The anecdotes include agents creatively subverting their designers’ expectations or intentions, producing adaptations and behaviors that are unexpected and creative.
- Maher, M. L. (2012, May). Computational and collective creativity: Who’s being creative? Proceedings of the Third International Conference on Computational Creativity, (pp. 67–71).
- This paper presents an intriguing computational creativity framework in which creativity is ascribed not only to individual humans or machines but also to collectives of people and agents and their interactions. This perspective enables the author to study how the “ideation” (generation, synthesis, and implementation of ideas) and “interaction” (intercommunication of different agents) components of the creative process work in systems involving multiple human and/or machine agents. The multi-agent framework outlines several future avenues of research in computational creativity to further the field’s understanding of the theoretical and practical aspects of the creative process.
- McCorduck, P. (1991).* Aaron’s Code: Meta-art, artificial intelligence, and the work of Harold Cohen. W. H. Freeman and Company.
- This book examines the connection between art and computer technology. The author explores the work of the artist Harold Cohen, who created an elaborate computer program that makes drawings autonomously, without human intervention.
- Montal, T., & Reich, Z. (2017). I, robot. You, journalist. Who is the author? Authorship, bylines and full disclosure in automated journalism. Digital Journalism, 5(7), 829-849.
- This paper explores the increasing reliance on algorithms to generate news automatically, particularly in the form of algorithmic authorship. The use of this technology has potential psychological, legal, and occupational implications for news organizations, journalists, and their audiences. The authors argue for a consistent and comprehensive crediting policy that serves the public interest in automated news.
- Moruzzi, C. (2021). Measuring creativity: An account of natural and artificial creativity. European Journal for Philosophy of Science, 11. https://doi.org/10.1007/s13194-020-00313-w
- This paper addresses a gap in current discussions about creativity: how creativity should be measured. The author provides a model of creativity that is not anthropocentric in nature, opening it up to possibilities in exploring non-human and artificial creativity. This framework focuses on internal features of creativity, mainly problem-solving, evaluation, and naivety.
- Partridge, D., & Rowe, J. (1994).* Computers and creativity. Intellect Books.
- Through a computational modelling perspective, this book examines theories and models of the creative process in humans. The authors explore both input creativity (the analytic interpretation of input information) and output creativity (the artistic, synthetic process of generating novel innovations).
- Paul, E. S., & Kaufman, S. B. (Eds.). (2014).* The philosophy of creativity: New essays. Oxford University Press.
- The authors argue that creativity should be explored in connection to, and in the context of, philosophy. Their aim is to illustrate the value of interdisciplinary exchange and explore issues such as the role of consciousness in the creative process, whether great works of literature give us insight into human nature, whether a computer program can really be creative, and the definition of creativity.
- Pérez y Pérez, R., & Ackerman, M. (2020).* Towards a methodology for field work in computational creativity. New Generation Computing, 34(4), 713-737. https://doi.org/10.1007/s00354-020-00105-z
- This paper focuses on fieldwork in computational creativity and provides a methodology for this work. The authors look at fieldwork in terms of what it means to make a creative computer system highly accessible and the influence these systems can have when interacting with society. Reflecting on their experience of making their systems ALYSIA and MEXICA widely available, the authors propose a flexible five-step methodology with the hopes that it can be broadly tested throughout the computational creativity community.
- Ramesh, A., et al. (2022). Hierarchical text-conditional image generation with CLIP latents. Unpublished manuscript.
- This manuscript examines DALL·E 2, an AI system that can create original images and art from text descriptions. The authors demonstrate DALL·E 2’s ability to combine concepts, attributes, and styles to create novel images, make realistic edits, and produce variations of existing images, and they hope the system will empower people’s creative expression. The release of this work on April 6, 2022 prompted intense public discussion of the originality and creativity of AI systems. https://openai.com/dall-e-2/
- Ritchie, G. (2019).* The evaluation of creative systems. In T. Veale & F. A. Cardoso (Eds.), Computational Creativity: The philosophy and engineering of autonomously creative systems (pp. 159-194). Springer. https://doi.org/10.1007/978-3-319-43610-4_8
- This chapter looks at methods for the evaluation and assessment of computational creativity and creative systems. The author highlights how there is a lack of standard methodology for assessment, and it questions both what creative properties should be focused on and how these properties should be measured.
- Sarkar, A., & Cooper, S. (2020).* Towards game design via creative machine learning (GDMCL). In 2020 IEEE Conference on Games (CoG) (pp. 744-751). https://doi.org/10.1109/CoG47356.2020.9231927
- This article questions the lack of creative tasks assigned to machine learning systems in game design, despite the emergence of this practice in other areas. Adapting creative machine learning methods from visual art and music, the authors argue for similar approaches in game design and recast these techniques collectively as Game Design via Creative Machine Learning.
- Röder, S. (2018). Rethinking creativity. XRDS: Crossroads, The ACM Magazine for Students, 24(3), 54-59.
- This paper argues that there is structure in the creative process that can be modelled, learned, and taught – an insight that underpins the field of computational creativity. The author argues that recent AI systems (such as those performing image style transfer) already automate specific skills in the creativity pipeline and speculates that this can create a synergy between humans and machines that augments and streamlines the creative process. The author also sets aside the question of whether computers can be creative as a philosophical rather than a scientific one, taking a practical stance on computational creativity.
- Schmidhuber, J. (1997).* Low-complexity art. Leonardo, Journal of the International Society for the Arts, Sciences, and Technology, 30(2), 97-103.
- This article explores the relation between the depiction of the general essence of objects, viewed as the computer-age equivalent of minimal art, and informal notions such as “good artistic style” and “beauty.” In an attempt to formalize certain aspects of depicting the essence of objects, the author proposes and analyzes this art form, which he calls low-complexity art.
- Schmidhuber, J. (2010). Artificial scientists & artists based on the formal theory of creativity. In Proceedings of the 3rd Conference on Artificial General Intelligence. Advances in Intelligent Systems Research (Vol. 10, pp. 145-150).
- This paper argues that a “learning to compress” model of creativity explains many aspects of intelligence. Specifically, the author argues that a learning problem in which the discovery of novel patterns that improve data compression or prediction is rewarded can give rise to intelligent behaviour, such as the creation of art, music, and humor. The author argues that the computational resource constraints imposed on the learning agents are essential to this framework and proceeds to operationalize parts of the artificial creativity framework proposed in the paper.
- Soros, L. B., & Stanley, K. O. (2016). Is evolution fundamentally creative? Artificial Life Workshops.
- This paper explores the relationships between open-ended evolution (OEE) in the field of artificial life and the theory of computational creativity. The authors propose the insight that the characteristics of OEE align well with the concepts of exploratory and transformational creativity.
- Sözbilir, F. (2018). The interaction between social capital, creativity and efficiency in organizations. Thinking Skills and Creativity, 27(1), 92-100. https://doi.org/10.1016/j.tsc.2017.12.006
- This paper discusses the factor of social capital in organizations in relation to creativity and efficiency. The author uses participants in a public Turkish organization and concludes that social capital has a positive impact on both creativity and efficiency. They further find that there is a positive link between creativity and efficiency.
- Sternberg, R. J., & Lubart, T. I. (1995). Defying the crowd: Cultivating creativity in a culture of conformity. Free Press.
- This book examines how institutions in business and education often impede the creative process and how the creative person typically finds ways to subvert those institutions to promote their ideas. Furthermore, by presenting a theory as to how institutions can learn to foster creativity, the authors explore how people can learn to become more creative.
- Varshney, L. R., et al. (2013). Cognition as a part of computational creativity. In IEEE 12th International Conference on Cognitive Informatics and Cognitive Computing (pp. 36-43).
- This paper examines the relationship between two distinct fields that have developed in parallel: computational creativity and cognitive computing. The authors argue that the two fields overlap in one precise way: the evaluation or assessment of artifacts with respect to creativity.
- Veale, A., et al. (2006).* Computational creativity [Special Issue]. New Generation Computing, 24(3).
- A pure definition of creativity, pure at least in the sense of being metaphor-free and grounded in objective fact, has proven elusive, made more vexing by a fundamental inability to describe creativity in formal terms. In this special issue, the contributing authors present their respective definitions of creativity.
- Wiggins, G. A. (2006).* A preliminary framework for description, analysis and comparison of creative systems. Knowledge-Based Systems, 19(7), 449-458.
- This article summarizes and explores concepts presented in and arising from Margaret Boden’s (1990) descriptive hierarchy of creativity. By formalizing the ideas Boden proposes, the author argues that Boden’s framework is more uniform and more powerful than it first appears. Finally, the author explores potential routes to achieve a model which allows detailed comparison, and hence better understanding, of systems that exhibit behavior that would be called ‘‘creative’’ in humans.
- Wyse, L. L. (2019). Mechanisms of artistic creativity in deep learning neural networks. International Conference on Computational Creativity.
- This paper examines five behavioral characteristics associated with creativity that are also observed in generative deep learning: transformative perception, synthesis of different domains, sentiment recognition and synthesis, analogical and metaphorical reasoning, and abstraction. The author then explores mechanisms in deep learning that give rise to those five creative characteristics.
Chapter 28. Perspectives on Ethics of AI: Philosophy (David J. Gunkel)
- Ananny, M. (2016). Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Science, Technology, & Human Values, 41(1), 93-117.
- This paper suggests that understanding and regulating algorithms requires an interrogation of their relationships to existing ethical frameworks. To that end, the author develops a definition of networked information algorithms (NIAs), highlighting the combinations of institutionally situated code, practices, and norms that determine algorithms and their role in creating, sustaining, and signifying relationships among people and data. The article then examines the ethical dimensions of these NIAs, looking at their power to organize, shape, and determine the time frames of ethical action.
- Anderson, M., & Anderson, S. L. (Eds.).* (2011). Machine ethics. Cambridge University Press.
- This collection of essays by philosophers and artificial intelligence researchers focuses on ways to enable machines to function in an ethically responsible manner, both within their interactions with humans and, as machines evolve, when engaging in their own decision making. These essays discuss how machines that function autonomously might be accorded ethical capacity, and whether ethically directed enhancements to machines are advisable or necessary.
- Bloom, P. (2020). Identity, institutions, and governance in an AI world: Transhuman relations. Palgrave Macmillan.
- This book argues that the proliferation of interactions with artificial intelligence calls for a rethinking of social relations more generally. It proposes an original theory of ‘trans-human relations’ for understanding the way advances in robotics, computing, and digital communications are transforming our daily lives. To do so, the book draws insights from a variety of fields, bringing developments in computer programming into conversation with social justice literatures and calls to protect the rights and views of all forms of consciousness. The book closes by suggesting a culture of ‘mutual intelligent design,’ composed of structures and practices capable of accommodating a world in which non-human intelligence and cyborgs are increasingly central.
- Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish & W. Ramsey (Eds.), The Cambridge handbook of artificial intelligence (pp. 316–334). Cambridge University Press.
- This chapter surveys some of the ethical challenges that may arise from the creation of thinking machines that could potentially harm humans and other morally relevant beings. Addressing how to assess whether, and in what circumstances, thinking machines might have moral status, the authors ask how humans might ensure that machines of advanced intelligence are operated safely, and towards purposes that benefit society.
- Buckner, C. (2018). Empiricism without magic: Transformational abstraction in deep convolutional neural networks. Synthese, 195, 5339–5372. https://doi.org/10.1007/s11229-018-01949-1
- Addressing Deep Convolutional Neural Networks (DCNNs), this paper considers recent engineering advances in the context of philosophy of mind. The author argues that DCNNs are successful across a number of domains because they model a distinctive kind of abstraction from experience. On the philosophical side, this engineering achievement vindicates some themes from classical empiricism about reasoning. It supports the empiricist idea that information abstracted from experience enables high-level reasoning in strategy games like chess and Go.
- Christian, B. (2020). The alignment problem: Machine learning and human values. New York: W.W. Norton & Company.
- This book explores the dissonance between vision and reality in algorithm design and development. In particular, it calls attention to the so-called “alignment problem,” instances in which algorithmic systems do not operate as intended or produce unexpected (and potentially dangerous) externalities. The author unpacks this problem through exploratory cases, drawing attention to the ‘first responders’ working to address issues of alignment in a world where algorithms are increasingly trusted to operate with very little human oversight.
- Clarke-Doane, J., & Baras, D. (2021). Modal security. Philosophy and Phenomenological Research, 102(1), 162–183. https://doi.org/10.1111/phpr.12643
- In a discussion that is relevant to a learning machine’s eventual capacity to approximate self-interruption, hesitation or “doubt” regarding its own conclusions, this paper critically addresses six objections to the principle of Modal Security. Supposing that a belief “that P” is initially justified, the author asks, how can new evidence defeat that justification? One way is simple: by being evidence that P is false (rebutting evidence). However, epistemologists commonly recognize a second type of “undercutting” or “undermining” defeat. How could evidence defeat the justification of the belief “that P” without being evidence that P is false? Intuitively, it must show that there is some important epistemic feature that the belief “that P” lacks, such as the absence of a clear explanation for a causal relationship.
- Coeckelbergh, M. (2012).* Growing moral relations: Critique of moral status ascription. Palgrave Macmillan.
- Considering the fundamental question of moral status ascription, i.e., who or what is morally significant, this book confronts the insufficiency of the properties approach. The properties approach draws hard lines between what does, or does not, possess certain properties (e.g., rationality, speech, sentience) that qualify an entity for moral status. Recognizing a current paradigm shift in moral thinking, the author presents an original philosophical approach to a relational, phenomenological, and transcendental reconsideration of moral status that observes how moral status is not a fixed state of being as much as a status that actively comes to be within ongoing interactions between entities.
- Collier, J. (2008). Simulating autonomous anticipation: The importance of Dubois’ conjecture. BioSystems, 91, 346–354. https://doi.org/10.1016/j.biosystems.2007.05.011
- Drawing from philosophy of biology, this paper presents the temporal category of anticipation that is shared by both autonomous and living systems. Anticipation allows a system to adapt to external or internal conditions that have not yet materialized. Stating that autonomous systems self-regulate to increase their functionality, and living systems self-regulate to increase their own viability, the author asserts that increasingly strong conditions of anticipation, autonomy, and viability can offer insight into progressively stronger classes of autonomy. Such insight, the author argues, could have consequences for the accurate simulation of living systems.
- Conitzer, V. (2016). Philosophy in the face of artificial intelligence. arXiv:1605.06048v1 [cs.AI]
- This paper defends the use of a philosophical lens in the field of artificial intelligence. Recognizing that AI research labs at universities are typically housed in computer science, not philosophy, departments, and that most of the technical progress on AI is reported at scientific conferences, this paper asserts that the philosophical lens is useful to ground the study of AI in the context of interaction between humans and machine intelligence. Philosophy also offers a qualitatively alternative timeframe to the AI ethics question. Instead of perpetually working to fix observed errors post hoc, a modal approach enables anticipatory questions of what benefits and harms could be possible, and what would be necessary to achieve AI’s optimal role in the human setting.
- Criado, J. I., & Gil-Garcia, J. R. (2019). Creating public value through smart technologies and strategies: From digital services to artificial intelligence and beyond. International Journal of Public Sector Management, 32(5), 438-450.
- This paper explores the generation of public value through smart city strategies and urban technologies. It argues that social and collaborative technologies have the potential to transform public administrations and cultivate the co-creation of public services and management processes. The authors track changes in public value generation over time, connecting these changes to different public management paradigms and technological innovations. They conclude that open innovation processes could become an important part of transformative public sector practices moving forward.
- Dennett, D. C. (2017).* Brainstorms: Philosophical essays on mind and psychology. MIT Press.
- In a collection of essays that approach points of intersection between fields of philosophy of mind, cognitive psychology, and artificial intelligence, the author weaves an interdisciplinary set of approaches addressing abstraction, concreteness, and practical solution application. Within investigations that illustrate how each of these three fields is enriched by the other two, the author examines how assumptions regarding consciousness might obscure insightfully rich similarities between human and artificial intelligences.
- Edwards, L., & Veale, M. (2018). Enslaving the algorithm: From a ‘right to an explanation’ to a ‘right to better decisions’? IEEE Security & Privacy, 16(3), 46-54.
- This article explores the growing legal support for a “right to an explanation” in the context of machine learning systems and algorithmic decision-making. Though such individualized rights can be useful, this article argues that they place an unreasonable burden on the average data subject. Even if meaningful explanations of algorithmic logics are possible, the authors suggest that other forms of governance, such as impact assessments, ‘soft law,’ judicial review, and model repositories, may be more useful in enabling users to control algorithmic system design.
- Ganascia, J-G. (2010). Epistemology of AI revisited in the light of the philosophy of information. Knowledge Technology & Policy, 23, 57–73. https://doi.org/10.1007/s12130-010-9101-0
- This paper considers the epistemology of artificial intelligence in light of the opposition between the “sciences of nature” and the “sciences of culture,” as introduced by German neo-Kantian philosophers. The author demonstrates how this epistemological view illuminates many contemporary applications of artificial intelligence. This paper situates these perspectives in the context of philosophy of information, emphasizing the role played in artificial intelligence by the notions of context and abstraction level.
- Gunkel, D. J. (2012).* The machine question: Critical perspectives on AI, robots, and ethics. MIT Press.
- The Machine Question investigates the assignment of moral responsibilities and rights to intelligent and autonomous machines of our own making, taking up the “machine question”: whether and to what extent such machines can be considered to have legitimate moral responsibilities and any legitimate claim to moral consideration.
- Hall, J. S. (2001, July 5). Ethics for machines. KurzweilAI.net. http://www.kurzweilai.net/ethics-for-machines
- Offering a brief historical and topological survey of the ethics field, nanotechnologist J. Storrs Hall considers human moral duties to machines and machine moral duties to humans. The author states that these questions are of current significance due to the possibility of creating an advanced intelligence that could exceed some, or even many, human capabilities. The author approaches the related hypothesis that more advanced machines could also, in theory, be “superethical,” or more ethical, than their human interlocutors.
- Hooker, J., & Kim, T. W. (2019). Truly autonomous machines are ethical. The AI Magazine, 40(4), 66–73. https://doi.org/10.1609/aimag.v40i4.2863
- Developing upon conceptions of autonomy drawn from philosophical literature, this article questions whether AI ethics should be conceived of solely in terms of external constraint. Approaching ethics alternatively as an internal constraint on autonomy, this article provides a counterargument to conventional warnings against machine intelligences being granted powers of independent choice. Acknowledging that giving machines unchecked independence is a source of risk, the authors discuss approaches from philosophy that distinguish autonomy as a necessary and desirable capacity for ethical choice. Within a setting of internally governed independence, the article suggests that a truly autonomous machine must also be an ethical machine.
- Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8(4), 195–204. https://doi.org/10.1007/s10676-006-9111-5
- Identifying the five conditions of the traditional account of moral agency known to contemporary action theory, this paper states that computer system behavior meets four of the five conditions. To that end, this paper argues that computer systems are not moral agents but do act as moral entities in a way that distinguishes them from natural objects that act only from necessity. As such, this paper addresses an alternative category of moral status for computer systems.
- Konig, P. D. (2019). Dissecting the algorithmic leviathan: On the socio-political anatomy of algorithmic governance. Philosophy & Technology, 33(1), 467-485.
- This article engages with the popular suggestion that algorithmic governance represents a new and distinct type of societal steering and control. Despite the novel ways in which algorithms shape and manage social complexity, the author suggests that algorithmic governance can nevertheless be understood through the lens of a more traditional figure in political philosophy, namely Thomas Hobbes’ Leviathan. Doing so highlights the connections between contemporary algorithmic governance and the apolitical traits of the Leviathan, which eliminates the political by requiring compliance and forgoing contestation, all for the sake of producing stabilizing outcomes.
- Lin, P., et al. (Eds.). (2012).* Robot ethics: The ethical and social implications of robotics. MIT Press.
- This book collects 22 chapters contributed by noted researchers and theorists across a number of disciplines. Considering aspects of robots in social usage, the chapters are categorized into six thematic sections: Design and Programming, Military, Law, Psychology and Sex, Medicine and Care, and Rights and Ethics. As the function of this volume is to initiate dialogue rather than deliver already-decided dogma, the authors focus upon the complex questions that both real and hypothetical settings will generate.
- Lin, P., et al. (Eds.). (2017).* Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford University Press.
- This edited volume, aimed at academic audiences, policymakers, and the broader public, presents a global and interdisciplinary collection of essays that focuses on emerging issues in the interdisciplinary field of “robot ethics.” This field studies the effects of robotics on ethics, law, and policy. Organized into four parts, the first concerns moral and legal responsibility and questions that arise in programming under moral uncertainty. The second part addresses anthropomorphizing design and related issues of trust and deception within human-robot interactions. A third section concerns applications ranging from love to war. The fourth section speculates upon the possible implications and dangers of artificial beings that exhibit superhuman mental capacities.
- Mittelstadt, B., et al. (2019). Explaining explanations in AI. In FAT ’19: Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 279–288). Association for Computing Machinery. https://doi.org/10.1145/3287560.3287574
- This paper approaches the simplified models that are built to approximate and predict what decisions will be made by a complex system. The authors focus upon the distinctions between these models, and explanations offered in fields of philosophy, law, cognitive science, and the social sciences, arguing that the simplified approximations of complex decision-making functions are generally more like scientific models than the types of “everyday” explanations encountered in fields such as philosophy or cognitive science. If this comparison holds, the authors claim, this could result in locally reliable but globally misleading explanations of model functionality. The authors recommend explanations that fit three criteria of accessibility: explanations must be contrastive, selective, and socially interactive.
- Mittelstadt, B., et al. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 1(1), 1-21.
- This article aims to review and organize existing literatures regarding the ethical importance of algorithms in mediating social processes, business transactions, and governmental decisions. The authors provide a prescriptive map of ongoing debates surrounding algorithmic mediation. They diagnose the limitations of this literature and conclude by recommending future trajectories for ongoing research into the ethics of algorithms.
- Reeves, B., & Nass, C. I. (1996).* The media equation: How people treat computers, television, and new media like real people and places. Cambridge University Press.
- The Media Equation presents the results of numerous psychological studies leading to the conclusion that people treat computers, TV, and new media as real people and places. One conclusion of these studies is that the human brain has not evolved quickly enough to assimilate 20th-century technology. The book details how this knowledge can help us better design and evaluate media technologies, including computer and Internet software, TV entertainment, news, advertising, and multimedia.
- Scalable Cooperation at MIT Media Lab. (n.d.). Moral machine. http://moralmachine.mit.edu
- Moral Machine is a platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars. It shows the user moral dilemmas in which a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, the user judges which outcome they think is more acceptable, and can then see how their responses compare with those of other people.
- Schmidhuber, J. (2009). Ultimate cognition à la Gödel. Cognitive Computation, 1, 177–193. https://doi.org/10.1007/s12559-009-9014-y
- This paper describes an agent-controlling program that speaks about itself and is able and ready to rewrite itself in an arbitrary fashion once it has found proof that this self-rewrite is useful (according to a user-defined utility function). Discussing how the first 50 years of attempts at ‘‘general AI’’ and ‘‘general cognitive computation’’ were dominated by heuristic approaches, the author distinguishes heuristic and theorem-based approaches, the latter having more recently brought about the first mathematically sound, asymptotically optimal, universal problem solvers. In this setting, this paper examines how to overcome potential problems with self-reference and how to deal with the potentially delicate online generation of proofs that both talk about and affect the currently running proof generator itself.
- Searle, J. R. (1984).* Minds, brains, and science. Harvard University Press.
- Within the setting of the literature of philosophy of mind, the author asserts that the traditional intuitive view of humans as conscious, free, rational agents does not contradict a universe that science conveys in terms of “mindless physical particles.” Rather, the truths of common sense and the truths of science need not be artificially divided. Rejecting the illusion of their irreconcilability can, the author asserts, have notable implications for how artificial intelligences and machine collaborators are conceived of and created.
- Turner, J. (2018).* Robot rules: Regulating artificial intelligence. Springer.
- Robot Rules argues that AI is unlike any other previous technology, owing to its ability to take decisions independently and unpredictably. This gives rise to three issues: responsibility—who is liable if AI causes harm; rights—the disputed moral and pragmatic grounds for granting AI legal personality; and the ethics surrounding the decision-making of AI. The book suggests that in order to address these questions we need to develop new institutions and regulations on a cross-industry and international level.
- Tzafestas, S. G. (2016).* Roboethics: A navigating overview. Springer.
- Per the title’s claim of a navigating overview, this book offers what the author calls “a spherical picture” of the evolving field of roboethics. Initial chapters of this book outline fundamental concepts and theories of ethics and applied ethics alongside fundamental concepts in the field of artificial intelligence. Presenting a robot typology organized according to kinematic structure and locomotion, and then upon the artificial intelligence tools that give intelligence capabilities to robots, the book proceeds to chapters addressing robot applications (e.g., in medicine, society, space, and the military) for which ethical issues must be addressed. A latter chapter provides a conceptual study of the “brain-like” capabilities of “mental robots,” discussing the features of more specialized processes of learning and attention.
- University of Oxford Podcasts. (n.d.). Ethics in AI. https://podcasts.ox.ac.uk/series/ethics-ai
- This set of 24 podcasts, recorded between November 2019 and December 2020 as the “Oxford University Institute for Ethics ‘Ethics in AI’ seminars,” seeks to open a broad, interdisciplinary conversation between the University’s researchers and students in several interrelated disciplines, including Philosophy, Computer Science, Engineering, Social Science, and Medicine. Topics include privacy, information security, appropriate rules of automated behavior, algorithmic bias, transparency, and the wider threats that AI could pose to society.
- Wallach, W., & Allen, C. (2009).* Moral machines: Teaching robots right from wrong. Oxford University Press.
- The project of developing an artificial moral agent offers an extraordinary lens upon human moral decision-making. Approaching both distinctions and integrations of top-down and bottom-up design approaches, the authors acknowledge that the context involved in real-time moral decisions, as well as the complex intuitions people have about right and wrong, make the prospect of reducing ethics to a logically consistent principle or set of programmable laws at best suspect, and at worst irrelevant. However, the authors state, the project of developing an artificial moral agent offers opportunities for experimentation and questioning of various integrations of top-down and bottom-up approaches that comprise moral decision making.
- Wallach, W., & Asaro, P. (Eds.). (2017).* Machine ethics and robot ethics. Routledge.
- This book addresses the ethical challenges posed by the rapid development and widespread use in everyday life of advancing technologies such as artificial intelligence, robotics, and machine learning. It is a collection of essays that focus on the control and governance of computational systems; the exploration of ethical and moral theories using software and robots as laboratories or simulations; the inquiry into the necessary requirements for moral agency and the basis and boundaries of rights; and questions of how best to design systems that are both useful and morally sound. Collectively, the essays ask what the practical ethical and legal issues, arising from the development of robots, will be over the next twenty years and how best to address these future considerations.
- Wellner, G., & Rothman, T. (2020). Feminist AI: Can we expect our AI systems to become feminist? Philosophy & Technology, 33(1), 191-205.
- This article engages the gender and racial biases that exist in some AI algorithms. Drawing on feminist philosophies of technology and behavioural economics, the authors argue that bias in AI technologies is a multi-faceted phenomenon that manifests in training datasets, algorithmic design, and broader social currents. The article recommends an analytical formula for modeling these biases and the types of relationalities that AI technologies engender. It concludes by reviewing proposed solutions to AI bias, arguing that visibility matters, and that both users and developers should be aware of potential biases and either avoid or eliminate them whenever possible.
- Wiener, N. (1988).* The human use of human beings: Cybernetics and society (No. 320). Da Capo Press.
- This book examines the implications of cybernetics, the study of the relationship between computers and the human nervous system, for education, law, language, science, and technology. It outlines Wiener’s vision of scenarios in which machines would release people from relentless and repetitive drudgery so they could pursue more creative ends, as well as his realization of the dangers of dehumanization and displacement that this vision posed.
- Wigley, E. (2021). Do autonomous vehicles dream of virtual sheep? The displacement of reality in the hyperreal visions of autonomous vehicles. Annals of the American Association of Geographers, 1(1), 1-17.
- This article analyzes the modalities through which autonomous vehicles (AVs) ‘see’ their surroundings and the datasets they create through these sensory processes. Drawing on Jean Baudrillard’s notions of simulation and simulacra, it explores the ways in which AVs transform the real into a series of purified representations that then form the basis for future decisions regarding the operation and management of the city. There is a risk, he contends, that these generated models are mistaken for the reality they represent, producing hybrid geographies that overshadow the lived experiences of those within the city.
Chapter 29. The Complexity of Otherness: Anthropological Contributions to Robots and AI (Kathleen Richardson)
- Ali, S. (2019). “White crisis” and/as “existential risk,” or the entangled apocalypticism of artificial intelligence. Zygon: Journal of Religion and Science, 54(1), 207–224. https://doi.org/10.1111/zygo.12498
- This paper presents a critique of Robert Geraci’s apocalyptic artificial intelligence (AI) discourse. The author explores “white crisis,” a modern racial phenomenon with religious origins, in relation to the existential risk associated with apocalyptic AI. Adopting a decolonial and critical race-theory viewpoint, the author argues that the rhetoric of white crisis and apocalyptic AI should be understood as part of a trajectory of domination that the author terms “algorithmic racism.”
- Appadurai, A. (1986).* Introduction: Commodities and the politics of value. In A. Appadurai (Ed.), The social life of things: Commodities in cultural perspective. Cambridge University Press.
- This chapter argues that anthropologists should study ‘things’: instead of assuming that humans assign significance to things, anthropologists should consider how things take shape, acquire value, and move through space. The movement of things and commodities across different contexts sheds light on the social context they inhabit.
- Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.
- Automation has the potential to deepen discrimination while appearing neutral and even benevolent when compared to the racism of a previous era. This book presents the concept of the “New Jim Code”: a range of discriminatory designs that encode inequity by explicitly amplifying racial hierarchies; by ignoring but thereby replicating social divisions; or by aiming to fix racial bias but ultimately doing quite the opposite. The author makes a case for race itself as a kind of technology, designed to sanctify social injustice in the architecture of everyday life.
- Birhane, A., & van Dijk, J. (2020, February). Robot rights? Let’s talk about human welfare instead. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 207-213).
- This paper questions the utility of the robot rights debate, instead arguing that the ethics of AI ought to center the concerns and needs of the humans subject to the outcomes of artificially intelligent agents. The authors assert that instead of discussing whether to blame artificially intelligent agents for their actions, the human individuals responsible for the algorithm’s creation and operations ought to be held responsible for the agent’s behavior.
- Boellstorff, T. (2008).* Coming of age in Second Life: An anthropologist explores the virtually human. Princeton University Press.
- One of the most famous digital ethnographies, this book shows how virtual worlds can change ideas about identity and society. Based on two years of fieldwork in Second Life, living among and observing its residents just as anthropologists have traditionally done to learn about cultures in the real world, this ethnography shows how anthropological methods can be applied to virtual sociality.
- Cave, S. (2019).* Intelligence as ideology: Its history and future [Keynote Lecture]. Centre for Science and Policy Annual Conference. http://www.csap.cam.ac.uk/media/uploads/files/1/csap-conference-2019-stephen-cave-presentation.pdf
- This keynote lecture problematizes the concept of intelligence, showing how it is not only impossible to reliably measure but also – as the measurement of what it means to be human – became associated with evolutionary paradigms, colonial rule, and the ‘survival of the fittest.’ Intelligence importantly works to justify elite domination over others: the poor, women, people with disabilities, and so on.
- Coeckelbergh, M. (2021). Three responses to anthropomorphism in social robotics: Towards a critical, relational, and hermeneutic approach. International Journal of Social Robotics, 1-13.
- The author contrasts two existing normative positions on the anthropomorphization of social robots: naive instrumentalism and uncritical posthumanism. The author argues that both positions are limited in their conception of social robots, and posits a third approach centered on hermeneutical and relational interpretations that mediate between the two positions. They conclude by arguing that instead of seeing robots as tools or others, it is better to think about the specific contexts in which social robots are embedded and the meaning-making practices that surround them.
- Colloc, J. (2016). Ethics of autonomous information systems towards an artificial thinking. Les Cahiers du Numérique, 12(1-2), 187–211.
- This article, originally published in French, focuses on how to build autonomous machines using artificial intelligence (AI). The author compares the process of ethical decision-making on the part of humans with the potential cognitive capabilities of these machines. The author then considers the ethical implications of autonomous machines, specifically how such systems affect humanity.
- Costa, P., & Ribas, L. (2019). AI becomes her: Discussing gender and artificial intelligence. Technoetic Arts, 17(1-2), 171-193.
- This paper analyzes the feminization of digital assistants, connecting stereotypical perceptions of gender with the development of artificial intelligence. The authors comment on how fictional works relate to and advance these narratives. The authors then expand on how artificial intelligence may serve as a reflection of socio-culturally encoded biases.
- Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature, 538(7625), 311–313. https://doi.org/10.1038/538311a
- This article argues that there is a blind spot in AI research: in spite of the rapid and widespread deployment of AI, agreed-upon methods to assess the sustained effects of such applications on human populations are lacking. The authors examine three dominant modes used to address the ethical and social risks of AI: compliance, values in design, and thought experiments. The authors argue for a fourth approach: a practical and broadly applicable social-systems analysis which thinks through all the possible effects of AI systems on all parties.
- Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer.
- This book emphasizes the ethical implications of artificial intelligence (AI) technologies and systems. The author addresses these issues through discussions of the design, construct, use, and integrity of these technologies and the team behind them. The author critiques the moral decisions of these autonomous systems, specifically the methodologies behind these creations and the moral, legal, and ethical values they uphold.
- Dourish, P. (2016). Algorithms and their Others: Algorithmic culture in context. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716665128
- Using Niklaus Wirth’s 1975 formulation that “algorithms + data structures = programs” as a launching-off point, the author examines how an algorithmic lens shapes the way in which we might inquire into contemporary digital culture. Algorithms, once obscure objects of technical art, are integral to artificial intelligence today. The author explores what it means to adopt the algorithm as an object of analytic attention, and what such a lens reveals.
- Forsythe, D. (2002).* Studying those who study us: An anthropologist in the world of artificial intelligence. Stanford University Press.
- This book presents an anthropological study of artificial intelligence and informatics, asking how expert systems designers imagine users and in turn, how humans interact with computers. It analyzes the laboratory as a fictive kin group that reproduces gender asymmetries, offering a reflexive ethnographic perspective on the cultural mechanisms that support the persistent male domination of engineering.
- Fügener, A., et al. (2021). Cognitive challenges in human–artificial intelligence collaboration: Investigating the path toward productive delegation. Information Systems Research.
- This experiment explores optimal collaboration techniques between human participants and an artificially intelligent algorithm. The results reveal that, although collaboration has the potential to improve task outcomes, performance gains occur only when an AI delegates subtasks to humans, and not the reverse. This work exposes human limitations in cooperating with algorithms, and highlights avenues for future algorithm design and education.
- Geertz, C. (1973).* Thick description: Toward an interpretive theory of culture. In The interpretation of cultures: Selected essays (pp. 3–32). Basic Books.
- This essay articulates the central method of interpretive anthropology, explaining how ethnographers write and think about cultural situations. Contrasting ‘thick’ description – which includes cultural background and layered meanings – with ‘thin’ or merely factual accounts, the author shows how ethnographers bring in context to explain how behavior becomes meaningful.
- Haraway, D. (1991).* A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In Simians, cyborgs and women: The reinvention of nature (pp. 149–181). Routledge.
- This essay articulates a feminist theory of the cyborg: a half-human, half-machine hybrid. The figure of the cyborg dissolves the boundaries between nature and artifice, animal and human, and physical and non-physical. The author interprets this as an opportunity for feminists to think beyond the duality of identity politics and form new political alliances.
- Helmreich, S. (2000).* Silicon second nature: Culturing artificial life in a digital world. University of California Press.
- This book presents an ethnographic study of the people and programs connected with an unusual hybrid of computer science and biology. Through detailed dissections of artifacts in the context of artificial life research, the author argues that the scientists working on this see themselves as masculine gods of their cyberspace creations, bringing longstanding mythological and religious tropes concerning gender, kinship, and race into their research domain.
- Hersh, M. A. (2016). Engineers and the other: The role of narrative ethics. AI & Society, 31(3), 327–345. https://doi.org/10.1007/s00146-015-0594-7
- This article uses two case studies to argue for the importance of macroethics. The author highlights that acknowledging cultural diversity is crucial when assigning collective responsibility for unethical artificial intelligence (AI) technologies. The author argues that ethical behavior should not merely be seen as an individual effort or responsibility, but rather as a collective action.
- Hicks, M. (2017).* Programmed inequality: How Britain discarded women technologists and lost its edge in computing. MIT Press.
- Drawing on government files, personal interviews, and the archives of major British computer companies, this book exposes the myth of technical meritocracy by tracing how computer labor was masculinized between the 1940s and today. Women were central to the growth of high technology from World War II to the 1960s, when computing experienced a gender flip. This development caused a labor shortage and severely impeded both the growth of the British computer industry and the success of the nation as a whole.
- Jaume-Palasi, L. (2019). Why we are failing to understand the societal impact of artificial intelligence. Social Research, 86(2), 477–498.
- This article aims to address the societal impact of algorithmic systems by considering how these technologies are representing the ideas and norms of society. The author argues that artificial intelligence (AI) does not understand and embody an individual, which suggests the risks of increased stereotypes and discrimination through the use of AI. The author argues that viewing AI as a type of societal infrastructure is needed in order to adequately understand its impact.
- Kelty, C. (2005).* Geeks, social imaginaries, and recursive publics. Cultural Anthropology, 20(2), 185–214.
- Based on fieldwork conducted in three countries, this article argues that the mode of association specific to “geeks” (hackers, lawyers, activists, and IT entrepreneurs) on the Internet is that of a “recursive public sphere” that is constituted by a shared imaginary of the technical and legal conditions of possibility for their own association. Geeks imagine their social existence and relations as much through technical practices (hacking, networking, and code writing) as through discursive argument (rights, identities, and relations), rendering the “right to tinker” with software a form of free speech.
- Latour, B. (1993).* We have never been modern (C. Porter, Trans.). Harvard University Press.
- This philosophical text defines modernity in terms of the separation between nature and society, human and thing, reality and artifice. The author shows that this separation is theoretically powerful in science but does not play out in practice: an anthropological look at scientific practice reveals that everything is always already hybrid – reality and artifice cannot be separated. The author argues that the hybridity of nature and culture is central to the success of technoscientific practices.
- Liberati, N., & Nagataki, S. (2018). Vulnerability under the gaze of robots: Relations among humans and robots. AI & Society, 34(2), 333–342.
- The authors argue that any AI agents designed to replicate human-like activities are likely to fail if they lack a human-like body. The authors additionally examine the vulnerability of robots, arguing that interactions between humans and robots without similar body forms point to an unequal cohabitation between the two.
- Mamak, K. (2021). Whether to save a robot or a human: On the ethical and legal limits of protections for robots. Frontiers in Robotics and AI, 8. https://doi.org/10.3389/frobt.2021.712427
- Expanding on the robot rights debate, the author deliberates over the question of prioritizing human or robot lives in the case of a moral dilemma. The author contends that human lives ought to be prioritized above those of anthropomorphic robots, which becomes increasingly difficult as artificial entities become more human-like.
- Miller, D., & Horst, H. (2012). The digital and the human: A prospectus for digital anthropology. In H. Horst & D. Miller (Eds.), Digital anthropology (pp. 3–38). Bloomsbury Publishing.
- This chapter articulates a vision for digital anthropology, defining anthropology as a discipline occupied with understanding what it is to be human and how humanity manifests differently across cultures, and the digital as everything that can be reduced to binary code. The authors argue for ethnographic work that emphasizes the continuity between the digital and the non-digital, the materiality of the digital, and the ultimately deeply local cultural ways in which technologies are received.
- Negri, S. M. A. (2021). Robot as legal person: Electronic personhood in robotics and artificial intelligence. Frontiers in Robotics and AI, 8. https://doi.org/10.3389/frobt.2021.789327
- This paper addresses the European Union’s proposal to confer a form of personhood to artificially intelligent beings. The author notes that many proponents of this idea cite precedents of corporations being granted a form of personhood. The author highlights the problems and risks associated with viewing corporations as legal persons, and critically compares this with the case of artificially intelligent entities.
- Noble, S. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
- This book uses algorithmic search engines to show how data discrimination works. The combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color and especially Black women.
- Richardson, K. (2015).* An anthropology of robots and AI: Annihilation anxiety and machines. Routledge.
- This ethnography of robot-making in labs at the Massachusetts Institute of Technology (MIT) examines the cultural ideas that go into the making of robots, and the role of fiction in co-constructing the technological practices of the robotic scientists. The author charts the move away from the “worker” robot of the 1920s to the “social” one of the 2000s, using anthropological theories to describe how robots are reimagined as companions, friends, and therapeutic agents.
- Robertson, J. (2017).* Robo sapiens Japanicus: Robots, gender, family, and the Japanese nation. University of California Press.
- An ethnography and sociocultural history of governmental and academic discourse of human-robot relations in Japan, this book explores how actual robots – humanoids, androids, and animaloids – are “imagineered” in ways that reinforce the conventional sex/gender system, the political-economic status quo, and a conception of the “normal” body. Asking whether “civil rights” should be granted to robots, the author interrogates the notion of human exceptionalism.
- Sætra, H. S. (2021). Challenging the neo-anthropocentric relational approach to robot rights. Frontiers in Robotics and AI, 8. https://doi.org/10.3389/frobt.2021.744426
- This work interrogates whether and how we ought to ascribe moral standing to robotic agents, and how current work takes a largely anthropocentric view of doing so. Though relationalism has sought to overcome this human-centric focus, the author argues that relational approaches to understanding robot rights inherently privilege the values and perspectives of humans, thus maintaining the anthropocentric views common to the literature.
- Sciutti, A., et al. (2018). Humanizing human-robot interaction: On the importance of mutual understanding. IEEE Technology and Society Magazine, 37(1), 22-29.
- This paper claims that as robots become more pervasive, the burden of managing these new modes of interaction should not be placed entirely on humans. Instead, robotics research ought to prioritize developing robots that are considerate and aware of human needs. The authors argue that this mutual understanding between humans and machines will facilitate the integration of robots into society.
- Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2). https://doi.org/10.1177/2053951717738104
- This article articulates how algorithms might be approached ethnographically: as heterogeneous and diffuse sociotechnical systems, rather than rigidly constrained and procedural formulas. This involves thinking of algorithms not “in” culture, but “as” culture: part of broad patterns of meaning and practice that can be engaged with empirically. Practical tactics for the ethnographer then do not depend on pinning down a singular “algorithm” or achieving “access,” but rather work from the partial and mobile position of an outsider.
- Shibuya, K. (2020). Digital transformation of identity in the age of artificial intelligence. Springer.
- This book investigates the digital transformation of identity in the age of artificial intelligence (AI). The author draws on a broad range of disciplines, including ethics, philosophy, and computer science, to examine the nature of human identity as contrasted with AI technologies.
- Sparrow, R. (2020). Robotics has a race problem. Science, Technology, & Human Values, 45(3), 538-560.
- This paper draws on research suggesting that humans are willing to assign properties of race when anthropomorphizing social robots. The author notes that this surfaces problems of race-representation: ensuring that Whiteness is not seen as a default and that robots of color are not seen as subservient entities. The author suggests that designing robots that lack markers of race is a worthy avenue for future research.
- Suchman, L. (2007).* Human-machine reconfigurations: Plans and situated actions. Cambridge University Press.
- This book shows that debates over the status of human-like machines – whether they are ‘alive’ or not, different from the human or not – are improved when the question shifts to how humans and machines are enacted as similar or different in practice, and with what consequences. Calling for a move away from essentialist divides, the author argues for research aimed at tracing the differences within specific sociomaterial arrangements.
Chapter 30. Calculative Composition: The Ethics of Automating Design (Shannon Mattern)
- Al-Halah, Z., et al. (2017). Fashion forward: Forecasting visual style in fashion. IEEE International Conference on Computer Vision, Venice, Italy. https://doi.org/10.1109/ICCV.2017.50
- This work proposes using machine learning methods to predict fashion trends before they occur. Unsupervised learning is used to identify styles from fashion images, and then supervised learning is used to forecast these trends over time, which can be used to hypothesize style mixtures, discover style dynamics, and predict popular style attributes. Using an Amazon dataset, the authors show that visual analysis can greatly improve fashion forecasting.
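The two-stage pipeline described above – unsupervised style discovery followed by supervised trend forecasting – can be illustrated with a minimal toy sketch. Everything here is a hypothetical stand-in: the synthetic feature vectors, the hand-rolled k-means, and the per-style linear trend fit approximate the idea, not the authors' actual learned style models or forecasters.

```python
import numpy as np

# Toy sketch: cluster garment feature vectors into "styles" (unsupervised),
# then fit a linear trend to each style's monthly share and extrapolate
# one month ahead (supervised). All data here are synthetic.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))       # hypothetical visual features
timestamps = rng.integers(0, 24, size=200)  # month index for each image

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: groups images into k visual 'styles'."""
    r = np.random.default_rng(seed)
    centroids = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

styles = kmeans(features, k=4)

# Forecast each style's share of images one month past the observed window.
months = np.arange(24)
forecasts = {}
for s in range(4):
    share = []
    for m in months:
        in_month = styles[timestamps == m]
        share.append((in_month == s).mean() if in_month.size else 0.0)
    slope, intercept = np.polyfit(months, share, 1)
    forecasts[s] = float(slope * 24 + intercept)
```

Here k-means stands in for the unsupervised stage and a per-style linear fit for the supervised forecaster; the paper itself learns styles from real fashion images and uses richer temporal models.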
- Allam, Z., & Dhunny, Z. A. (2019). On big data, artificial intelligence, and smart cities. Cities, 89, 80-91. https://doi.org/10.1016/j.cities.2019.01.032
- This paper discusses the various ways artificial intelligence can contribute to urban development to improve sustainability and livability while supporting other dimensions of city planning including culture, metabolism, and governance. This paper is specifically meant to target policymakers, data scientists, and engineers who are interested in integrating artificial intelligence and big data into smart city designs.
- Altavilla, A., & Blanco, E. (2020). Are AI tools going to be the new designers? A taxonomy for measuring the level of automation of design activities. Proceedings of the Design Society: DESIGN Conference, Online. https://doi.org/10.1017/dsd.2020.286
- This paper outlines a taxonomy to classify the use of AI tools in engineering design and rethink the division of tasks between designers and AI. Building off of the levels of automation framework from cognitive engineering, the authors characterize design automation based on the level of autonomy the designer has and the design activity.
- Bratton, B. (2015).* Lecture on A.I. and cities: Platform design, algorithmic perception, and urban geopolitics. Benno Premsela Lecture Series. https://bennopremselalezing2015.hetnieuweinstituut.nl/en/lecture-ai-and-cities-platform-design-algorithmic-perception-and-urban-geopolitics
- Bratton argues that smart city projects, in their attempt to create futuristic living conditions for humans, may instead produce habitats for future insects. He draws this thesis in part from the example of the failed Sanzhi Pod City in Taipei, which was overtaken by several species of orchid mantis.
- Bricout, J., et al. (2021). Exploring the smart future of participation: Community, inclusivity, and people with disabilities. International Journal of E-Planning Research (IJEPR), 10(2), 94-108. https://doi.org/10.4018/IJEPR.20210401.oa8
- This paper explores the use of technology to promote civic engagement for people with disabilities, specifically its potential for the future of smart cities. The authors examine the challenges of virtual engagement in civic activities and propose a framework for better participation of people with disabilities in future smart communities.
- Camburn, B., et al. (2020). Machine learning-based design concept evaluation. Journal of Mechanical Design, 142(3), 1-15. https://doi.org/10.1115/1.4045126
- The authors propose a machine learning method for evaluating engineering design concepts that first extracts data and then calculates metrics to assign each concept a creativity rating. They empirically test this system on crowd-sourced design concepts, comparing the algorithm’s ratings to those of human expert raters. The differences between human- and algorithm-selected designs highlight potential human bias in concept selection.
- Carpo, M. (2017).* The second digital turn: Design beyond intelligence. MIT Press.
- In this book, Carpo argues that the tools of the first digital turn in architecture, which promoted significant stylistic developments such as the use of curving lines and surfaces, have now given way to a second digital turn that changes the way designers develop ideas. Machine learning has been employed to create extremely complex designs that humans could not conceive on their own.
- Carta, S. (2019). Big data, code and the discrete city: Shaping public realms. Routledge.
- This book provides an overview of the impact of digital technologies on public space and on the actors involved in shaping it, including designers, policymakers, and individual citizens. It presents the idea of a new environment, comprising the physical environment, people, and software, whose elements continually adapt to and mutually influence one another.
- de Waal, M., & Dignum, M. (2017). The citizen in the smart city. How the smart city could transform citizenship. it-Information Technology, 59(6), 263-273. https://doi.org/10.1515/itit-2017-0012
- This article examines the relationship between smart cities and citizenship, introducing three potential smart city visions. The first, the Control Room, treats the city as a collection of infrastructure and services. The second, the Creative City, focuses on local and regional innovation. The third, the Smart Citizens city, deals with the potential of a smart city with an active political and civil community.
- Gunkel, D. J. (2018). Hacking cyberspace. Routledge.
- Gunkel argues that metaphors used to describe new technologies actually inform how those technologies are created. Gunkel develops a view that considers how designers employ discourse in their technological development.
- Gyory, J. T., et al. (2022). Human versus artificial intelligence: A data-driven approach to real-time process management during complex engineering design. Journal of Mechanical Design, 144(2), 1-13. https://doi.org/10.1115/1.4052488
- This paper conducts a study to compare how well an AI agent can manage an engineering team during a complex design procedure. The authors analyze various aspects of the management method of AI compared to humans and conclude that AI can match or slightly surpass human management in terms of performance. The findings suggest that AI and humans can cooperate to improve design procedures.
- Hebron, P. (2017, April 26).* Rethinking design tools in the age of machine learning. Medium. https://medium.com/artists-and-machine-intelligence/rethinking-design-tools-in-the-age-of-machine-learning-369f3f07ab6c
- Hebron examines the widespread availability of technological creative tools that allow individuals to create on a computer or mobile phone. He argues that machine learning should aim to make creative processes easier for human actors, but should not do any creative work itself, in order to preserve human originality.
- Jiao, R., et al. (2021). Design engineering in the age of Industry 4.0. Journal of Mechanical Design, 143(7), 1-25. https://doi.org/10.1115/1.4051041
- This review article proposes the idea of “Design Engineering 4.0” to describe how the engineering design process will evolve in response to Industry 4.0. The authors argue that designing for these new opportunities will require a holistic view of the stakeholders, the product, and the process. They review how engineering design has changed in response to developments such as end-to-end digital integration, data-driven design, and intelligent design automation, among others.
- Johnson, P. A., et al. (2020). Type, tweet, tap, and pass: How smart city technology is creating a transactional citizen. Government Information Quarterly, 37(1), 101414. https://doi.org/10.1016/j.giq.2019.101414
- This article asks whether the use of technology acts as a medium for a transactional relationship between governments and citizens. The authors highlight four models (type, tweet, tap, and pass), using relevant literature and examples to flesh out the concept. They propose that governments consider the impact of a transactional relationship before implementing smart design technology.
- Karan, E., & Asadi, S. (2019). Intelligent designer: A computational approach to automating design of windows in buildings. Automation in Construction, 102, 160-169. https://doi.org/10.1016/j.autcon.2019.02.019
- The process of designing buildings is becoming increasingly computerized. This paper describes a new system, the Intelligent Designer, that is capable of understanding and learning clients’ expectations and generating valid structural designs. The authors demonstrate this approach through a window-design experiment.
- Liu, L., et al. (2019). Toward AI fashion design: An attribute-GAN model for clothing match. Neurocomputing, 341, 156-167. https://doi.org/10.1016/j.neucom.2019.03.011
- This paper highlights a new generative adversarial network (GAN) that can be used to generate clothing matches for fashion design based on clothing attributes such as color, texture, and shape. The authors also contribute a manually collected database of clothing attributes, which the GAN was trained on, and provide experimental results to support the effectiveness of their approach with other state-of-the-art methods.
- Luce, L. (2019).* Artificial intelligence for fashion: How AI is revolutionizing the fashion industry. Apress.
- This reference work provides real life examples of how AI is employed in the fashion industry, and the pain points companies are using AI to address. It provides a guide for designers, managers, and executives on how AI is impacting the field of fashion.
- Mattern, S. (2017).* A city is not a computer. Places Journal. https://placesjournal.org/article/a-city-is-not-a-computer/
- The author critiques the totalizing idea of cities as computers employed by technology companies, arguing that this practice ignores the information provided by urban designers and scholars who have investigated how cities work for decades.
- Mattern, S. (2018).* Databodies in codespace. Places Journal. https://placesjournal.org/article/databodies-in-codespace/
- The author discusses attempts by technology companies, through initiatives such as the Human Project, to quantify the human condition. She criticizes this goal in light of the methodological and ethical risks of allowing private companies access to the amount of personal data such projects require.
- Negroponte, N. (1973).* The architecture machine: Toward a more human environment. MIT Press.
- This book provides a forward-looking and optimistic account of what will occur when genuine human-machine dialogue is achieved and humans are able to work together with AI toward mutual goals. Negroponte uses systems theory philosophy to examine issues that can arise in these relationships.
- O’Donnell, K. M. (2018, March 2).* Embracing artificial intelligence in architecture. AIA. https://www.aia.org/articles/178511-embracing-artificial-intelligence-in-archit
- The author argues that architects should learn about data and its application in order to work towards the incorporation of AI in their field, as development in this area will strengthen the profession.
- Raina, A., et al. (2019). Learning to design from humans: Imitating human designers through deep learning. ASME Journal of Mechanical Design, 141(11), 1-11. https://doi.org/10.1115/1.4044256
- This work posits that combining human problem-solving strategies with AI agents’ ability to access large computational resources will create a synergetic team. Focused on the field of engineering design, the authors propose a two-step framework that learns design strategies and rules from humans and uses deep learning to generate design ideas based on these strategies. They compare AI-designed trusses to human-designed trusses to show that AI agents can learn effective design strategies.
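As a toy illustration of the learning-from-demonstration idea above (not the paper's deep-learning framework): a policy cloned from human state-action pairs can reproduce a simple human design rule. The design states, the human "rule," and the nearest-neighbor policy below are all hypothetical stand-ins.

```python
import numpy as np

# Toy behavioral-cloning sketch: learn a design "policy" from human
# state-action demonstrations, then query it on new design states.
rng = np.random.default_rng(1)

# 60 demonstrations: a 4-feature design state and the human's action
# (0 = leave member as-is, 1 = reinforce member), following a toy rule
# that depends only on the first feature.
demo_states = rng.normal(size=(60, 4))
demo_actions = (demo_states[:, 0] > 0).astype(int)

def cloned_policy(state, states, actions, k=5):
    """Majority vote among the k nearest demonstrated states: a minimal
    stand-in for the learned network in imitation learning."""
    dists = np.linalg.norm(states - state, axis=1)
    nearest = actions[np.argsort(dists)[:k]]
    return int(np.bincount(nearest).argmax())

# Query the cloned policy on a state the human never demonstrated.
action = cloned_policy(np.array([1.5, 0.0, 0.0, 0.0]), demo_states, demo_actions)
```

The nearest-neighbor vote is the simplest possible "clone"; the paper instead trains deep networks on human truss-design sequences, but the structure – demonstrations in, generalized design actions out – is the same.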
- Ranchordás, S. (2020). Nudging citizens through technology in smart cities. International Review of Law, Computers, & Technology, 34(3), 254-276. https://doi.org/10.1080/13600869.2019.1590928
- Several previous works have shown that systematically nudging citizens can improve road safety and reduce night crime, and, when incorporated into smart cities, has the potential to further promote positive civic engagement and sustainability goals. However, these well-intended nudges also raise legal and ethical issues. This paper offers an interdisciplinary approach to analyzing the impact of collecting and using these data to influence the behavior of those in smart cities.
- Retsin, G. (2019). Discrete: Reappraising the digital in architecture. John Wiley & Sons.
- This book discusses the impact of two decades of digital experimentation in architecture, arguing that the digital focus on style and differentiation seems out of touch with a new generation of architects amid a global housing crisis. This book tracks a new body of work that uses digital tools to create discrete parts that can be used toward aims of open-ended and adaptable architecture.
- Ridell, S. (2019). Mediated bodily routines as infrastructure in the algorithmic city. Media Theory, 3(2), 27-62. https://journalcontent.mediatheoryjournal.org/index.php/mt/article/view/92
- The author argues that there is a lack of development in the study of how bodies are mediated in the context of digital urban life. The article examines mediated bodily habits and routines, arguing that they are important to the infrastructure of a smart city.
- Sagredo-Olivenza, I., et al. (2017). Trained behavior trees: Programming by demonstration to support AI game designers. IEEE Transactions on Games, 11(1), 5-14. https://doi.org/10.1109/TG.2017.2771831
- This paper introduces a new method for game designers to develop and test the behaviors of non-player characters in a video game using programming by demonstration and artificial intelligence. The authors present trained behavior trees, an extension of behavior trees – a technique widely used in game AI – that records traces of character behaviors in different scenarios. They combine this with programming by demonstration to allow game designers to fine-tune the expected responses in each situation.
- Samuel, S. (2022, April 14). A new AI draws delightful and not-so-delightful images. Vox. https://www.vox.com/future-perfect/23023538/ai-dalle-2-openai-bias-gpt-3-incentives
- This article raises concerns about the latest image-generation model developed by OpenAI. DALL-E 2 is the company’s latest AI agent, which draws an image from a given text description. The author shows that the model may reinforce stereotypes in its outputs, and points to incentives in the AI industry as one of the obstacles to developing models with thorough ethical evaluation.
- Särmäkari, N., & Vänskä, A. (2021). ‘Just hit a button!’ – Fashion 4.0 designers as cyborgs, experimenting and designing with generative algorithms. International Journal of Fashion Design, Technology and Education. https://doi.org/10.1080/17543266.2021.1991005
- This paper explores the digitization of fashion design and asks how this automation will impact authorship and professional boundaries for fashion designers. The authors focus on two fashion designer case studies to highlight how human design knowledge can be combined with computer generated output during the design process. They discuss two forms of algorithmic fashion design: generative clothing development and AI-aided sketching.
- Serra, G., & Miralles, D. (2021). Human-level design proposals by an artificial agent in multiple scenarios. Design Studies, 76. https://doi.org/10.1016/j.destud.2021.101029
- This paper introduces a language to represent tool designs and develops an evolutionary algorithm that utilizes the language to design novel solutions for a variety of tasks. It is shown that AI can match or even surpass humans in terms of novelty and performance. The authors suggest collaborations of AI and humans using the introduced language as a future direction.
- Song, B., et al. (2020). Toward hybrid teams: A platform to understand human-computer collaboration during the design of complex engineered systems. In Proceedings of the Design Society: DESIGN Conference (Vol. 1, pp. 1551-1560). Cambridge University Press.
- This paper analyzes the key properties of a good design research platform, which is the software used for human-computer collaboration in design procedures. Then, based on the findings of this analysis, a platform is introduced for unmanned aerial vehicle (UAV) design.
- Steenson, M. W. (2017).* Architectural intelligence: How designers and architects created the digital landscape. MIT Press.
- This book provides a historical overview of the overlap between the fields of architectural design and computer science.
- Thomassey, S., & Zeng, X. (Eds.). (2018). Artificial intelligence for fashion industry in the big data era. Springer.
- This book gives an overview of current issues in the fashion industry, such as the suitability of existing AI implementation. Each chapter gives an example of a data-driven AI application to all sectors of the fashion industry, including design, manufacturing, supply chains, and retail.
- Vetrov, Y. (2017, January 3).* Algorithm-driven design: How artificial intelligence is changing design. Smashing Magazine. https://www.smashingmagazine.com/2017/01/algorithm-driven-design-how-artificial-intelligence-changing-design/
- The author argues that designers should utilize artificial intelligence in order to maximize their capabilities and allow themselves to prioritize tasks with ease. To do this, the author recommends that designers support more digital platforms.
- Viros Martin, A., & Selva, D. (2019). From design assistants to design peers: Turning Daphne into an AI companion for mission designers. In AIAA Scitech 2019 Forum. Aerospace Research Central. https://doi.org/10.2514/6.2019-0402
- This paper describes an updated version of Daphne, a virtual assistant for architecting satellite systems, that can proactively: (1) inform users of new design spaces to explore, (2) diversify user searches, and (3) function as a live recommender system to help users modify designs. This paper describes the resulting changes to user interaction and workflow and provides a discussion on the use case scenarios that could best utilize these updates.
- Wang, Z. W. (2020). Real design practice, real design computation. International Journal of Architectural Computing. https://doi.org/10.1177/1478077120958165
- This article presents several case studies in order to investigate the use of computational, design-oriented services in the architecture industry. The article examines the differing opinions on the use of computation in the field, describes the experience of a design firm, and discusses the implications of this case study for the industry. The purpose of this paper is to address the gap between the theoretical implications of computational design and the realities of the architecture business.
- Yigitcanlar, T., et al. (2019). Can cities become smart without being sustainable? A systematic review of the literature. Sustainable Cities and Society, 45, 348-365. https://doi.org/10.1016/j.scs.2018.11.033
- This article investigates the question of whether smart city policy and sustainability outcomes are entwined, by reviewing literature that asserts a limitation on the ability of smart cities to achieve sustainability. The authors argue that cities cannot be smart unless they are designed to be sustainable.
- Zhang, G., et al. (2021). A cautionary tale about the impact of AI on human design teams. Design Studies, 72, 100990. https://doi.org/10.1016/j.destud.2021.100990
- This paper explores the integration of AI technologies into human design teams. The authors found that AI boosted the initial performances of low-performing teams but decreased the performance of high-performing teams.
Chapter 31. AI and the Global South: Designing for Other Worlds (Chinmayi Arun)
- Ajunwa, I. (2020).* The paradox of automation as anti-bias intervention. Cardozo Law Review, 41(5), 1671-1742.
- This article’s central claim is that bias is introduced in the hiring process due to an American legal tradition of deference to employers, especially allowing for such nebulous hiring criterion as “cultural fit.” The author observes the lack of legal frameworks that consider the emerging technological capabilities of hiring tools, which make it difficult to detect disparate impact, and argues for a re-thinking of legal frameworks that take into account both the liability of employers and those of the makers of algorithmic hiring systems who, as brokers, owe a fiduciary duty of care. The author also considers other approaches separate from employment law such as establishing consumer legal protections for job applicants that would mandate their access to the dossier of information consulted by automated hiring systems in making the employment decision.
- Arora, P., & Rangaswamy, N. (2013). Digital leisure for development: Reframing new media practice in the global South. Media, Culture & Society, 35(7), 898-905.
- This article reconsiders the ways in which AI is used in the Global South. In contrast to other regions, where AI is used in part for leisure (e.g., entertainment via social media algorithms), in the Global South AI is often associated with a development goal or with advancing economic growth and efficiency. The authors explain that the advent of technologies, AI or otherwise, should also be considered for the use of leisure, and that this aspect is an equally important consideration for equity in technology.
- Birhane, A. (2020). Algorithmic colonization of Africa. Scripted, 17(2), 389–409. https://doi.org/10.2966/scrip.170220.389
- This article compares large technology corporations from the West to traditional colonialism. The author argues that while early forms of colonialism depended on national governments, algorithmic colonization is now driven by corporations. The previous violent forms of colonialism have now been replaced with technological solutionism, which threatens to undermine the development efforts of African countries. The author also considers the importance of data collection as it relates to this.
- Casilli, A. A. (2017). Digital labor studies go global: Toward a digital decolonial turn. International Journal of Communication, 11, 3934–3954. https://ijoc.org/index.php/ijoc/article/view/6349
- This article argues that the global division of digital labor, where most online platform workers are located in developing countries and most employers are situated in advanced economies, should not be equated with the term “colonialism,” which is meant to describe the efforts of colonial empires to dominate other peoples. The author argues that “coloniality,” a term that originated in Latin American decolonial thinking, describes the power relations remaining in post-colonial societies and is, therefore, more applicable to the current global division of digital labor.
- Couldry, N., & Mejias, U. A. (2019).* Data colonialism: Rethinking big data’s relation to the contemporary subject. Television & New Media, 20(4), 336-349.
- This article proposes that the data relations process is best understood through the history of colonialism. The authors argue that data relations create a new form of data colonialism that normalizes the exploitation of human beings through data, just as historic colonialism appropriated territory and resources and ruled subjects for profit. The authors further argue that data colonialism paves the way for a new stage of capitalism whose outlines can only be glimpsed: the capitalization of life without limit.
- Couldry, N., & Mejias, U. (2019). Making data colonialism liveable: How might data’s social order be regulated? Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1411
- This paper argues that while the modes, intensities, scales, and contexts of dispossession have changed, the underlying drive of today’s data colonialism remains the same: to acquire “territory” and resources from which economic value can be extracted by capital. The authors assert that injustices embedded in this system need to be made “liveable” through a new legal and regulatory order.
- Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press.
- This book outlines what the authors call “data colonialism.” Data colonialism makes two assertions: 1) that our everyday ‘data relations’ (the emerging social form through which data colonialism becomes stabilized) result in appropriation on a form and scale that mirrors processes of historical colonialism, and 2) that this new colonialism is propelled by a combination of the imperatives of capitalism and radical changes to our communications infrastructures. The authors acknowledge that ‘historical’ colonialism continues to endure, and that the physical violence associated with historical colonialism is not reflected in data colonialism. They suggest, however, that colonialism’s enduring function (enacting illegitimate appropriation and exploitation by redefining human relations so that dispossession seems natural) can only really be grasped through the lens of coloniality, which captures the particular structural changes associated with its digital form.
- Crawford, K., & Joler, V. (2018). Anatomy of an AI system. https://anatomyof.ai/img/ai-anatomy-publication.pdf
- The authors analyze the labor and natural resources necessary for the development of artificial intelligence using the Marxian dialectic of economic subject and object. They identify three moments in this process: creating devices to support AI technologies, the internet infrastructures that collect the data necessary for AI, and the disposal of these devices.
- Crawford, K. (2021). The atlas of AI. Yale University Press.
- The author argues that artificial intelligence is an extractivist technology. The author describes how AI requires vast amounts of natural resources and an extraordinary amount of labor, largely from workers in precarious conditions, to operate while harvesting data from millions of individuals worldwide. The author argues that, far from being an objective or neutral technology, AI currently serves primarily the interests of big corporations.
- Dirlik, A. (2007).* Global South: Predicament and promise. The Global South, 1(1), 12-23.
- This essay explores possibilities for the establishment of a new global order where the Global South may play a central part. The author traces the emergence of the concept of the Global South historically, with special attention to its antecedents in the popular term of the 1960s and 1970s, “Third World.” The author suggests that while the “Third World” is no longer a viable concept, geopolitically or as a political project, it may still provide present inspiration for similar projects that may render the Global South into a force in the reconfiguration of global relations.
- Escobar, A. (2018). Designs for the pluriverse: Radical interdependence, autonomy, and the making of worlds. Duke University Press.
- The author argues for the decolonization of design through collaborative practices that are place-based, resist dependence on markets, and are more accountable to the needs of communities. The author positions design as an ontological “praxis of world-making” and argues that a significant shift is needed from design as a functionalist, rationalist, and industrial tradition, to one that is relational and appropriated by subaltern communities. Design for the “pluriverse” is an approach which embraces difference and becomes a tool for reimagining and reconstructing local worlds.
- Georgiou, M. (2019). City of refuge or digital order? Refugee recognition and the digital governmentality of migration in the city. Television & New Media, 20(6), 600-616.
- This article analyzes the digital governmentality of the city of refuge, arguing that digital infrastructures support refugees’ new life in the European city while also normalizing the conditionality of their recognition as humans and as citizens-in-the-making. The author argues that a digital order requires a ‘performed refugeeness’ as a precondition for recognition, meaning a swift move from abject vulnerability to resilient individualism.
- Graham, M., & Foster, C. (2017). Reconsidering the role of the digital in global production networks. Global Networks, 17(1), 68–88. https://doi.org/10.1111/glob.12142
- This paper proposes an update to the literature on global production networks (GPN) to explain the integration of digital information with communication technologies in global production. The authors review three main categories of GPN literature: embeddedness, value, and networks. They propose expanding the GPN literature to encompass network diversity and infrastructures, digitally-driven shifts in governance, and the power of non-human actors.
- Graham, M., et al. (2018). Could online gig work drive development in lower-income countries? In H. Galperin & A. Alarcon (Eds.), Future of work in the global south (pp. 8–11). International Development Research Centre. https://ora.ox.ac.uk/objects/uuid:8a414783-5df5-45ad-8ac4-fb921bab9e15
- In this policy report, researchers from the Oxford Internet Institute analyze the potential impact of online gig work on international development. The authors argue that regulating the international labor market is challenging. However, without any sort of regulation, the market, as it is, creates precarious forms of employment that could produce harm, especially for individuals from vulnerable populations. The authors propose targeting regulations for the handful of countries that request this type of labor, usually countries with advanced economies.
- Hagerty, A., & Rubinov, I. (2019). Global AI ethics: A review of the social impacts and ethical implications of artificial intelligence. arXiv:1907.07892
- This article calls for rigorous ethnographic research to better understand the social impacts of AI around the world. Global, on-the-ground research is particularly critical to identify AI systems that may amplify social inequality in order to mitigate potential harms. The authors argue that a deeper understanding of the social impacts of AI in diverse social settings is a necessary precursor to the development, implementation, and monitoring of responsible and beneficial AI technologies, and forms the basis for meaningful regulation of these technologies.
- Hicks, J. (2020). Digital ID capitalism: How emerging economies are re-inventing digital capitalism. Contemporary Politics, 26(3), 330–350. https://doi.org/10.1080/13569775.2020.1751377
- This article adds to the literature on digital capitalisms by introducing a new state-led model called ‘digital ID capitalism’. Describing how the system works in India, the author explains how businesses make money from the personal data collected and draws some of its elements into traditional political economy concerns with the relationships between state, business, and labor.
- Irani, L., et al. (2010). Postcolonial computing: A lens on design and development. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1311–1320.
- This article describes an alternative sensibility to ‘design for development’, which asserts a series of questions and concerns inspired by the conditions of post-coloniality. In particular, the authors’ approach involves recognition of four key shifts: generative models of culture, development as a historical program, uneven economic relations, and cultural epistemologies. The authors suggest that this approach is better suited to acknowledging how all design research and practice is culturally located and power laden, and that seeing how design is culturally specific allows designers to broaden notions of what counts as “good design.”
- Kwet, M. (2019). Digital colonialism: US empire and the new imperialism in the Global South. Race & Class, 60(4), 3-26.
- This article proposes a conceptual framework for how the United States is reinventing colonialism in the Global South through digital technology. Using South Africa as a case study, the author argues that US multinationals exercise imperial control at the architecture level of the digital ecosystem: software, hardware and network connectivity, which then gives rise to related forms of domination.
- Madianou, M. (2019). Technocolonialism: Digital innovation and data practices in the humanitarian response to refugee crises. Social Media and Society, 5(3), 1-13.
- This article introduces the concept of “technocolonialism” to capture how the convergence of digital developments with humanitarian structures and market forces reinvigorates and reshapes colonial relationships of dependency. The author argues that the concept of technocolonialism shifts the attention to the constitutive role that data and digital innovation play in entrenching power asymmetries between refugees and aid agencies and ultimately inequalities in the global context.
- Madianou, M. (2019). The biometric assemblage: Surveillance, experimentation, profit, and the measuring of refugee bodies. Television & New Media, 20(6), 581-599.
- This article analyzes biometrics, artificial intelligence (AI), and blockchain as part of a technological assemblage, which the author names ‘the biometric assemblage.’ The author argues that the biometric assemblage accentuates asymmetries between refugees and humanitarian agencies and ultimately entrenches inequalities in a global context.
- Mahler, A. G. (2017).* Beyond the colour curtain. In K. Bystrom & J. R. Slaughter (Eds.), The Global South Atlantic (pp. 99-123). Fordham University Press.
- This essay traces the roots of the contemporary notion of the Global South to the ideology of an influential but largely forgotten Cold War alliance of liberation movements from Africa, Asia, and Latin America called the Tricontinental. The author argues that tricontinentalism, the ideology disseminated among the international radical Left through the Tricontinental’s expansive cultural production, revised a specifically black Atlantic resistant subjectivity into a global vision of subaltern resistance that is resurfacing in contemporary horizontalist approaches to cultural criticism such as the Global South. In this way, the author proposes the Global South Atlantic as a particularly useful paradigm that not only recognizes the black Atlantic foundations of the Global South but also holds contemporary solidarity politics accountable to these intellectual roots.
- Milan, S., & Treré, E. (2019).* Big Data from the South(s): Beyond data universalism. Television & New Media, 20(4), 319-335.
- This article introduces the tenets of a theory of datafication, calling for a de-Westernization of critical data studies, in view of promoting a reparation to the cognitive injustice that fails to recognize non-mainstream ways of knowing the world through data. The authors situate the “Big Data from the South” research agenda as an epistemological, ontological, and ethical program and outline five conceptual operations to shape this agenda.
- Mohamed, S., et al. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33(4), 659–684. https://doi.org/10.1007/s13347-020-00405-8
- This paper presents an overview of several decolonial theories, studying the historical forces behind current power relations and the situation of postcolonial societies. The authors analyze how artificial intelligence algorithms perpetuate some of these power relations at the expense of marginalized communities and to the benefit of corporations and their interests.
- Ricaurte, P. (2019).* Data epistemologies, the coloniality of power, and resistance. Television & New Media, 20(4), 350-365.
- This article develops a theoretical model to analyze the coloniality of power through data and explores the multiple dimensions of coloniality as a framework for identifying ways of resisting data colonization. The author suggests possible alternative data epistemologies that are respectful of populations, cultural diversity, and environments.
- Richardson, R., et al. (2019).* Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice. New York University Law Review Online, 192(94), 192–233.
- The authors analyze thirteen jurisdictions that have used or developed predictive policing tools while under government commission investigations or federal court-monitored settlements, consent decrees, or memoranda of agreement stemming from corrupt, racially biased, or otherwise illegal policing practices. In particular, the authors examine the link between unlawful and biased police practices and the data available to train or implement these systems. The authors argue that deploying predictive policing systems in jurisdictions with extensive histories of unlawful police practices presents elevated risks that dirty data will lead to flawed or unlawful predictions, which in turn risk perpetuating additional harm via feedback loops throughout the criminal justice system.
- Santos, B. D. S. (2016).* Epistemologies of the South and the future. From the European South: A Transdisciplinary Journal of Postcolonial Humanities, 1, 17-29. http://europeansouth.postcolonialitalia.it/journal/2016-1/3.2016-1.Santos.pdf
- This article puts forward epistemologies of the South, which rest on the idea that current theoretical thinking in the global North has been based on the idea of an abyssal line. The author proposes a definition of ‘epistemologies of the South,’ arguing that a crucial epistemological transformation is required in order to reinvent social emancipation on a global scale, evoking plural forms of emancipation not simply based on a Western understanding of the world.
- Sambasivan, N., & Holbrook, J. (2018). Toward responsible AI for the next billion users. Interactions, 26(1), 68–71. https://doi.org/10.1145/3298735
- This article seeks to highlight the important considerations to be made when designing systems that use AI in the Global South. The authors explore the ethical considerations that need to be made as well as the possible technical and social constraints of which designers should be mindful.
- Segura, M. S., & Waisbord, S. (2019). Between data capitalism and data citizenship. Television & New Media, 20(4), 412-419.
- This article argues that datafication and opposition to datafication in the South do not develop exactly as in the North, given huge political, economic, social, and technological differences in the context of the expansion of digital capitalism. The authors analyze dimensions of data activism in Latin America, discuss the Global South as the site of counter-epistemic and alternative practices, and question whether the concept of “data colonialism” adequately captures the dynamics of the digital society in areas of well-entrenched digital divides.
- Shokooh Valle, F. (2020). Turning fear into pleasure: Feminist resistance against online violence in the global south. Feminist Media Studies. https://doi.org/10.1080/14680777.2020.1749692
- This essay argues that feminist strategies of contestation to online violence in the Global South embody decolonial thought by re-appropriating and fostering the right of marginalized communities to express sexual pleasure online. The author asserts that activists problematize online violence through two main strategies: first, by anchoring themselves in a southern epistemology that makes explicit the connections between gender-based online violence and broader sociotechnical, historical, and political contexts; and, second, by using activism against online violence, including threats of violence, to advocate for novel forms of online sexual agency and pleasure. Finally, the author describes how feminist activists reimagine a technological future that is truly emancipatory.
- Sun, Y., & Yan, W. (2020). The power of data from the global south: Environmental civic tech and data activism in China. International Journal of Communication, 14(19), 2144-2162.
- This article explores how an established environmental nongovernmental organization, the Institute of Public and Environmental Affairs (IPE), engaged in data activism around a civic tech platform in China, expanding the space for public participation. By conducting participatory observation and interviews, along with document analysis, the authors describe three modes of data activism that represent different mechanisms of civic oversight in the environmental sphere.
- Taylor, L., & Broeders, D. (2015).* In the name of development: Power, profit and the datafication of the global South. Geoforum, 64, 229-237. http://dx.doi.org/10.1016/j.geoforum.2015.07.002
- This article identifies two trends in the datafication process underway in low- and middle-income countries (LMICs): first, the empowerment of public–private partnerships around datafication in LMICs and the consequently growing agency of corporations as development actors. Second, the way commercially generated big data is becoming the foundation for country-level ‘data doubles’, i.e. digital representations of social phenomena and/or territories that are created in parallel with, and sometimes in lieu of, national data and statistics. The authors explore the resulting shift from legibility to visibility and the implications of seeing development interventions as a by-product of large-scale processes of informational capitalism.
- West, S. M., et al. (2019).* Discriminating systems: Gender, race, and power in AI. AI Now Institute. https://ainowinstitute.org/discriminatingsystems.html
- This report argues that there is a diversity crisis in the artificial intelligence (AI) industry, and that a profound shift is needed to address this crisis. The authors put forward eight recommendations for improving workplace diversity and four recommendations for addressing bias and discrimination in AI systems.
- Zhang, W., & Neyazi, T. A. (2020). Communication and technology theories from the South: The cases of China and India. Annals of the International Communication Association, 44(1), 34-49.
- Using China and India as case studies, this paper advances three theoretical insights: firstly, the state-society relationship shapes communication technologies; secondly, the increasing pluralization or hybridity of cyberspace shapes how communication technologies are used; and lastly, it is the quest for finding oneself (or selves) in a Chinese/Indian modernity that could provide references to other contexts.
Chapter 32. Perspectives and Approaches in AI Ethics: East Asia (Danit Gal)
- BAAI. (2019, May 28).* Beijing AI principles. https://baip.baai.ac.cn/en
- This document provides context for the principles proposed as guidelines and initiatives for the research, development, use, governance, and long-term planning of AI in Beijing, China.
- Baffelli, E. (2021). The android and the fax: Robots, AI and Buddhism in Japan. In G. Bulian & S. Rivadossi (Eds.), Itineraries of an anthropologist (pp. 249-263). Venice University Press.
- This chapter explores how AI and robotics interact with Japanese Buddhism through the example of Mindar, a robotic manifestation of the bodhisattva Kannon. Among other readings in its scripted performance to visitors, Mindar communicates the emptiness that is inherent to robots (which brings it closer to Buddhahood) as well as the compassion that they lack (a limitation). The author remarks that Japanese visitors tend to perceive Mindar more positively as they are more “machine-loving”, and that its proponents hope Mindar is a step towards Buddhist rituals freed of human flaws.
- Berberich, N., et al. (2020). Harmonizing artificial intelligence for social good. Philosophy & Technology, 33(4), 613-638. https://doi.org/10.1007/s13347-020-00421-8
- This article discusses harmony from different perspectives, with special emphasis on social harmony which is central to East Asian culture. It then explains the challenge of creating harmonizing artificial intelligence (AI) that is both tactful and contributes to social good. Finally, it argues that harmony provides novel perspectives and should be an ethical core principle for AI.
- Carrillo, M. R. (2020). Artificial intelligence: From ethics to law. Telecommunications Policy, 44(6). https://doi.org/10.1016/j.telpol.2020.101937
- This paper discusses the main normative and ethical challenges imposed by the advancement of artificial intelligence. In particular, the author examines the effect on law and ethics created by increasing connectivity and symbiotic interaction among humans and intelligent machines.
- Chen, B., et al. (2020). Containing COVID-19 in China: AI and the robotic restructuring of future cities. Dialogues in Human Geography, 10(2), 238-241. https://doi.org/10.1177/2043820620934267
- Motivated by the COVID-19 pandemic, this paper explores China’s use of robots and AI to ensure physical distancing and enforce quarantines in its cities. The authors also provide discussion on the future impact of such autonomous systems on urban bio-(in)security.
- China Institute for Science and Technology Policy at Tsinghua University. (2018).* China AI Development Report 2018. http://www.sppm.tsinghua.edu.cn/eWebEditor/UploadFile/China_AI_development_report_2018.pdf
- This document, published by the China Institute for Science and Technology Policy (CISTP) within Tsinghua University in Beijing, China, aims to provide a comprehensive picture of AI development in China and in the world at large, with a view to increasing public awareness, promoting AI industry development, and informing policymaking.
- Cui, D., & Wu, F. (2021). The influence of media use on public perceptions of artificial intelligence in China: Evidence from an online survey. Information Development, 37(1), 45-57. https://doi.org/10.1177/0266666919893411
- This paper reports a survey studying how the use of different Chinese media leads to positive or negative perceptions of artificial intelligence (AI) in China. It finds that consumption of Chinese media, whose agenda is controlled by an AI-friendly government, generally leads to more positive perceptions of AI. Notable exceptions are that newspaper use leads to more negative perceptions of AI, while high personal relevance of AI to participants partly mitigates the influence of media.
- Dekle, R. (2020). Robots and industrial labor: Evidence from Japan. Journal of the Japanese and International Economies, 58, 101108. https://doi.org/10.1016/j.jjie.2020.101108
- This study explores the impact of robots on the Japanese labor force. The author found that robots have a negative impact through the displacement of human tasks, a positive effect on industry productivity, and an overall positive macroeconomic impact on Japanese labor demand.
- Ema, A. (2018).* EADv2 regional reports on A/IS ethics: Japan. The Ethics Committee of the Japanese Society for Artificial Intelligence. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/eadv2_regional_report.pdf
- This document, compiled by the Institute of Electrical and Electronics Engineers (IEEE), consists of reports describing regional attitudes and actions in the field of artificial intelligence.
- Frumer, Y. (2018). Cognition and emotions in Japanese humanoid robotics. History and Technology, 34(2), 157-183.
- This paper analyses the creation of humanoid robots, the phenomenon of the ‘uncanny valley’, and current research to overcome the ‘uncanny’ nature of humanoid robots. It argues that the development of humanoid robotics in Japan was driven by concern with human emotion and cognition, and shaped by Japanese roboticists’ own associations with the social and intellectual environments of their time.
- Fung, P., & Etienne, H. (2021). Confucius, cyberpunk and Mr. Science: Comparing AI ethics between China and the EU. arXiv:2111.07555.
- This arXiv preprint finds that Chinese artificial intelligence (AI) ethics is more optimistic, objective-oriented, and trusting of the state than AI ethics in the EU. It observes that China emphasizes a harmonious society and guidance from a virtuous government, while the EU attempts to assuage individual skepticism of AI with protective rules. The paper concludes that the AI ethics of China and the EU are complementary and could be adopted together.
- Ghotbi, N., et al. (2021). Attitude of college students towards ethical issues of artificial intelligence in an international university in Japan. AI & Society, 37, 283-290. https://doi.org/10.1007/s00146-021-01168-2
- The authors conducted a survey to understand college students’ perceptions of artificial intelligence at an international university in Japan. They found that the most significant ethical issue for students was the impact of AI on unemployment. The second most pressing ethical issue was AI’s impact on human behavior. The authors use the results of the study to call on Japan’s policymakers to consider ways to reduce the negative impact of AI on employment and promote greater emotional intelligence in the development of AI systems.
- Gould, H., & Walters, H. (2020). Bad Buddhists, good robots: Techno-salvationist designs for nirvana. Journal of Global Buddhism, 21, 277-294. http://dx.doi.org/10.5281/zenodo.4147487
- This article examines the use of AI and robotics to address human failings in contemporary Buddhism. It highlights how robots can meaningfully replace humans in Buddhist ritual acts, acting as a potential source of labor to combat rising secularism and an aging population in Japan. The article also examines the case of Lotos, a US startup which promises less corrupt and more egalitarian Buddhism with blockchain technology.
- Huang, R., et al. (2021). De-centering the West: East Asian philosophies and the ethics of applying artificial intelligence to music. In Proceedings of the 22nd International Society for Music Information Retrieval Conference (pp. 301-309). ISMIR. https://doi.org/10.5281/zenodo.5624543
- This paper studies East Asian ethical guidelines for AI in music, involving the topics of music retrieval, analysis, and generation. It does so by relating philosophies from Confucianism, Buddhism, Shintoism, and Daoism to East Asian music ecosystems. By showing how seemingly universal values are culture-specific, the paper emphasizes the importance of intercultural perspectives on AI in music.
- Hwang, H., & Park, M. H. (2020). The threat of AI and our response: The AI charter of ethics in South Korea. Asian Journal of Innovation and Policy, 9(1), 56-78. https://doi.org/10.7545/ajip.2020.9.1.056
- This article describes Korea’s response to the risks created by the use of AI based on the AI Charter of Ethics (AICE) protocol. This paper identifies seven threats that AI poses for Korean society, sorted into three categories: AI’s value judgment, malicious use of AI, and human alienation. The authors also evaluate responses to these threats which they categorize using three themes: protection of social values, AI control, and fostering digital citizenship. The authors found a gap in the Korean response to AI when it comes to the threat of AI taking over human occupations and the use of AI weaponry for military power.
- Intelligent robots development and distribution promotion act. (Act No. 9014, Mar. 28, 2008, Amended by Act No. 9161, Dec. 19, 2008).* Statutes of the Republic of Korea. http://elaw.klri.re.kr/eng_mobile/viewer.do?hseq=17399&type=sogan&key=13
- This statute codifies the South Korean government’s outlook on intelligent robots and artificial intelligence, and sets in place guidelines for future development in the field.
- Jobin, A., et al. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.
- This paper explores the debate concerning what constitutes “ethical AI” and which ethical requirements, technical standards, and best practices are needed for its realization. The authors find a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility, and privacy). However, there is substantive divergence in how these principles are interpreted; why they are deemed important; what issue, domain, or actors they pertain to; and how they should be implemented.
- Knox, J. (2020). Artificial intelligence and education in China. Learning, Media and Technology, 45(3), 298-311. https://doi.org/10.1080/17439884.2020.1754236
- This article analyzes the roles of government policy and private-sector enterprises concerning AI in the context of education in China. The author found that while central government policy still maintains a significant influence on the use of AI in education, conditions are favorable for the private sector to develop applications and begin to dominate AI education markets.
- Kovacic, M. (2018). The making of national robot history in Japan: Monozukuri, enculturation and cultural lineage of robots. Critical Asian Studies, 50(4), 572-590. https://doi.org/10.1080/14672715.2018.1512003
- This article discusses Japanese corporate and governmental strategies and mechanisms that are shaping a national robot culture through establishing robot “lineages” and a national robot history which can have significant implications for both humans and robots.
- Lee, K. J., & Kim, E. Y. (2020). The role and effect of artificial intelligence (AI) on the platform service innovation: The case study of Kakao in Korea. Knowledge Management Research, 21(1), 175-195. https://doi.org/10.15813/kmr.2020.21.1.010
- This paper investigates the use of AI in platform services and its impact on business performance in Korea. The authors conducted an empirical study of the Kakao group, focusing on three subsidiary platforms: the chatbot agent of Kakao Bank, the smart call service of Kakao Taxi, and the music recommendation system of Kakao Melon. They found that these AI-driven platform services have significantly decreased transaction costs and enabled personalized services.
- Lim, T. W. (2019). North Korea’s artificial intelligence (AI) program. North Korean Review, 15(2), 97-103. https://www.jstor.org/stable/26915828
- This paper comments on North Korea’s significant AI talent, which has produced technologies like the renowned Go-playing algorithm Eun-Byul and home applications with voice recognition. It also finds that the North Korean AI program has the distinct goals of creating a source of national pride and praise for the Supreme Leader. The essay concludes that AI development is possible but severely hampered by the insularity and heavy international sanctions that deny North Koreans access to heavy computing power, technical expertise, and big data.
- Mao, Y., & Shi-Kupfer, K. (2021). Online public discourse on artificial intelligence and ethics in China: Context, content, and implications. AI & Society, 1-17.
- This article studies ethics of artificial intelligence (AI) discourse on the Chinese social media platforms WeChat and Zhihu. It observes that WeChat contains expert policy recommendations for more state control, while Zhihu exhibits public anxiety towards adapting and competing with AI in the job market. The article also finds that although Chinese discussions are generally well-informed, they lack enthusiasm for international cooperation on the ethics of AI.
- Obayashi, K., et al. (2020). Can connected technologies improve sleep quality and safety of older adults and care-givers? An evaluation study of sleep monitors and communicative robots at a residential care home in Japan. Technology in Society, 62, 101318. https://doi.org/10.1016/j.techsoc.2020.101318
- This study explores the use of an assistive technology that is connected to a communicative robot to monitor the sleep quality and safety of older adults. The system was then evaluated in a study with both older adults and caregivers at a nursing home in Japan.
- Otsuki, G. J. (2019). Frame, game, and circuit: Truth and the human in Japanese human-machine interface research. Ethnos. https://doi.org/10.1080/00141844.2019.1686047
- This essay tracks the ‘human’ emergent in human-centered technologies (HCTs) in Japan. It argues that all HCTs treat humans as systems of information, and that the right machine can approach humanity closely enough to fulfill even the most human of responsibilities.
- Park, Y. R., & Shin, S. Y. (2017). Status and direction of healthcare data in Korea for artificial intelligence. Hanyang Medical Reviews, 37(2), 86-92.
- This paper argues that in the context of medical AI, the general approach that accumulates massive amounts of data based on existing big data concepts cannot provide meaningful results in the healthcare field. Thus, the authors argue that well-curated data is required in order to provide a successful combination of AI and medical care.
- Peters, D., et al. (2020). Responsible AI—Two frameworks for ethical design practice. IEEE Transactions on Technology and Society, 1(1), 34-47.
- This paper presents two complementary frameworks for integrating ethical analysis into engineering practice. The frameworks address the challenge posed by the unintended consequences of artificial intelligence (AI), a challenge compounded by the lack of an anticipatory process for attending to ethical impact within professional practice.
- Roberts, H., et al. (2021). The Chinese approach to artificial intelligence: An analysis of policy and regulation. AI & Society, 36, 59–77. http://dx.doi.org/10.2139/ssrn.3469784
- Through a compilation of debates and analyses of Chinese policy documents, this paper investigates the socio-political background and policy debates that are shaping China’s AI strategy. There is a focus on China’s main strategic areas for AI investment and the concurrent ethical debates that are delimiting its use.
- Robertson, J. (2018).* Robo sapiens japanicus: Robots, gender, family, and the Japanese nation. University of California Press.
- Through an analysis of press releases and public relations videos, this book provides an academic account of human-robot relations in Japan. It argues that robots in Japan (humanoids, androids, and animaloids) are “imagineered” in ways that reinforce the conventional sex/gender system and the political-economic status quo.
- Robertson, J. (2018). Robot reincarnation: Rubbish, artefacts, and mortuary rituals. In K. J. Cwiertka & E. Machotka (Eds.), Consuming Life in Post-Bubble Japan: A Transdisciplinary Perspective (pp. 153-173).
- This chapter explores the phenomenon of funerals for robots and computers in Japan. It explains that robot reincarnation and funerals are justified by Japanese Buddhist traditions, which deem all objects and entities as spiritual and part of the reincarnation process. The article further observes that these relatively new practices fuel and reinvigorate the Japanese religious service industries.
- Sethu, S. G. (2019). The inevitability of an international regulatory framework for artificial intelligence. In 2019 International Conference on Automation, Computational and Technology Management (ICACTM) (pp. 367-372). IEEE. https://doi.org/10.1109/ICACTM.2019.8776819
- This paper highlights issues surrounding the manufacture and functioning of autonomous weapons, specifically Lethal Autonomous Weapons Systems (LAWS), to establish the need for an international regulatory framework for artificial intelligence.
- Sparrow, R. (2019). Robotics has a race problem. Science, Technology, & Human Values, 45(3), 538-560.
- This article presents research that shows people are inclined to attribute race to humanoid robots, resulting in an ethical problem that designers of social robots must confront. Thus, the author argues that the only way engineers might avoid this dilemma is to design and manufacture robots to which people will struggle to attribute race. Notably, this would require rethinking the relationship between robots and “the social,” which sits at the heart of the project of social robotics.
- The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2019).* Classical Ethics in A/IS. In Ethically Aligned Design (pp. 36-67). https://standards.ieee.org/industry-connections/ec/autonomous-systems.html
- This document released by the Institute of Electrical and Electronics Engineers (IEEE) is a crowdsourced global treatise for ethical development in Artificial and Intelligent Systems. The chapter Classical Ethics in A/IS draws from classical ethical principles to outline guidelines and limitations on AI systems.
- Weng, Y. H., et al. (2019). The religious impacts of Taoism on ethically aligned design in HRI. International Journal of Social Robotics, 11(5), 829-839. https://doi.org/10.1007/s12369-019-00594-z
- This paper explores the increasing importance of assessing robot applications and deployment in countries with different cultural backgrounds, focusing on the intersection of religion and automation. It aims to analyze what impact Taoism may have on the use of ethically aligned design in future human–robot interaction.
- Wu, F., et al. (2020). Towards a new generation of artificial intelligence in China. Nature Machine Intelligence, 2(6), 312-316. https://doi.org/10.1038/s42256-020-0183-4
- This article introduces the New Generation Artificial Intelligence Development Plan of China (2015–2030), which outlines the country’s strategy for using technology in science and education. The plan also identifies challenges in talent retention, fundamental research advancement, and ethical implications. The authors assert that the plan is intended as a blueprint for a future AI ecosystem in China.
- Wu, W., et al. (2020). Ethical principles and governance technology development of AI in China. Engineering, 6(3), 302-309. https://doi.org/10.1016/j.eng.2019.12.015
- This article surveys efforts towards the development of AI ethics and governance in China. It highlights the preliminary outcomes of these efforts and describes the major research challenges that lie ahead in AI governance.
- Yang, X. (2019). Accelerated move for AI education in China. ECNU Review of Education, 2(3), 347-352. https://doi.org/10.1177/2096531119878590
- This paper reviews several key policies put forward by the Chinese government in order to analyze recent efforts to promote education on AI. The author found that AI education is already prevalent in many areas of the education system, starting at the elementary level and becoming more robust at the senior level in civic education curriculums.
- Yoo, J. (2015).* Results and outlooks of robot education in Republic of Korea. Procedia-Social and Behavioral Sciences, 176, 251-254. https://doi.org/10.1016/j.sbspro.2015.01.468
- This paper explores the consequences of introducing robotics into the South Korean education system from elementary through high school, compared to the later, post-secondary introduction in the United States and Japan. The author then evaluates the results of this policy in the context of future prospects in South Korea, arguing that this early introduction gives South Korea a head start in the robotics industry.
- Zeng, Y., et al. (2018).* Linking artificial intelligence principles. arXiv:1812.04814
- This paper argues that although Artificial Intelligence principles define social and ethical considerations for developing future AI, multiple versions of AI principles exist, with different considerations covering different perspectives and placing different emphases. Thus, the authors propose Linking Artificial Intelligence Principles (LAIP), an effort and platform for linking and analyzing different Artificial Intelligence Principles.
- Zhang, B. T. (2016). Humans and machines in the evolution of AI in Korea. AI Magazine, 37(2), 108-112.
- This article recounts the evolution of AI research in Korea, and describes recent activities in AI, along with governmental funding circumstances and industrial interest.
- Zhu, Q., et al. (2020). Blame-laden moral rebukes and the morally competent robot: A Confucian ethical perspective. Science and Engineering Ethics, 26(5), 2511-2526.
- This article argues that robots should rebuke humans for violations of shared norms. It explains how, from a Confucian perspective, such human-robot interaction is important for the morality of both humans and robots. It then discusses how Confucian role-based ethics could inform the design of such socially integrated and morally competent robots.
Chapter 33. Artificial Intelligence and Inequality in the Middle East: The Political Economy of Inclusion (Nagla Rizk)
- Access Partnership. (2018).* Artificial intelligence for Africa: An opportunity for growth, development, and democratisation. https://www.accesspartnership.com/artificial-intelligence-for-africa-an-opportunity-for-growth-development-and-democratisation/
- This report argues that the development of artificial intelligence technologies can solve problems that impact Sub-Saharan African countries, providing growth and development in areas such as agriculture, healthcare, and public service.
- Ahmed, S. M. (2019). Artificial intelligence in Saudi Arabia: Leveraging entrepreneurship in the Arab markets. In 2019 Amity International Conference on Artificial Intelligence (AICAI) (pp. 394-398). Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/AICAI.2019.8701348
- This paper focuses on efforts toward economic diversification in Saudi Arabia. AI has been embraced by many established industry sectors in Saudi Arabia, such as banking and finance, but the author suggests AI is poised for growth in the Saudi start-up ecosystem. The author argues that fostering AI entrepreneurship will promote economic diversification, create wealth, and catalyze social change in Saudi Arabia and the Middle East.
- AI Now Institute, New York University. (2018).* AI Now Report 2018. https://ainowinstitute.org/AI_Now_2018_Report.pdf
- The 2018 AI Now Institute report focuses on five key issues. First, the accountability gap in AI, which favors AI producers rather than the people these technologies are used against. Second, how AI is used to increase surveillance, such as the increased use of facial recognition. Third, government use of emerging technology without pre-existing accountability frameworks. Fourth, the lack of regulation of AI experimentation on human subjects. Fifth, the failure of current solutions to address fairness, bias, and discrimination.
- Al-Din, S. G. (2021). Implications of the fourth industrial revolution on women in information and communications technology: In-depth analysis on the future of work. Egyptian National Council for Women. http://en.enow.gov.eg/Report/133.pdf
- The author flags both the economic potential and risks that are expected to accompany the automation of low-skilled labor in Egypt. They note that despite relatively high levels of access to education and health services, women are more likely to work in sectors that are at risk of automation. With this in mind, they propose building public awareness of these gendered risks among Egyptians, investing in education as well as entrepreneurship opportunities for women, and expanding the social security system with the gendered impacts of automation in mind.
- Al-Eisawi, D. (2020). A framework for responsible research and innovation in new technological trends towards MENA region. In 2020 IEEE International Conference on Engineering, Technology and Innovation (ICE/ITMC), (pp. 1–8). Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/ICE/ITMC49519.2020.9198506
- While responsible research and innovation (RRI) has attracted significant attention in Europe, the author suggests that the framework requires conceptual expansion to become meaningful for research in the Middle East and North Africa (MENA) region. Drawing upon interviews with technology researchers, the author presents a grounded theory framework for RRI in the MENA region with emphasis on access, education, ethics, and engagement for the promotion of innovation that is rights-respecting and inclusive in the region.
- Al-Roubaie, A., & Alaali, M. (2020). The fourth industrial revolution: Challenges and opportunities for MENA region. In A. E. Hassanien, A. Azar, T. Gaber, D. Oliva, & F. Tolba (Eds.), Joint European-US Workshop on Applications of Invariance in Computer Vision (pp. 672-682). Springer. https://doi.org/10.1007/978-3-030-44289-7_63
- The authors argue that the disruptions caused by artificial intelligence are deepening unemployment and inequality in the Middle East and North Africa region. They show that automation stands to deepen existing economic imbalances both between and within the region’s economies. They suggest that investments in digital government, research and development, and education should be made to promote the development of an inclusive digital economy.
- Arezki, R., et al. (2018).* Middle East and North Africa economic monitor, Spring 2018: Economic Transformation. The World Bank. https://openknowledge.worldbank.org/bitstream/handle/10986/30436/9781464813672.pdf?sequence=11&isAllowed=y
- This report examines the development and use of a digital economy in the Middle East and North Africa region, discussing how it would create jobs for millions of unemployed young people in the coming years. To do this, the authors argue the MENA region must move away from its focus on manufacturing exports, and instead take advantage of the region’s educated youth population, encouraging innovation and entrepreneurship.
- Badran, M. (2019). Bridging the gender digital divide in the Arab Region. International Development Research Centre. https://www.researchgate.net/profile/Mona-Badran-2/publication/330041688_Bridging_the_gender_digital_divide_in_the_Arab_Region/links/5c2b725b92851c22a3535465/Bridging-the-gender-digital-divide-in-the-Arab-Region.pdf
- This report shows that gender inequality in the Middle East and North African technology sector imposes costs on society through missed economic potential. Moreover, automation driven by artificial intelligence is most likely to impact sectors with high levels of female employment. The author argues that access to technology is a barrier to equality and that more effort should be directed towards technical mentorship and education for women already in the labor force to improve their capacity to adapt and benefit from new technologies.
- Barrett, L. (2020). Ban facial recognition technologies for children and for everyone else. Boston University Journal of Science and Technology Law, 26(2), 223-285.
- This article details the ways in which Israeli technology companies developed facial recognition on children crossing checkpoints and how that technology is now being exported for use in low-resource school districts in the United States. The author calls for a ban on facial recognition used on children, arguing that there is no way to ethically use facial recognition on vulnerable children.
- Brynjolfsson, E., & McAfee, A. (2011).* Race against the machine: How the digital revolution is accelerating innovation, driving productivity, and irreversibly transforming employment and the economy. Digital Frontier Press.
- This book argues that the average human worker cannot keep up with cutting edge technologies such as AI that have the potential to take over their jobs. The implication is that poor employment prospects are not due to lack of advancements, but rather because we are being outdone by technology.
- Brynjolfsson, E., et al. (2017). Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics. National Bureau of Economic Research. www.nber.org/chapters/c14007.pdf
- This article argues that although there have been many advancements in AI technology in recent years, these have not been met with an increase in productivity. The authors explore four potential explanations for this apparent paradox: false hopes, statistical mismeasurement, redistribution, and lags in implementation.
- Butcher, J., & Beridze, I. (2019). What is the state of artificial intelligence governance globally? The RUSI Journal, 164(5), 88-96. https://doi.org/10.1080/03071847.2019.1694260
- The authors summarize current AI governance in both public and private sectors, in research organizations, and at the United Nations. They offer frameworks that can provide guidance to policy makers.
- Chui, M., et al. (2017).* The countries most (and least) likely to be affected by automation. Harvard Business Review. https://hbr.org/2017/04/the-countries-most-and-least-likely-to-be-affected-by-automation
- This article examines the automation potential in 46 countries, accounting for 80% of the global workforce. The authors find a wide disparity in automation risk among states in the Middle East and North Africa region, with North African states like Morocco and Egypt at much higher risk than Persian Gulf states like Kuwait and Saudi Arabia.
- Cihon, P. (2019). Standards for AI governance: International standards to enable global coordination in AI research & development. Future of Humanity Institute.
- This report argues that the emergence of AI presents novel problems for policy design, and that a coordinated global response is necessary. Current AI standards development is heavily focused on market efficiency and addressing global concerns, but the author argues that this neglects further policy objectives such as creating a culture of responsibility.
- Cisse, M. (2018). Look to Africa to advance artificial intelligence. Nature, 562(7728), 461-462.
- The author argues that AI technology must be developed in a broader range of locations than just Asia, North America, and Europe in order to promote diversity and combat unintended biases. In particular, development in Africa should be prioritized, as this would not only address the lack of diversity but also provide Africans with access to technology that could improve citizens’ lives.
- Daly, A., et al. (2019). Artificial intelligence, governance and ethics: Global perspectives. The Chinese University of Hong Kong Faculty of Law Research Paper, (2019-15). https://dx.doi.org/10.2139/ssrn.3414805
- This report provides an overview on how actors such as governments and private corporations have approached AI regulation and ethics, including regions such as China, Europe, India, and the United States, and companies such as Microsoft.
- Dihal, K., et al. (2021). Imagining a future with intelligent machines: A Middle Eastern and North African perspective. Cambridge: The Leverhulme Centre for the Future of Intelligence. https://doi.org/10.17863/CAM.75296
- This paper discusses views surrounding AI in the Arab regions of MENA. The authors find that people in this region hope for more ground-up technological initiatives, as opposed to the top-down ones often put in place by governments. There are also questions about how automation will impact jobs, especially as there is already concern regarding inequality even without the introduction of AI. Further, narratives surrounding AI and other technologies have significant influence in the region, although this influence is less explicit than in the West.
- Ermağan, İ. (2021). Worldwide artificial intelligence studies with a comparative perspective: How ready is Turkey for this revolution? In Artificial Intelligence Systems and the Internet of Things in the Digital Era (pp. 500-512). Springer. https://doi.org/10.1007/978-3-030-77246-8_46
- This paper discusses Turkey’s progress in AI development compared to other countries around the world. The author suggests that in order for Turkey to meet its goal of becoming an AI leader it must invest more money into research and development, provide opportunities and resources for primary and secondary school students to learn how to code, and partner with other countries to help fund these initiatives and share their expertise.
- Fatafta, M., & Samaro, D. (2021). Exposed and exploited: Data protection in the Middle East and North Africa. Access Now. https://apo.org.au/node/310911
- This report explores the tensions between weak data protection regulations and the rapid adoption of data-driven surveillance technologies, which disproportionately impact marginalized populations in Jordan, Lebanon, Palestine, and Tunisia. The authors describe each territory’s data protection regime alongside surveillance case studies, such as the use of facial recognition in occupied Palestine. They conclude with data protection policy recommendations for states, firms, and international organizations operating in the region.
- Giovannetti, G., & Vivoli, A. (2018). Technological innovation: Growth without occupation. An overview on MENA Countries. In IEMed: Mediterranean yearbook (pp. 278-282). https://www.iemed.org/observatori/arees-danalisi/arxius-adjunts/anuari/med.2018/Technological_Innovation_Giovannetti_Vivoli_Medyearbook2018.pdf
- The authors argue that the Middle East and North Africa (MENA) region is vulnerable to automation due to labor-intensive economies and low integration with the international technology sector. ‘Low-skill’ jobs occupied by women are at the greatest risk of automation: despite outperforming men in school, women are under-represented in technical jobs. The authors argue that investing in the MENA region’s youth by stimulating the technology sector and strengthening social security can insulate the region from the negative impacts of automation.
- Heath, V. (2021). Women and the fourth industrial revolution: An examination of the UAE’s national AI strategy. In Artificial Intelligence in the Gulf, 203–245. Springer Singapore. https://doi.org/10.1007/978-981-16-0771-4_10
- This paper uses the UAE’s Strategy for Artificial Intelligence as a case study looking at the inclusion of women in the development and governance of AI. The author finds that there is no explicit mention of including women in the AI strategy, and that very few women make up the UAE Council for AI and speakers at AI-related conferences hosted by the government. Further, the author looks at reasons for the disproportionate number of women in the STEM workforce in the UAE, despite women making up the majority of students obtaining STEM degrees.
- Gordon, M. (2018). Forecasting instability: The case of the Arab spring and the limitations of socioeconomic data. Wilson Center. https://www.wilsoncenter.org/article/forecasting-instability-the-case-the-arab-spring-and-the-limitations-socioeconomic-data
- The author analyzes data from the Arab Spring, arguing that these uprisings could be predicted, but not down to the exact date and time of their occurrence. They argue that similar limitations apply to predicting political and social instability.
- Jobin, A., et al. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. https://doi.org/10.1038/s42256-019-0088-2
- This study investigates whether or not there is a global consensus on any ethical principles pertaining to AI. The results reveal global convergence around five principles: transparency, justice and fairness, non-maleficence, responsibility, and privacy.
- Lukonga, M. I. (2018). Fintech, inclusive growth and cyber risks: Focus on the MENAP and CCA regions (IMF Working Paper No. 18/201). International Monetary Fund. https://www.imf.org/-/media/Files/Publications/WP/2018/wp18201.ashx
- The author argues that the financial technology (fintech) industry in the Middle East, North Africa, Afghanistan, and Pakistan, as well as the Caucasus and Central Asia regions, can increase financial inclusion and promote more equitable economic growth. These regions have large ‘unbanked’ populations, and fintech is flexible enough to reach people whom traditional banks have not been able to serve. Lukonga suggests modernizing regulations to enable the expansion of fintech and promote inclusive growth.
- Rickli, J.-M. (2018). The economic, security and military implications of artificial intelligence for the Arab Gulf countries. Emirates Diplomatic Academy. https://eda.ac.ae/docs/default-source/Publications/eda-insight_ai_en.pdf
- In this report, the author discusses the expected impact of AI on countries in the Arab Gulf. The report notes that interventions in training and education are essential to mitigate the social impacts of automation and develop a strong AI industry. Further to this, the author suggests that leadership in the AI industry is essential for the maintenance of national security due to the development of autonomous weapons in addition to the rising prevalence of cyber-attacks and AI-generated disinformation.
- Rizk, N. Y. H., & Salem, N. (2018). Open data management plan Middle East and North Africa: A guide. MENA Data Platform. https://menadata.net/public/dataset/81539549369
- This guide contains three documents developed at the American University in Cairo. First, a background paper explores open data relating to research and development. Second is a data management plan template, made up of a set of questions that, when answered, yield an Open Data Management plan. Third is the Solar Data Platform Open Data Management Plan, which maps solar energy in Egypt and serves as an example of the template’s implementation.
- Talbot, R. (2020). Automating occupation: International humanitarian and human rights law implications of the deployment of facial recognition technologies in the occupied Palestinian territory. International Review of the Red Cross, 102(914), 823-849.
- This article details the ways Israel uses facial recognition technologies in occupied territories and the legal implications of ongoing surveillance. The author seeks to understand the privacy considerations that must be made in order to develop a holistic international legal framework that would protect civilians.
- Vernon, D. (2019). Robotics and artificial intelligence in Africa [Regional]. IEEE Robotics & Automation Magazine, 26(4), 131-135. https://doi.org/10.1109/MRA.2019.2946107
- This article explores how African countries can take advantage of opportunities presented by the rise of artificial intelligence and robots, considering potential solutions to problems that are likely to emerge. The author argues that to take full advantage of potential growth, states should create an enabling environment for advanced research and education and that vendors should work to lower the costs of AI and robotics technology to encourage adoption by African firms.
- World Economic Forum. (2019).* Dialogue series on new economic and social frontiers, shaping the new economy in the fourth industrial revolution. http://www3.weforum.org/docs/WEF_Dialogue_Series_on_New_Economic_and_Social_Frontiers.pdf
- This paper examines four emerging challenges at the intersection of economics, technology, and society in the age of the Fourth Industrial Revolution. The paper addresses multiple areas of concern, such as rethinking economic value and avenues for creating this value, addressing market concentration, enhancing job creation, and revising social protection.
- Youssef, A. B. (2021). Digital transformation in Tunisia: Under which conditions could the digital economy benefit everyone? Economic Research Forum. https://erf.org.eg/app/uploads/2021/11/1637495187_570_734598_1512.pdf
- This paper discusses current advancements and challenges in technological development in Tunisia. The increased use of digital technologies in Tunisia has widened the digital divide: some people still lack stable Internet access and therefore cannot reach the many resources that are only available online. Further, even where Internet access is available, some lack the skills needed to use these resources. The author provides recommendations for how Tunisia should move forward with digitization, including reducing the digital gender divide and helping people develop their e-skills.
- World Economic Forum. (2017).* The future of jobs and skills in the Middle East and North Africa: Preparing the region for the fourth industrial revolution. https://www.weforum.org/reports/the-future-of-jobs-and-skills-in-the-middle-east-and-north-africa-preparing-the-region-for-the-fourth-industrial-revolution
- This report asserts that it is vital that the MENA region invest in education to prepare its young population for the contemporary labour market. It presents a call to action to MENA region leaders to ensure that youth are able to fully participate in the global economy.
- Yamakami, T. (2019). From ivory tower to democratization and industrialization: A landscape view of real-world adaptation of artificial intelligence. In International Conference on Network-Based Information Systems (pp. 200-211). https://doi.org/10.1007/978-3-030-29029-0_19
- The author examines the concept of democratization and industrialization of deep learning as a new landscape view for artificial intelligence. They go on to describe a three-stage model of interaction between a social community and technology.
Chapter 34. Europe’s Struggle to Set Global AI Standards (Andrea Renda)
- Andraško, J., et al. (2021). The regulatory intersections between artificial intelligence, data protection and cyber security: Challenges and opportunities for the EU legal framework. AI & Society, 36(2), 623–636. https://doi.org/10.1007/s00146-020-01125-5
- This paper deals with issues and questions relating to privacy, personal data, and cybersecurity raised in key documents of the EU legal framework on AI ethics. It focuses primarily on personal data protection and cybersecurity within these documents, arguing that as the amount of data within the EU grows, potential risks must be recognized and mitigated, and that individuals and their rights and freedoms should be central to legal debates over regulation, becoming the “primary object of protection.”
- Annoni, A., et al. (2018).* Artificial intelligence: A European perspective. Joint Research Centre, European Commission. https://doi.org/10.2760/11251
- This extensive report investigates the multitude of practical, technical, legal, and ethical issues that the EU must consider when developing laws, policies, and regulations regarding AI, data protection, and cybersecurity. The researchers propose that the EU must take a unified approach to encourage developments in AI that are socially driven, responsible, ethical, and match the core values of civil society.
- Antonov, A., & Kerikmäe, T. (2020). Trustworthy AI as a future driver for competitiveness and social change in the EU. In D. Ramiro Troitiño, T. Kerikmäe, R. de la Guardia, & G. Pérez Sánchez (Eds.), The EU in the 21st century (pp.135-154). Springer. https://doi.org/10.1007/978-3-030-38399-2_9
- This article examines the ethical and legal effects of AI technologies that have been promoted and encouraged by the EU in recent years. The authors consider key initiatives in AI governance and seek to identify the main challenges the EU will face in its goal of becoming a global leader in the development of trustworthy AI technology.
- Braun, M., et al. (2021). A leap of faith: Is there a formula for “trustworthy” AI? The Hastings Center Report, 51(3), 17-22. https://doi.org/10.1002/hast.1207
- This report identifies several flawed views of the nature of trust in AI. The authors suggest that trust in AI is granted through ‘leaps of faith’, which makes it dangerous and fragile. They also point to the positive aspects of distrust, which gives humans justification to exercise control over AI systems.
- Calzada, I. (2019). Technological sovereignty: Protecting citizens’ digital rights in the AI-driven and post-GDPR algorithmic and city-regional European realm. Regions eZine. https://ssrn.com/abstract=3415889
- This article explains how the state of AI and data protection regulation in the EU affect citizenship. The author takes a comparative approach and argues that in the EU, citizens are considered to be decision-makers rather than data providers (as is the case in the US and China). The author argues that Europe is most likely to adopt a form of ‘technological humanism’ by offering strategic visions of regional AI networks in which governments maintain technological sovereignty to protect their citizens’ digital rights.
- Carriço, G. (2018). The EU and artificial intelligence: A human-centred perspective. European View, 17(1), 29-36. https://doi.org/10.1177/1781685818764821
- This article considers the costs and benefits of AI implementation in the EU context and argues in support of developing the EU into a global leader of AI innovation. The author argues for a human-centric focus on AI development and emphasizes the use of AI to solve the world’s most challenging societal problems while minimizing risk. The author provides policy recommendations for EU adoption to realize this goal.
- Cath, C., et al. (2018).* Artificial intelligence and the ‘good society’: The US, EU, and UK approach. Science and Engineering Ethics, 24(2), 505-528. https://doi.org/10.1007/s11948-017-9901-7
- This paper provides a comparative analysis of policy plans proposed by US, UK, and EU governments concerning the integration of AI in society. The authors argue in favor of ‘the good AI society’, and they suggest that although short-term ethical solutions are important, state actors in the US, EU, and UK must consider long-term visions and strategies that best promote human flourishing and dignity in the AI context.
- Cho, J. H., et al. (2016). Metrics and measurement of trustworthy systems. In MILCOM 2016-2016 IEEE Military Communications Conference (pp. 1237-1242). Institute of Electrical and Electronics Engineers.
- This study develops a trustworthiness metric that incorporates different factors such as hardware, software, network, and human factors that affect the trustworthiness of computer systems. It focuses on three submetrics: trust, resilience, and agility. Finally, the authors propose an ontology with sub-ontologies to enable measurement of these submetrics and the general trustworthiness of the computer system.
- Edwards, L., & Veale, M. (2017). Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for. Duke Law & Technology Review, 16(2), 18-84.
- This article argues that the right to an explanation as part of General Data Protection Regulation (GDPR) is unlikely to adequately remedy the potential for harm created by the use of algorithms. The authors discuss the gap between the legal right to an explanation and the explanations that machine learning models can provide. Finally, while the right to an explanation may not fulfill its intended goals, the authors discuss how other aspects of GDPR such as the right to be forgotten and privacy by design have greater potential to help make algorithms more responsible, explainable, and human-centric.
- European Commission. (2018).* Coordinated plan on artificial intelligence. https://eur-lex.europa.eu/legal-content/en/ALL/?uri=CELEX:52018DC0795
- This communication from the European Commission proposes a plan aimed at coordinating the integration, facilitation, and development of AI across the EU. The report suggests that in order to become a world leader in the AI industry, the EU must increase investments in AI, prepare for socio-economic change, and develop an ethical and legal framework that ensures AI development is human-centric.
- European Commission & High Level Expert Group on AI. (2019).* Ethics guidelines for trustworthy AI. https://ec.europa.eu/futurium/en/ai-alliance-consultation
- This report proposes seven ethical principles of trustworthy AI which aim to promote an accountable, human-centric AI for the EU and global contexts. It defines trustworthy AI as that which operates within the law, adheres to ethical principles, and is robust such that no unintentional harms are inflicted on society. The report proposes that policymakers must work to ensure that each of these components are simultaneously met.
- European Commission & High Level Expert Group on AI. (2019).* Policy and investment recommendations for trustworthy AI. https://ec.europa.eu/digital-single-market/en/news/policy-and-investment-recommendations-trustworthy-artificial-intelligence
- This report follows and supports the European Commission’s guidelines for trustworthy AI and provides thirty-three recommendations to maximize the sustainability, growth, and competitiveness of trustworthy AI in the EU. The report stresses that EU institutions and member states are critical to implementing sound AI governance that promotes benefits and minimizes harm to the public. Recommendations are offered regarding data protection, skills and education, and the regulation and funding of AI technologies.
- European Commission. (2018).* Working document on liability for emerging digital technologies. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52018SC0137&from=en
- This document considers how opportunities and investments in AI can be stimulated by adapting and implementing clear legal frameworks that benefit AI innovators and consumers. The report focuses on liability challenges in AI and digital technology contexts. The Commission calls for an examination of existing safety and liability rules at the EU and national levels to determine whether they maintain the legal certainty required for AI innovation to succeed.
- European Group on Ethics in Science and New Technologies. (2018).* Statement on artificial intelligence, robotics and ‘autonomous’ systems. https://doi.org/10.2777/531856
- This statement by the European Group on Ethics considers the legal, ethical, moral, and societal questions posed by autonomous technologies, and calls for a more collective and inclusive approach among EU member-states. The report proposes a set of ethical imperatives for autonomous systems that is based on the EU treaties and charters of fundamental rights.
- Fazelpour, S., & Lipton, Z. C. (2020). Algorithmic fairness from a non-ideal perspective. In A. Markham, J. Powles, T. Walsh, & A. L. Washington (Eds.), Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 57-63). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375828
- This paper examines statistical parity definitions of fairness in machine learning from the ideal and non-ideal perspectives of political philosophy. The authors show a connection between ongoing issues in the fair machine learning community and the broader problems faced by ideal theorizing in political philosophy.
- Floridi, L. (2019).* Establishing the rules for building trustworthy AI. Nature Machine Intelligence, 1(6), 261-262. https://doi.org/10.1038/s42256-019-0055-y
- This article defends the ethical guidelines proposed by the European Commission’s report on trustworthy AI on the grounds that they establish a benchmark against which responsible design and international support of human-centric AI solutions can be evaluated.
- Floridi, L. (2018). Soft ethics, the governance of the digital and the general data protection regulation. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0081
- This article considers the challenges of digital governance and provides a framework of ‘hard’ and ‘soft’ ethics as they relate to digital legislation in the EU. The author then provides an analysis of how this ethical framework works with the development of new, and the adaptation of old, regulation and legislation to assist in digital governance.
- Floridi, L., et al. (2018).* AI4People white paper: Twenty recommendations for an ethical framework for a good AI society. Minds and Machines, 28, 689–707. https://doi.org/10.1007/s11023-018-9482-5
- This article reports the results of the ‘AI4People’ initiative, which was designed to formulate an ideal of the ‘good society’ in an AI context. The report analyzes the risks and opportunities of societal AI integration and proposes five ethical principles, four of which are drawn from the applied ethics field of bioethics. The report also offers twenty additional recommendations for policymakers which, if adopted, the authors believe would establish a ‘good AI society’.
- Gardner, A., et al. (2021). Ethical funding for trustworthy AI: Proposals to address the responsibilities of funders to ensure that projects adhere to trustworthy AI practice. AI and Ethics, 1-15. https://doi.org/10.1007/s43681-021-00069-w
- The authors are critical of continued fiscal support for AI systems that have been shown to suffer from significant bias and unexpectedly low accuracy, problems that can be detrimental to both the individual and society. Centering their research on the funding process for AI research grants, they highlight the responsibilities of funding bodies in this process and offer two proposals for these bodies to consider. The first is the addition of a ‘Trustworthy AI Statement’ section to the grant application form; the second is a series of requirements for funding bodies to review the proposed projects. The goal is to get applicants to stop and think about their projects during the planning and application stages, rather than later, when it is too late.
- Garg, S., et al. (2020). Formalizing data deletion in the context of the right to be forgotten. In A. Canteaut & Y. Ishai (Eds.), Proceedings of the Annual International Conference on the Theory and Applications of Cryptographic Techniques (pp. 373-402). Springer. https://doi.org/10.1007/978-3-030-45724-2_13
- The authors provide a formal model of deleting data from machine learning algorithms in accordance with the “right to be forgotten” from General Data Protection Regulation (GDPR). Using techniques from cryptography, they formalize what is possible and what regulators can expect from organizations who need to delete some or all of an individual’s data and its usage in any algorithms.
- Hacker, P. (2018). Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Review, 55, 1143–1186. https://ssrn.com/abstract=3164973
- This article considers the discriminatory threat imposed by AI applications against protected groups in the EU legal context and argues that this raises complex questions for labor laws in the EU. As explained, existing anti-discrimination laws are not adapted to AI decision-making and issues of proof in the AI context. The article offers a vision of data protection and anti-discrimination law that enforces fairness in algorithmic decision-making.
- Hickman, E., & Petrin, M. (2021). Trustworthy AI and corporate governance: The EU’s ethics guidelines for trustworthy artificial intelligence from a company law perspective. European Business Organization Law Review, 22(4), 593–625. https://doi.org/10.1007/s40804-021-00224-0
- Since the EU guidelines deal with ethical principles rather than laws, their practical implication on corporate governance is unclear. Therefore, the authors argue that a more granular approach is needed in order to determine how the aforementioned guidelines will inform rules regarding company law and governance principles.
- Humerick, M. (2018). Taking AI personally: How the EU must learn to balance the interests of personal data privacy & artificial intelligence. Santa Clara High Technology Law Journal, 34(4), 393-418. https://digitalcommons.law.scu.edu/chtlj/vol34/iss4/3
- This article considers the influx of AI technology use and its relation to consumer data privacy and protection. The article observes that the EU maintains the most comprehensive data protection regulation in the world but argues that such strong regulation could discourage future development and innovation of AI in the EU. Unless these issues are addressed, the author questions how future AI developments will thrive in the EU without infringing the provisions of the GDPR.
- Janssen, M., et al. (2020). Data governance: Organizing data for trustworthy artificial intelligence. Government Information Quarterly, 37(3), 101493. https://doi.org/10.1016/j.giq.2020.101493
- This paper discusses how data governance is the foundation of trustworthy AI and provides a framework for strong data governance. The framework, which is based on 13 design principles, encourages stewardship of data, an understanding of the associated risks, models for trusted data sharing between organizations, and stewardship of the algorithms using the data.
- Jobin, A., et al. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. https://www.nature.com/articles/s42256-019-0088-2.pdf
- This paper reviews a range of guidelines for ethical AI, including those of the EU’s High-Level Expert Group on Artificial Intelligence and others calling for ‘trustworthy AI’. The findings reveal a general consensus on five key principles: transparency, justice and fairness, non-maleficence, responsibility, and privacy. However, how these principles are interpreted, why they matter, and how they should be implemented remain points of divergence. The authors also observe that the Global South is under-represented in these debates, skewing the discussion toward the interests of the more economically developed countries of the Global North.
- Kullmann, M. (2018). Platform work, algorithmic decision-making, and EU gender equality law. International Journal of Comparative Labour Law and Industrial Relations, 34(1), 1-21. https://ssrn.com/abstract=3195728
- This article considers the problems that confront workers in the digital economy and examines the role played by algorithms and their biases in employment and hiring processes. The author observes the existing gender disparity in hiring and salary decisions, and questions whether existing EU equality laws are sufficient for protection of workers when employment-related decisions are made by an algorithm.
- Lewis, D., et al. (2020). Global challenges in the standardization of ethics for trustworthy AI. Journal of ICT Standardization, 8(2), 123-150. https://doi.org/10.13052/jicts2245-800X.823
- This study analyzes recent proposals for trustworthy AI from the OECD, the EU, and the IEEE according to their scope and the normative language they use. The authors propose a minimal model to define standards for trustworthy AI, which further standards can build upon. Finally, they examine the current AI standardization initiative taking place at ISO/IEC JTC1 based on their minimal model.
- McMillan, D., & Brown, B. (2019). Against ethical AI. In Proceedings of the Halfway to the Future Symposium 2019 (pp. 1-3).
- This paper considers the EU guidelines on ethical and trustworthy AI to argue against the focus placed on it and other similar principles, guidelines, and manifestos developed for AI. The authors consider how the AI industry and related academia are involved in ‘ethics washing’ and how the development of guidelines may not be as beneficial as previously perceived.
- Mercer, S. T. (2020). The limitations of European data protection as a model for global privacy regulation. AJIL Unbound, 114, 20-25. https://doi.org/10.1017/aju.2019.83
- This article pushes back against the prevailing narrative that EU-style data regulations are becoming a global standard. The author argues that as of 2020, it is too early to determine whether the EU is truly the winner in the race to influence global data protection and privacy law. The author points toward the US as a potential competitor and expects the US regime to differ in its regulatory approach.
- Minkkinen, M., et al. (2021). Towards ecosystems for responsible AI. In Conference on e-Business, e-Services and e-Society (pp. 220-232). Springer. https://link.springer.com/chapter/10.1007/978-3-030-85447-8_20
- Drawing on the sociology of expectations to analyze five key EU documents on AI, this chapter builds a framework of cognitive and normative expectations related to sociotechnical systems, agendas, and networks. It finds that the documents analyzed are grounded in four themes that correspond to cognitive or normative sociotechnical systems, agendas, and networks: trust, ethics/competitiveness, European value approaches, and Europe as a global leader in AI development.
- Mitrou, L. (2018). Data protection, artificial intelligence and cognitive services: Is the general data protection regulation (GDPR) artificial intelligence-proof? SSRN. http://dx.doi.org/10.2139/ssrn.3386914
- This paper provides a detailed overview of the EU’s General Data Protection Regulation provisions in the context of recent AI technologies. The author observes the changes that AI has made to the processing of personal information, and questions whether the current regulations are ‘AI-proof’ and whether new protections and rules need to be implemented in the face of advanced AI technology.
- Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501-507. https://www.nature.com/articles/s42256-019-0114-4.pdf
- This paper argues that agreement and consensus for AI principles might provide a false warrant for ethical or trustworthy AI. The author shifts the focus from the principles approach to understanding the ethical challenges associated with AI by contrasting it with medical ethics. Observing that AI development lacks common aims, a professional history, established methods for transforming principles into practice, as well as legal or accountability mechanisms, the author cautions against the praise of high-level consensus of principles.
- Palladino, N. (2021). The role of epistemic communities in the “constitutionalization” of internet governance: The example of the European Commission High-Level Expert Group on Artificial Intelligence. Telecommunications Policy, 45(6), 102149. https://doi.org/10.1016/j.telpol.2021.102149
- This paper analyzes the EU’s HLEG on Trustworthy AI from the perspective of digital constitutionalism, an Internet governance approach that promotes the development of digital technology by the people. However, the non-binding nature of such initiatives and a host of other factors have limited their effectiveness; the author therefore emphasizes epistemic communities as a solution, presenting the HLEG-AI as a successful example.
- Renda, A. (2019).* Artificial intelligence: Ethics, governance and policy challenges. Centre for European Policy Studies Task Force.
- This article summarizes the results of the Centre for European Policy Studies (CEPS) report on AI in 2018. The report finds that the EU is uniquely positioned to lead the globe in its effort to develop and implement responsible and sustainable AI. The report calls upon member states to focus their agendas on leveraging this advantage to foster further development in the field. The article proposes forty-four recommendations to guide future policy and investment decisions related to the design of lawful, responsible, and sustainable AI for the future.
- Renda, A. (2018).* Ethics, algorithms and self-driving cars–A CSI of the ‘trolley problem’. CEPS Policy Insight, (2). https://ssrn.com/abstract=3131522
- This article re-examines the trolley-problem dilemma and argues against the view that it serves little use as an analogue to the automated driving context. The author investigates the problem to reveal a number of neglected policy issues that exist within the dilemma and evade public discussion. The article also argues that current legal frameworks cannot account for these issues and that these ethical and policy dilemmas must be addressed in order to appropriately overhaul the relevant public policies in the European context.
- Rieder, G., et al. (2020). Mapping the stony road toward trustworthy AI: Expectations, problems, conundrums. In M. Pelillo & T. Scantamburlo (Eds.), Machines we trust: Perspectives on dependable AI. MIT Press.
- This chapter critically discusses the concept of trustworthy AI by turning to notions of ‘trustworthiness’ and ‘trust’ in the philosophical literature. The authors argue that a standardized certification process would not necessarily yield the moral requirements for a trustworthy AI, and that trust should be directed at political and democratic aspects of AI for establishing a trustworthy AI culture.
- Roberts, H., et al. (2021). Achieving a “good AI society”: Comparing the aims and progress of the EU and the US. Science and Engineering Ethics, 27(6), 68–68. https://doi.org/10.1007/s11948-021-00340-7
- Contrasting the AI strategies of the United States and the European Union, this article discusses how each governmental body approaches the notion of a ‘good AI society.’ It also addresses the on-the-ground reality of implementing these visions and the potential issues or incompatibilities that might arise in transatlantic partnerships. It concludes that the EU’s prioritization of the individual, compared with the US strategy of national competitiveness, is ethically the more desirable approach to domestic AI governance, anticipates friction between the two strategies, and suggests ways for both entities to improve their approaches.
- Ryan, M. (2020). In AI we trust: Ethics, artificial intelligence, and reliability. Science and Engineering Ethics, 26(5), 2749-2767. https://doi.org/10.1007/s11948-020-00228-y
- This paper offers a conceptual criticism of the EU’s HLEG on AI for using the term ‘trustworthy’ for describing artificial intelligence. The author posits that doing so anthropomorphizes AIs, damages how we see interpersonal trust, and redirects responsibility from those who should be deemed responsible.
- Sharkov, G., et al. (2021). Strategies, policies, and standards in the EU towards a roadmap for robust and trustworthy AI certification. Information & Security, 50(1), 11–22. https://doi.org/10.11610/isij.5030
- Arguing that defining standards and guidelines for ethical AI use in the EU is not enough, Sharkov, Todorova, and Varbanov stress the importance of proper AI system governance and certification. They emphasize that rigorous oversight and accreditation are the next step in the process, while observing that this step will not be without its challenges.
- Shneiderman, B. (2020). Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Transactions on Interactive Intelligent Systems, 10(4), 1-31. https://doi.org/10.1145/3419764
- This article provides 15 recommendations at three different levels of governance to help bridge the gap between ethical principles for human-centered AI and the current governance of this technology. The recommendations center on three themes: (1) reliable systems based on sound software engineering practices, (2) a safety culture built through business management strategies, and (3) independent oversight to certify trustworthiness.
- Smuha, N. A. (2019). The EU approach to ethics guidelines for trustworthy artificial intelligence. CRi-Computer Law Review International, 20(4), 97-106. https://ssrn.com/abstract=3443537
- This article reviews the AI ethics guidelines offered by the High-Level Expert Group on AI (AI HLEG) established by the European Commission. The author explicates the context, aim, and purpose of the guidelines, while considering key issues of AI ethics and governance. The author concludes by positioning the guidelines in an international context and suggests future goals.
- Stix, C. (2021). Actionable principles for artificial intelligence policy: Three pathways. Science and Engineering Ethics, 27(1), Article 15. https://doi.org/10.1007/s11948-020-00277-3
- Noting that ethical principles commonly fail to be translated into policy, the author builds upon elements of the EU HLEG’s ethical guidelines to suggest a framework of ‘Actionable Principles for AI.’ The framework includes landscape assessments, the involvement of various stakeholders, and mechanisms to support implementation.
- Straus, J. (2021). Artificial intelligence–Challenges and chances for Europe. European Review, 29(1), 142-158. https://doi.org/10.1017/S1062798720001106
- This review examines the guidelines currently proposed by the EU for trustworthy AI. The author argues that the guidelines are important but, because they only help AI solutions gain societal acceptance, they are merely a first step toward European leadership in AI. Finally, the author argues that the EU’s claim that these guidelines will make Europe an AI leader is currently unfounded.
- Sutrop, M. (2019). Should we trust artificial intelligence? Trames, 23(4), 499–522. https://doi.org/10.3176/tr.2019.4.07
- This paper criticizes the EU’s HLEG on AI for ignoring the common distinction in the philosophical literature between trust and reliance, failing to account for what trust is, and offering no way to settle conflicts between competing values. In its place, the paper offers a conceptual analysis of trust, a discussion of risk, and an account of the crucial disagreements involved in aligning ethical values.
- Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751-752. https://doi.org/10.1126/science.aat5991
- This article elaborates on the benefits that AI can offer from a European perspective. The authors argue that regulation alone is not sufficient for the development of ‘good’ AI and that ethics must play a role in the design of technologies, complementing existing regulation to balance the risks and rewards of AI capabilities. They argue for the critical importance of human-centric AI aimed at solving major societal problems.
- Toreini, E., et al. (2020). The relationship between trust in AI and trustworthy machine learning technologies. In M. Hildebrandt & C. Castillo (Eds.), Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 272-283). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372834
- Toreini et al. describe a systematic approach to aligning notions of trust from the social sciences with notions of trust in services and products that use AI. The authors start with the Ability, Benevolence, Integrity, and Predictability framework, which they map onto four machine learning trustworthiness qualities: Fairness, Explainability, Auditability, and Safety. Finally, they discuss how their framework relates to existing AI frameworks produced by various governments.
- Treleaven, P., et al. (2019). Algorithms: Law and regulation. Computer, 52(2), 32-40. https://ieeexplore.ieee.org/document/8672418
- This article offers important context for the challenges and problems with the regulation of algorithms through legal frameworks and examines their current legality. The authors focus on a variety of algorithmic applications and investigate the associated ethical, legal, and technical problems of each, proposing a variety of solutions and suggestions for regulation where they deem it necessary.
- Villaronga, E., et al. (2018). Humans forget, machines remember: Artificial intelligence and the right to be forgotten. Computer Law & Security Review, 34(2), 304-313. https://doi.org/10.1016/j.clsr.2017.08.007
- This article explains ‘the right to be forgotten’ and its application to AI, transparency, and EU privacy law. The authors consider legal and technical issues of data deletion requirements and regulations to conclude that it may not currently be possible to achieve the legal aims of the ‘right to be forgotten’ in the context of AI applications.
- Wachter, S., et al. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76-99. https://doi.org/10.1093/idpl/ipx005
- This article considers the state of AI decision-making in the EU after the implementation of the GDPR, which is widely claimed to mandate a ‘right to explanation’ for automated decisions. The authors question the existence and feasibility of such a right under current EU law and argue that the regulation’s language amounts only to a ‘right to be informed.’ They conclude that the GDPR lacks the explicit language and rights necessary to protect citizens from problematic automated decision-making.
An asterisk (*) after a reference indicates that it was included among the Further Readings listed at the end of the Handbook chapter by its author.