II. Frameworks & Modes

Chapter 4. AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing (Karen Yeung, Andrew Howes and Ganna Pogrebna)

https://www.doi.org/10.1093/oxfordhb/9780190067397.013.5

  • Adams, R., & Loideáin, N. N. (2019). Addressing indirect discrimination and gender stereotypes in AI virtual personal assistants: The role of international human rights law. Cambridge International Law Journal, 8(2), 241-257. https://doi.org/10.4337/cilj.2019.02.04
    • This article explores how the obligation to protect women from discrimination under international human rights law applies to AI virtual assistants. In particular, the article focuses on gender stereotyping associated with AI virtual assistants, including systems that use female names, voices, and characters.
  • Aizenberg, E., & van den Hoven, J. (2020). Designing for human rights in AI. Big Data & Society, 7(2), 1
    • This paper highlights a gap between the social and technical aspects of AI design and presents a systematic framework for implementing human rights values in the AI design process. The authors present a “Design for Values” framework, which requires making social values, such as dignity and equality, part of AI design.
  • Algorithm Watch. (2019).* AI Ethics Guidelines Global Inventory. https://algorithmwatch.org/en/project/ai-ethics-guidelines-global-inventory/ 
    • This is a global inventory of ethical guidelines for Artificial Intelligence (AI). The authors find that the absence of internal enforcement or governance mechanisms shows that many companies are merely “virtue signaling” with their guidelines. However, others can still try to hold the companies to account, be it the companies’ own employees, outside institutions like advocacy organizations, or academics.
  • Bietti, E. (2020). From ethics washing to ethics bashing: A view on tech ethics from within moral philosophy. In M. Hildebrandt & C. Castillo (Eds.), Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 210-219). Association for Computing Machinery.
    • This article addresses two related phenomena concerning AI ethics: (a) “ethics washing,” i.e., the self-serving exploitation of ethics discourse by technology companies; and (b) “ethics bashing,” i.e., the criticism and trivialization of ethical discourse by social scientists. The article rejects both these approaches and contends that ethics and moral philosophy have important roles to play in shaping AI policy.
  • Burkell, J., & Bailey, J. (2018). Unlawful distinctions? Canadian human rights law and algorithmic bias. Canadian Yearbook of Human Rights, 2, 217-230.
    • This article examines the relationship between algorithmic discrimination and Canadian human rights law. Highlighting the potential discriminatory impact of AI in employment contexts, the provision of public services, and elsewhere, the paper illustrates how harms arising primarily from statistical correlations pose challenges for the application of human rights law.
  • Casanovas, P., et al. (2019). The middle-out approach: Assessing models of legal governance in data protection, artificial intelligence and the web of data. The Theory and Practice of Legislation, 7(1), 1-25. 
    • This paper focuses on what lies between top-down and bottom-up approaches to governance and regulation, namely the middle-out interface that is typically associated with forms of co-regulation. From a methodological viewpoint, this paper examines the middle-out approach in order to shed light on three different kinds of issues: (i) how to strike a balance between multiple regulatory systems; (ii) how to align primary and secondary rules of the law; and (iii) how to properly coordinate bottom-up and top-down policy choices. The paper argues that the increasing complexity of technological regulation calls for new models of governance that revolve around this middle-out analytical ground.
  • Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0080
    • This paper is the introduction to the special issue entitled: ‘Governing artificial intelligence: ethical, legal and technical opportunities and challenges’. The issue addresses how AI can be designed and governed to be accountable, fair and transparent. Eight authors present in-depth analyses of the ethical, legal-regulatory and technical challenges posed by developing governance regimes for AI systems.
  • Council of Europe Consultative Committee on the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data. (2019).* Guidelines on artificial intelligence and data protection. https://rm.coe.int/guidelines-on-artificial-intelligence-and-data-protection/168091f9d8
    • These guidelines, created by the Council of Europe, provide a set of baseline measures that governments, AI developers, manufacturers, and service providers should follow to ensure that AI applications do not undermine the human dignity, human rights, and fundamental freedoms of every individual. These guidelines have a particular focus on the right to data protection.
  • Dawes, J. (2020). Speculative human rights: Artificial intelligence and the future of the human. Human Rights Quarterly, 42(3), 573–593.
    • This paper speculates about the human rights concerns that the future development of AI may pose, focusing on five dimensions of concern: AI reducing the value of and need for human labor; the augmentation of humans with AI; threats to privacy; autonomous weapons operating outside human control; and the possibility of AI developing consciousness and thus requiring personhood. The paper concludes that a human rights framework can be useful for thinking about these questions before they become a reality.
  • Donahoe, E., & Metzger, M. (2019). Artificial intelligence and human rights. Journal of Democracy, 30(2), 115-126.
    • This article argues for a global governance framework to address the wide range of societal challenges associated with AI, including threats to privacy, information access, and the right to equal protection and nondiscrimination. Rather than working to develop new frameworks from scratch, the authors propose that the challenges associated with AI can best be confronted by drawing on the existing international human-rights framework.
  • Fukuda-Parr, S., & Gibbons, E. (2021). Emerging consensus on ‘Ethical AI’: Human rights critique of stakeholder guidelines. Global Policy, 12(6), 32–44.
    • This paper contrasts an ethics framework and a human rights framework for the regulation of AI. The paper examines guidelines for the use of AI developed by organizations and argues that they overwhelmingly rely on ethics principles, which tend to be weak and lack universal standards and agreement. In contrast, human rights frameworks are clearly defined, enforceable, and applicable both nationally and internationally. The paper finds that human rights are commonly referenced in AI guidelines, yet with a disproportionate focus on privacy rights. A true human rights framework encompasses equality, the prevention of bias, the right of people to participate in decision-making, and the ability to hold stakeholders accountable and enforce consequences, and it requires organizations to proactively promote human rights. The paper ultimately concludes in favor of a human rights framework for the regulation of AI.
  • Ghallab, M. (2019). Responsible AI: Requirements and challenges. AI Perspectives, 1(1), 1-7.
    • This paper discusses the requirements and challenges for responsible AI with respect to two interdependent objectives: (1) how to foster research and development efforts toward socially beneficial applications, and (2) how to take into account and mitigate the human and social risks of AI systems.
  • Hildebrandt, M. (2015).* Smart technologies and the end(s) of law. Edward Elgar. 
    • This book highlights how the pervasive employment of machine-learning technologies that inform so-called ‘data-driven agency’ threaten privacy, identity, autonomy, non-discrimination, due process and the presumption of innocence. The author argues that smart technologies undermine, reconfigure and overrule the ends of the law in a constitutional democracy, jeopardizing law as an instrument of justice, legal certainty and the public good. However, the author calls on lawyers, computer scientists and civil society not to reject smart technologies, arguing that further engaging with these technologies may help to reinvent the effective protection of the rule of law.
  • Hildebrandt, M. (2020). HCI sustaining the rule of law and democracy: A European perspective. Interactions, 28(1), 34–37.
    • This paper advocates for human-centered AI. The paper highlights concerns about human-computer interaction (HCI) and raises questions about the legal requirements for governing such interactions. The author argues that there needs to be greater emphasis on the rights of end-users, specifically when it comes to consent, in order to create AI that has “legal protection by design.”
  • Hoffmann-Riem, W. (2020). Artificial intelligence as a challenge for law and regulation. In Regulating artificial intelligence (pp. 1-29). Springer.
    • This chapter of Regulating Artificial Intelligence explores the types of rules and regulations that are currently available to regulate AI, emphasizing that it is not enough to trust that companies using AI will adhere to ethical principles: self-regulation is insufficient, and supplementary legal rules are needed to promote the ethical use of AI. The chapter concludes by stressing the need for transnational agreements and institutions in this area.
  • Hopkins, A. (2012).* Explaining “safety case.” Regulatory Institutions Network Working Paper 87. https://www.csb.gov/assets/1/7/workingpaper_87.pdf
    • This paper emphasizes features of safety case regimes that are sometimes taken for granted in their respective jurisdictions and sets out a model of what might be described as a mature safety case regime. There are five basic features of safety case regimes that are highlighted in this paper: a risk- or hazard-management framework, a requirement to make the case to the regulator, a competent and independent regulator, workforce involvement, and a general duty of care imposed on the operator. 
  • Jobin, A., et al. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. https://doi.org/10.1038/s42256-019-0088-2
    • This paper analyses the content of a collection of 84 documents containing AI ethics principles and guidelines. The results of the analysis suggest that there is a global convergence around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility, and privacy), yet divergence in relation to the interpretation, importance, and implementation of these principles.
  • Kloza, D., et al. (2017).* Data protection impact assessments in the European Union: Complementing the new legal framework towards a more robust protection of individuals. Brussels Laboratory for Data Protection & Privacy Impact Assessments. 
    • This policy brief provides recommendations for the European Union (EU) to complement the requirement for data protection impact assessment (DPIA), as set forth in the General Data Protection Regulation (GDPR), with a view to achieving a more robust protection of personal data. The policy brief attempts to draft a best practice for a generic type of impact assessment to remedy weak points in the DPIA requirement. The brief also provides background information on impact assessments as such, covering their definition, history, merits, and drawbacks, and concludes by offering recommendations for complementing the DPIA requirement in the GDPR.
  • Kriebitz, A., & Lütge, C. (2020). Artificial intelligence and human rights: A business ethical assessment. Business and Human Rights Journal, 5(1), 84–104.
    • This paper explores the human rights obligations that corporations have when developing and using AI. The paper proposes that there is a conflict between artificial intelligence and human rights, as many facets of AI by their very nature violate human rights such as autonomy. The article also highlights ways in which artificial intelligence can safeguard human rights, such as by facilitating economic growth and education. The paper concludes that where conflicts between AI and human rights arise, companies need to develop mechanisms to safeguard human rights, such as obtaining consent when using big data, engaging in transparent decision-making, and developing systems to remedy human rights violations if and when they occur.
  • Mantelero, A. (2018).* AI and data protection, challenges and possible remedies. Council of Europe. https://rm.coe.int/artificial-intelligence-and-data-protection-challenges-and-possible-re/168091f8a6
    • This report examines the current landscape of AI regulation and data protection. It argues that it is important to extend European regulatory leadership in the field of data protection to a value-oriented regulation of AI based on the following three precepts: a values-based approach (encompassing social and ethical values), risk assessment and management, and participation.
  • McGregor, L. (2018). Accountability for governance choices in artificial intelligence: Afterword to Eyal Benvenisti’s foreword. European Journal of International Law, 29(4), 1079-1085.
    • This paper argues that if the ‘culture of accountability’ is to adapt to the challenges posed by new and emerging technologies, the focus cannot only be technology-led. It further argues that a culture of accountability must also be interrogative of the governance choices that are made within organizations, particularly those vested with public functions at the international and national level. 
  • Molnar, P. (2019). Technology on the margins: AI and global migration management from a human rights perspective. Cambridge International Law Journal, 8(2), 305-330. https://doi.org/10.4337/cilj.2019.02.07
    • This article describes the ways in which AI deployed in migration contexts can violate human rights. The article contends that the lack of applicable regulation is deliberate, as states seek to use migration as a testing ground for high-risk technologies. In light of these observations, the article concludes that a global accountability framework is necessary to mitigate these harms.
  • Mpinga, E. K., et al. (2022). Artificial intelligence and human rights: Are there signs of an emerging discipline? A systematic review. Journal of Multidisciplinary Healthcare, 15, 235–246.
    • This paper presents a literature review and tracks the evolution of the discipline of artificial intelligence and human rights. The research found that sources commenting upon the relationship between human rights and AI discussed concerns about autonomous weapons, big data, algorithmic decision-making, robotics, and intellectual property. The research concluded that AI and human rights is not yet an established discipline of study because it lacks academic recognition and professional establishment, and faces challenges in defining clear subject boundaries and in combining two very different fields of study, science and human rights. However, the paper argues that AI and human rights is slowly emerging as a distinct discipline.
  • Murray, D. (2020). Using human rights law to inform states’ decisions to deploy AI. AJIL Unbound, 114, 158-162. https://doi.org/10.1017/aju.2020.30
    • This essay explores the challenges involved in applying human rights law to states’ decisions to deploy AI technologies. Using the case study of live facial recognition, the essay identifies several steps that states should take when deciding whether or not to deploy AI tools in order to ensure compliance with human rights law and norms. 
  • Nemitz, P. (2018).* Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0089
    • This paper describes the four core elements of today’s digital power concentration, which need to be seen cumulatively and which, taken together, threaten both democracy and functioning markets. It then recalls the experience with the lawless Internet and the historical relationship between technology and the law. The author moves on to key questions of AI and democracy, including which of the challenges of AI can safely and in good conscience be left to ethics, and which need to be addressed by enforceable rules legitimated by the democratic process.
  • Raso, F. A., et al. (2018).* Artificial intelligence & human rights: Opportunities & risks. Berkman Klein Center for Internet & Society Research Publication. http://nrs.harvard.edu/urn-3:HUL.InstRepos:38021439
    • This report advances the emerging conversation on AI and human rights by evaluating the human rights impacts of six current uses of AI. The report’s framework recognizes that AI systems are not being deployed against a blank slate, but rather against the backdrop of social conditions that have complex pre-existing human rights impacts of their own.
  • Rieke, A., et al. (2018).* Public scrutiny of automated decisions: Early lessons and emerging methods. Upturn and Omidyar Network. https://omidyar.com/public-scrutiny-of-automated-decisions-early-lessons-and-emerging-methods/ 
    • This report maps out the landscape of public scrutiny of automated decision-making, both in terms of what civil society was or was not doing in this nascent sector and what laws and regulations were or were not in place to help regulate it. The report is based on extensive review of computer and social science literature, a broad array of real-world attempts to study automated systems, and dozens of conversations with global digital rights advocates, regulators, technologists, and industry representatives. 
  • Risse, M. (2019). Human rights and artificial intelligence: An urgently needed agenda. Human Rights Quarterly, 41(1), 1-16.
    • This article reviews short-, medium-, and long-term challenges for human rights posed by AI. It argues that the short-term challenges include the ways in which technology engages just about all rights in the UDHR, as exemplified by the use of effectively discriminatory algorithms. Medium-term challenges include changes in the nature of work that could call into question many people’s status as participants in society, while in the long term humans may have to live with machines that are intellectually, and possibly morally, superior, though this remains highly speculative.
  • Sartor, G. (2020). Artificial intelligence and human rights: Between law and ethics. Maastricht Journal of European and Comparative Law, 27(6), 705–719.
    • This paper discusses the similarities and differences between the ethical and legal dimensions of regulating artificial intelligence and highlights the role that human rights play in uniting the two. Ethics defines the “good” use of AI yet lacks the force of law, while legal regulation is concerned with the enforcement of clearly defined rules; legal interpretation can nonetheless be influenced by ethical concerns, which can instigate changes in the law. The concept of a “right” is part of both the ethical and the legal framework, and human rights form a bridge between law and ethics because they are concerned with moral interests that can be legally enforced. The paper concludes that it is important to understand these distinctions in order to apply these rights effectively.
  • Shackelford, S., et al. (2022). Should we trust a black box to safeguard human rights? A comparative analysis of AI governance. UCLA Journal of International Law and Foreign Affairs. https://escholarship.org/uc/item/1k39n4t9
    • This article analyzes more than 40 AI strategy documents of national governments. The findings suggest that states’ AI practices are converging around several specific principles, including human-centered design and public benefit. The article contends that such convergence signals the possibility of deepening international engagement in developing and promoting AI policy.
  • Smuha, N. A. (2020). Beyond a human rights-based approach to AI governance: Promise, pitfalls, plea. Philosophy & Technology. http://dx.doi.org/10.2139/ssrn.3543112
    • This paper argues that, without elucidating the applicability and enforceability of human rights in the context of AI; adopting legal rules that concretize those rights where appropriate; enhancing existing enforcement mechanisms; and securing an underlying societal infrastructure that enables human rights in the first place, any human rights-based governance framework for AI risks falling short of its purpose.
  • Tzimas, T. (2021). AI and human rights. In Legal and Ethical Challenges of Artificial Intelligence from an International Law Perspective (pp. 131–147). Springer.
    • This chapter discusses how human rights are effective in regulating AI governance because they guarantee human-centrism and form part of jus cogens, the foundational and general principles of international law. The chapter discusses how human rights can regulate artificial intelligence, play a vital role in training AI to be human-centric, and provide an international framework for holding AI applications accountable. It also raises concerns regarding the “post-humanism” or “trans-humanism” that may result from merging humans and AI, and proposes that a human rights framework will aid in preventing humans from being subsumed by AI.
  • Truby, J. (2020). Governing artificial intelligence to benefit the UN Sustainable Development Goals. Sustainable Development. https://doi.org/10.1002/sd.2048
    • This article proposes effective preemptive regulatory options to minimize scenarios of Artificial Intelligence (AI) damaging the U.N.’s Sustainable Development Goals. It explores internationally accepted principles of AI governance and argues for their implementation as regulatory requirements governing AI developers and coders, with compliance verified through algorithmic auditing. The article argues that proactively predicting such problems can enable continued AI innovation through well‐designed regulations adhering to international principles. 
  • Vestby, A., & Vestby, J. (2019). Machine learning and the police: Asking the right questions. Policing: A Journal of Policy and Practice. https://doi.org/10.1093/police/paz035
    • The article argues that important issues concerning machine learning (ML) decision models can be unveiled without detailed knowledge of the learning algorithm, empowering non-ML experts and stakeholders in debates over whether, and how, to adopt them, for example in the form of predictive policing. Non-ML experts can, and should, review ML models. The authors provide a ‘toolbox’ of questions about three elements of a decision model that can be fruitfully scrutinized by non-ML experts: the learning data, the learning goal, and constructivism.
  • Yeung, K. (2018).* A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework. Council of Europe. https://ssrn.com/abstract=3286027
    • This report examines the implications of digital technologies for the concept of responsibility, investigating where responsibility should lie for their adverse consequences. The study explores (a) how human rights and fundamental freedoms protected under the European Convention on Human Rights may be adversely affected by the development of AI technologies, and (b) how responsibility for those risks and consequences should be allocated.

Chapter 5. The Incompatible Incentives of Private Sector AI (Tom Slee)

https://www.doi.org/10.1093/oxfordhb/9780190067397.013.6

  • Bakir, V., & McStay, A. (2018). Fake news and the economy of emotions: Problems, causes, solutions. Digital Journalism, 6(2), 154-175. https://doi.org/10.1080/21670811.2017.1345645
    • This article conducts a qualitative analysis of Facebook posts submitted by the Breitbart organization, supplemented by interviews with technologists, journalists, and firms at the 2017 South by Southwest event. The authors argue that fake news is a symptom of the rise of empathic media, or media designed to manipulate emotions, through algorithmic journalism. The authors recommend that the digital advertising industry be scrutinized for enabling misinformation through these techniques.
  • Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671-732. https://dx.doi.org/10.2139/ssrn.2477899
    • This seminal article uses American antidiscrimination law to argue for the importance of the disparate impact doctrine when considering the effects of big data algorithms. The authors advocate for a paradigm shift in antidiscrimination law, as the nature of these algorithms calls into question what “fairness” and “discrimination” mean in the digital age. Their ideas reflect the growing movement around fairness, accountability, and transparency in the machine learning community.
  • Bender, E. M., et al. (2021, March). On the dangers of stochastic parrots: Can language models be too big?🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).
    • The authors discuss the ramifications of training and deploying large language models (LLMs), citing social, environmental, and financial costs that have been largely ignored. In particular, the paper considers the temporally static nature of large text corpora and the biases encoded therein, as well as the increasing carbon footprint and training costs of LLMs. The authors conclude by expounding potential avenues that may mitigate the risks and harms associated with LLMs.
  • Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. John Wiley & Sons.
    • This book examines the modern-day relevance of the “Jim Crow” laws that enforced racial segregation in the Southern United States. The author argues that emerging technologies such as artificial intelligence can deepen inequities by “explicitly amplifying racial hierarchies,” even when they may seem neutral or benevolent at first glance. 
  • Blasimme, A., et al. (2019). Big data, precision medicine and private insurance: A delicate balancing act. Big Data & Society, 6(1). https://doi.org/10.1177%2F2053951719830111
    • Using national precision medicine initiatives as a case study, this article explores the tension between private insurers leveraging repositories of genetic and phenotypic data for economic gain and the utility of these databases as a public, scientific resource. Although the authors admit that information asymmetry between insurance companies and their policyholders still carries risks of reduced research participation, adverse selection, and discrimination, they argue that a governance model underpinned by trustworthiness, openness, and evidence can balance these competing interests.
  • Bowker, G. C., & Star, S. L. (2000).* Sorting things out: Classification and its consequences. MIT Press.
    • Classification, or the process of grouping things according to shared qualities or characteristics, is a foundational class of machine learning problems. This book examines how classification, as an information infrastructure, has shaped human society from social, moral, and political standpoints. The authors draw numerous examples from health and medicine (e.g., the International Classification of Diseases and the classification of viruses) but also dedicate a chapter to racial classification during Apartheid.
  • Bucher, T. (2018). If… then: Algorithmic power and politics. Oxford University Press. http://dx.doi.org/10.1093/oso/9780190493028.001.0001
    • This book outlines how algorithms enter our social fabric and then act as political agents to “shape social and cultural life.” The author’s key contributions are: (a) offering a new ontology for algorithms, (b) identifying various forms of algorithm power and politics, and (c) providing a theoretical framework for the actions of algorithms. 
  • Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 81, 77-91. http://proceedings.mlr.press/v81/buolamwini18a.html
    • This paper finds that existing benchmarks used for facial recognition and AI research are composed of a majority of lighter-skinned subjects. The authors propose an alternative benchmark with a balanced sample of skin tones and audit three commercial gender classifiers for faces. Performance is shown to be significantly worse for darker-skinned females, whereas all classifiers perform best for lighter-skinned males. These results illustrate the substantial racial disparities in algorithms that are actively deployed for automatic classification.
  • Calo, R., & Rosenblat, A. (2017). The taking economy: Uber, information, and power. Columbia Law Review, 117(6), 1623-1690.
    • Technology companies such as Uber and Airbnb have popularized the “sharing economy,” where goods and services are shared between private individuals over the internet. The authors argue that asymmetries of information and power are fundamental to understanding and critiquing the sharing economy. For an effective legal response to prevent these companies from abusing their users, the authors claim that regulators must gain insight into how digital data is manipulated and remove the incentives for abusing these asymmetries.
  • Crawford, K., et al. (2019). AI Now 2019 Report. AI Now Institute at New York University. https://ainowinstitute.org
    • This report is part of an annual collaboration between researchers in both academia and industry on the implications of AI. The authors describe key recommendations for safeguarding against the deployment of high-risk, potentially harmful algorithms through government regulation and industry initiatives. They also summarize the most significant publications in the areas of AI fairness, accountability, and transparency, and contextualize this work against contemporary issues like climate change.
  • Dauvergne, P. (2020). Is artificial intelligence greening global supply chains? Exposing the political economy of environmental costs. Review of International Political Economy, 1-23.
    • This paper explores the disconnect between the supposed and real-world impacts of artificial intelligence in managing supply chains. At a micro-level, AI facilitates gains in efficiency and productivity to reduce carbon emissions and power consumption at each stage of the supply chain. However, these gains are offset by macro-level acceleration of production, consumption, and extraction that incur large environmental costs.
  • Espeland, W. N., & Sauder, M. (2016).* Engines of anxiety: Academic rankings, reputation, and accountability. Russell Sage Foundation.
    • Goodhart’s Law states that “when a measure becomes a target, it ceases to be a good measure.” This book explores how the rankings of United States law schools have profoundly shaped legal education through the creation of an all-defining hierarchy. Through the analysis of observational data and interviews with members of the legal profession, the authors reveal that, in the pursuit of maximizing their rankings, law schools have negatively impacted their students, educators, and administrators.
  • Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
    • This book investigates how big data algorithms are systematically used to oppress the poor in the United States. The author’s approach is that of a storyteller, taking readers into the lives of individuals as they are “profiled, policed, and punished.” Social justice is central to the book’s argument, as it advocates not for the feckless application of technology but rather for a deep, humane commitment to the eradication of poverty.
  • Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
    • This book provides a cultural analysis of major events on the Internet, revealing how Big Tech moderates online content. The author argues that, instead of the devolved, democratic space for social participation it was originally envisioned to be, the Web has become consumed by corporate agendas that shape online discourse. The author proposes that the debate over online platforms move towards a more critical discussion of the Web’s structural issues, as opposed to focusing on individual controversies. 
  • Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854-871.
    • The author argues that the governance of online platform companies demands reconsideration as they begin playing larger roles in the social and political aspects of society. Gorwa contextualizes current notions of platform governance, focusing on the concepts of self-governance, external governance, and co-governance. The author then posits normative principles for future work in platform governance.
  • Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Eamon Dolan Books.
    • This book explores the origins and ramifications of the “ghost work” employed by Big Tech corporations. In order to support the operation of their vast online platforms and services, these corporations use a hidden labor force to perform crowdsourced microtasks such as data labeling, content moderation, and service fine-tuning. Employment through ghost work, the authors argue, arises paradoxically out of the development of AI-based automation that otherwise threatens traditional labor. In turn, growing concerns about this new underclass of workers need to be addressed, such as accountability, trust, and insufficient regulation of on-demand work.
  • Gritsenko, D., & Wood, M. (2022). Algorithmic governance: A modes of governance approach. Regulation & Governance, 16(1), 45-62.
    • This paper hybridizes notions of governance from political science and behavioral economics to provide a new lens to understand the design and regulation of algorithms. The authors use speeding, disinformation, and social sharing as case studies of hierarchical governance, self-governance, and co-governance, respectively. The case studies highlight that AI will engender changes in our conceptions of all three forms of governance, but the impact is unequal across the three.
  • Harcourt, B. E. (2008).* Against prediction: Profiling, policing, and punishing in an actuarial age. University of Chicago Press.
    • Actuarial science involves the application of mathematics and statistics to assess and manage risk. This book challenges the success attributed to actuarial methods in criminal justice, arguing that they have instead warped the notion of “just punishment” and made life more difficult for the poor and marginalized.
  • Holstein, K., et al. (2019). Improving fairness in machine learning systems: What do industry practitioners need? In S. Brewster & G. Fitzpatrick (Eds.), Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-16). Association for Computing Machinery. https://doi.org/10.1145/3290605.3300830
    • Seeking to evaluate how private-sector AI practitioners address fairness in machine learning, this paper conducts surveys and qualitative studies of experts in the industry. It finds that the real-world demands of developing fair AI algorithms are often misaligned with the research on fairness in machine learning. These difficulties are exacerbated by the multiple technical and organizational barriers impeding fairer sociotechnical systems. For example, the fairness literature focuses largely on designing unbiased algorithms in artificial, isolated tasks, whereas practitioners need support in curating datasets for use in rich, nuanced contexts.
  • Hutchinson, B., et al. (2021). Towards accountability for machine learning datasets: Practices from software engineering and infrastructure. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 560-575).
    • This paper posits that greater transparency and accountability are required to better understand the creation and evolution of datasets used to train machine learning algorithms. The authors list desiderata for the dataset documentation process and demonstrate how the software development lifecycle serves as a robust, existing framework to model this process.
  • Jacobs, J. (1961).* The death and life of great American cities. Random House.
    • This book is a critique of urban planning in the 1950s, arguing that problematic policy is to blame for the decline of neighborhoods across the United States. In the author’s view, cities take on a life akin to a biological organism: a healthy city is characterized by diversity, a sense of community, and thriving streets that draw inhabitants into cafes, restaurants, and other places of gathering. The author contrasts the healthy city with government housing projects to demonstrate the separation of the haves and have-nots, a trend that is now being automated with big data and machine learning algorithms.
  • Khan, L. M. (2016). Amazon’s antitrust paradox. The Yale Law Journal, 126(3), 710-805.
    • Antitrust laws exist to protect consumers from predatory or monopolistic business practices. The author argues current antitrust laws fail to capture the reality of Amazon’s position as a digital platform because Amazon is incentivized to pursue growth over profit and controls the infrastructure that enables its rivals to function. 
  • Kramer, A. D., et al. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788-8790. https://doi.org/10.1073/pnas.1320040111
    • In this article, researchers affiliated with Facebook detail a large-scale online experiment conducted on hundreds of thousands of users on the platform. By manipulating information available to users via the Facebook News Feed, they find evidence of emotional contagion through changes in users’ sharing behaviors. This paper received international publicity due to its controversial methodology, and its revelation of large-scale experiments run by private-sector online platforms.
  • Krutka, D. G., et al. (2021). Don’t be evil: Should we use Google in schools? TechTrends, 65(4), 421-431.
    • This paper uses Google as a case study to explore the relation between Big Tech and the domain of education. The authors audit the ethical, legal, and pedagogical facets of using Google technology in schools. Their analysis highlights Google’s extraction and commodification of student data and lack of transparency in their operations, demonstrating a critical need to examine Big Tech’s role in schools more closely.  
  • MacKenzie, D. (2007).* An engine, not a camera: How financial models shape markets. MIT Press.
    • This book combines concepts from finance, sociology, and economics to argue that economic models do not merely capture trends in markets but actively shape them. The author contextualizes the argument through the financial crises of 1987 and 1998, although parallels can also be drawn to the 2007 subprime mortgage crisis. These concepts naturally extend from economic models to algorithms.
  • Madaio, M. A., et al. (2020, April). Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-14).
    • This paper produces a concrete checklist for AI practitioners and organizations on how best to develop more ethical AI. The authors engaged in an iterative co-design process with AI practitioners to facilitate the checklist’s integration with existing team workflows. The authors also list key desiderata underlying the checklist to allow future organizations to customize the checklist as their practices evolve.
  • Massanari, A. (2017). #Gamergate and the Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346. https://doi.org/10.1177/1461444815608807
    • This article discusses toxicity in online communities through an ethnographic study of the Reddit platform. Specifically, it considers two instances of misogynist, anti-social behavior in Reddit subgroups that resulted in the systematic harassment of women. The author argues that the platform’s algorithmic content ranking and hands-off moderation rules come together to provide fertile ground for toxic cultures to flourish.
  • Mittelstadt, B. D., et al. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 2053951716679679.
    • This paper puts forward key considerations that need to be addressed as decision-making and classification algorithms become more prevalent in mediating daily life. The authors outline a set of epistemic and normative concerns raised by AI algorithms to act as a framework for future discourse in the field, and maintain that the framework itself can be gradually improved as the field evolves. The authors conclude by connecting existing literature to the proposed framework and noting areas for further research.
  • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
    • This book critiques the notion that search engines equally promote “all ideas, identities, and activities” and argues that they rather serve as a platform for racism and sexism. It stresses that results provided by Google, Bing, or other engines are not neutral but rather “reflect the political, social, and cultural values of the society [they] operate within.” In later chapters, the author extends the argument to the broader work conducted by professionals in library and information science. 
  • Obermeyer, Z., et al. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. https://doi.org/10.1126/science.aax2342
    • This article empirically analyzes a widely adopted algorithm used for predicting patient risk and allocating care in hospitals. The authors find that the algorithm is systematically biased against Black patients and suggest that one source of this flaw is the algorithm’s use of health care costs as a proxy for patient health. Due to latent biases in health data, such as reduced spending on care for Black patients, the authors demonstrate that algorithms are especially prone to racial disparities when used in high-impact scenarios.
  • O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
    • This book takes a wide survey of how big data algorithms affect society and draws examples from education, advertising, criminal justice, employment, and finance. The author places special emphasis on drawing attention to areas of society where it is not immediately clear that algorithms are making decisions. The three characteristics of a “Weapon of Math Destruction” include: (1) scale, (2) secrecy, and (3) destructiveness. 
  • Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
    • This book draws attention to the secrecy and complexity of algorithms being used on Wall Street and in Silicon Valley. The author also argues that demanding transparency is only part of the solution, and that the decisions of these algorithms must be held to the standards of fairness, non-discrimination, and openness to criticism.
  • Raghavan, M., et al. (2020). Mitigating bias in algorithmic hiring: Evaluating claims and practices. In M. Hildebrandt & C. Castillo (Eds.), Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 469-481). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372828
    • In response to the increasing amount of public scrutiny on the use of algorithmic tools in private sector hiring, this paper conducts a qualitative survey of vendors providing algorithmic solutions for employee assessment. It identifies the features analyzed by the vendors such as video recordings, how the vendors claim to have validated their results, and whether fairness is considered. The authors conclude with policy and technical recommendations for ensuring more effective, appropriate, and fair algorithmic hiring practices.
  • Rosenblat, A. (2018). Uberland: How algorithms are rewriting the rules of work. University of California Press.
    • This book takes an ethnographic approach to unveil how Uber asserts control over its drivers and has also shaped the dialogue in areas such as sexual harassment and racial equity. Through interviews with drivers across the United States and Canada, the author grapples with ideas such as freedom, independence, and flexibility touted by the company while also illuminating its pervasive surveillance and information asymmetries.
  • Schelling, T. C. (1978).* Micromotives and macrobehavior. WW Norton & Company. 
    • This book expands on the idea of the “tipping point” first proposed by Morton Grodzins. The tipping point refers to the moment when a group rapidly adopts a previously rare and seemingly unimportant practice and undergoes significant change as a result. A major theme in this book is “social sorting,” such as when neighborhoods cluster by race due to the preference of inhabitants to live around people who look like themselves.
  • Scott, J. C. (1998).* Seeing like a state: How certain schemes to improve the human condition have failed. Yale University Press.
    • This book offers a critique of the top-down social planning done by states around the world and insights into why they fail. Four conditions common to failed social planning initiatives include: (1) an attempt to impose order on society and nature, (2) a belief that science can improve all aspects of life, (3) willingness to resort to authoritarianism, and (4) a helpless civil society.
  • Sunstein, C. R. (2018). #Republic: Divided democracy in the age of social media. Princeton University Press.
    • In this book, a founding scholar of nudge theory analyzes the risks of large, pervasive online platforms driven by personalization algorithms. The author argues that the major social dangers of the Internet lie in its enabling of self-insulation through filter bubbles and echo chambers, which in turn poses threats to democratic institutions. The author proposes potential regulatory and design changes to reduce polarization and improve deliberation online. For example, platforms can help users explore opposing viewpoints by implementing randomization features like a serendipity button, in contrast to highly tailored recommendations.
  • Sweeney, L. (2013). Discrimination in online ad delivery. Communications of the ACM, 56(5), 44-54. http://dx.doi.org/10.2139/ssrn.2208240
    • The author presents a quantitative investigation of the online advertisements recommended by Google’s Adsense when searching for different racially associated names in 2012. The author finds that searches for names associated with Black babies, including her own, almost always yielded ads suggestive of an arrest. This occurred regardless of whether individual names were attached to an actual arrest record. In contrast, far fewer ads generated for White-identifying names suggested criminality or arrest.
  • Vallas, S., & Schor, J. B. (2020). What do platforms do? Understanding the gig economy. Annual Review of Sociology, 46, 273-294.
    • This paper focuses on issues of platform governance as they pertain to labor and the gig economy. The authors present four prominent metaphors of platforms in contemporary scholarship and argue that a fifth metaphor that views platforms as distinct from existing market or network structures better captures the intricacies of platform dynamics. They conclude by highlighting the legal and regulatory struggles unique to platforms, and note areas of further research to manage these concerns.
  • Wachter, R. M., & Cassel, C. K. (2020). Sharing health care data with digital giants: Overcoming obstacles and reaping benefits while protecting patients. JAMA, 323(6), 507-508.
    • In response to the steady stream of news updates around the entry and involvement of major technology companies (e.g., Google, Apple, Amazon) in healthcare, this commentary proposes ideals for a collaborative path forward. The authors emphasize transparency (especially around financial disclosures and conflicts of interest), direct consultation with patients and patient advocacy groups, and data security.
  • Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
    • This book draws a common thread between digital technology companies by arguing that they engage in “surveillance capitalism.” Surveillance capitalists provide free services for behavioral data, which are then used to create “prediction products” of future consumer behavior. These products are then traded in “behavioral futures markets,” which generates large amounts of wealth for surveillance capitalists. The author argues that surveillance capitalism is becoming a dominating force in not just economics, but society as a whole.

Chapter 6. Normative Modes: Codes and Standards (Paula Boddington)

https://www.doi.org/10.1093/oxfordhb/9780190067397.013.7

  • Arrieta, A. B., et al. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115.
    • This paper aims to shed light on eXplainable AI (XAI), a field still gaining momentum through its focus on the practical deployment of artificial intelligence (AI) models. The authors summarize explainability in the realm of machine learning and the benefits it offers across different sectors of AI activity and their normative tasks. Their goal is to help lay-persons understand future research directions in AI and XAI. Overall, they present a taxonomy of contributions related to the explainability of different machine learning models and discuss the challenges that XAI faces.
  • Atkinson, P. (2009).* Ethics and ethnography. Twenty-First Century Society, 4(1), 17-30. http://doi.org/10.1080/17450140802648439
    • This paper, drawing on previous work, concentrates on how ethnographic research is conducted. Atkinson argues that the field lacks development, specifically as it relates to sociology and anthropology. Ethnographic field research poses practical challenges for regulation, exposing the insufficient understanding of social life embedded in today’s regulatory regimes.
  • Balfour, D., et al. (2014).* Unmasking administrative evil. Routledge. 
    • This book argues that a deep-seated administrative evil is present in the state of public affairs, resulting in crimes against humanity such as genocide. By performing duties in line with their occupation, agents can not only disregard their participation in this administrative evil but can also suffer from moral inversion: participating in evil while believing what they are doing to be morally good.
  • Banja, J., et al. (2021). Sharing and selling images: Ethical and regulatory considerations for radiologists. Journal of the American College of Radiology, 18(2), 298-304. 
    • This article covers the regulatory standards and ethical perspectives relevant to current data agreements, specifically how data holders uphold ethical and regulatory standards. The authors discuss four ways to address data sharing or selling arrangements specific to radiology, examining “big data” systems and presenting the ethical and regulatory implications of sharing and selling images in radiology.
  • Baumer, D. L., et al. (2004).* Internet privacy law: A comparison between the United States and the European Union. Computers & Security, 23(5), 400-412. https://doi.org/10.1016/j.cose.2003.11.001 
    • This article compares privacy law in the United States to privacy law in the European Union, examining these laws as they relate to the regulation of websites and online service providers. A central issue for regulation is that privacy laws and practices vary by region, whereas the Internet is worldwide.
  • Benkler, Y. (2019). Don’t let industry write the rules for AI. Nature, 569(7754), 161-162. https://doi.org/10.1038/d41586-019-01413-1
    • This article argues that technology companies seek to influence AI regulation for the benefit of their companies. To combat this, Benkler argues that governments need to use leverage to limit company influence on policy.
  • Boddington, P. (2017).* Towards a Code of Ethics for Artificial Intelligence. Springer. 
    • This book works toward understanding the task of producing ethical codes and regulations in the rapidly advancing field of artificial intelligence, examining ethical and practical issues in the development of these codes. Boddington’s book creates a resource for those who wish to address the ethical challenges of artificial intelligence research.
  • Castelvecchi, D. (2021). Prestigious AI meeting takes steps to improve ethics of research. Nature, 589(7840), 12-13.
    • This article notes how artificial intelligence (AI) research is subject to ethical scrutiny. The Neural Information Processing Systems (NeurIPS) conference demonstrates an increasing focus on harmful uses of AI technologies and how the AI community is increasingly aware of these consequences. Importantly, these meetings included conversations about policing AI and how ethical thinking should be the foundation of machine learning technologies.
  • Cath, C. (2018). Governing artificial intelligence: Ethical, legal, and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 1-8.
    • This article describes the proliferation of artificial intelligence across critical infrastructures and more mundane activities such as dating. Accounting for the potentialities and pitfalls of these varying applications, the author asks how AI might be designed and governed to be accountable, fair, and transparent. Exploring some of the technical, regulatory, and normative challenges of developing governance regimes for artificial intelligence, it provides an overview of recent developments in AI policy, including regulatory agendas, ethical frameworks, and technological prospects. The author concludes by making concrete suggestions for policymakers looking to manage the role of artificial intelligence in society.
  • Cath, C., et al. (2018). Artificial Intelligence and the ‘Good Society’: the US, EU, and UK approach. Science and Engineering Ethics, 24, 505–528. https://doi.org/10.1007/s11948-017-9901-7
    • This article compares three reports published in October 2016 by the White House, the European Parliament, and the United Kingdom House of Commons on how to prepare society for the emergence of AI. This article uses these reports to provide a framework for developing good AI policy. The authors argue that these reports fail to express a long-term strategy for developing a good AI society, and conclude with a two-pronged solution to fill this gap.
  • Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 1-5.
    • This article introduces data ethics as a new and specialized branch of ethics committed to studying moral problems related to the rise of data-driven technologies. The authors explore the relationship between data ethics and computer and information ethics, highlighting the similarities and divergences between these related fields. Unlike computer and information ethics, which focuses on the generation, analysis, and transference of information, data ethics draws attention to both the content and nature of computational operations, including the interactions between hardware, software, and data that characterize contemporary technologies. The authors conclude by arguing that data ethics should proceed with a more macro-ethical approach, addressing the impacts of datafication within a holistic and inclusive framework.
  • Gunning, D. (2017).* Explainable Artificial Intelligence (XAI). Defense Advanced Research Projects Agency, DARPA/I20. 
    • This presentation outlines the need for explainable artificial intelligence, wherein users can understand, appropriately trust, and effectively manage these systems. Current AI systems, while extremely useful, are markedly less effective than they could be because the machines often cannot explain their actions to users.
  • Gunning, D., & Aha, D. (2019). DARPA’s Explainable Artificial Intelligence (XAI) Program. AI Magazine, 40(2), 44-58. https://doi.org/10.1609/aimag.v40i2.2850
    • This article provides a detailed look into DARPA’s four-year explainable artificial intelligence (XAI) program. The XAI program aimed to develop AI systems whose operations can be understood and trusted by the user.
  • Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds & Machines 30, 99–120. https://doi.org/10.1007/s11023-020-09517-8
    • This article performs a semi-systematic analysis and comparison of 22 ethical AI guidelines, highlighting omissions and commonalities. Hagendorff also examines how these ethical principles are implemented in the research and development of AI systems, and how their application could be improved.
  • House of Lords Select Committee on Artificial Intelligence. (2018).* AI in the UK: Ready, willing and able? Report of First Session 2017-19. https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf  
    • This report considers the ethical, societal, and economic implications of the development of AI, concluding that the United Kingdom has the potential to be a global leader in the field. The Select Committee on Artificial Intelligence finds that AI can potentially solve complex problems and improve productivity, and that potential risks can be mitigated.
  • Jobin, A., et al. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. https://doi.org/10.1038/s42256-019-0088-2
    • In recent years, private companies, academic institutions, and governments have created principles and ethical codes for artificial intelligence. Despite consensus that AI must be ethical, there is no widespread agreement about the requirements of ethical AI. This article maps and analyzes current ethical principles and codes as they relate to AI.
  • Kroll, J. A. (2021). Outlining traceability: A principle for operationalizing accountability in computing systems. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 758-771). Association for Computing Machinery. 
    • This paper examines how accountability can be operationalized in computing systems, looking at what standards are needed for governable artificial intelligence (AI) and its traceability. It examines how the principles of traceability and accountability could be better articulated in AI standards and principles so that software systems can be governed systematically. In sum, the paper explains how traceability can be preserved in AI systems so that their processes remain faithful to the norms that govern them.
  • Kroll, J. A. (2018). The fallacy of inscrutability. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(1), 1-14. 
    • In contrast to growing fears surrounding the inscrutability of algorithmic systems, this article argues that algorithms remain fundamentally understandable technologies. Though existing power structures often obfuscate algorithmic decision-making, the author notes that software systems are designed and operate according to specific choices, assumptions, and goals that can be deciphered and explained. Rather than acquiescing to the black box of algorithms, then, the article concludes that digital governance frameworks should facilitate accessible and publicly accountable system design and evaluation methods, exposing and formalizing the assumptions and goals embedded in particular systems.
  • Kukutai, T., & Taylor, J. (2016). Indigenous data sovereignty: Toward an agenda. Australian National University Press. 
    • This book looks at the ways in which data collection, analysis, and disaggregation can be used to measure how the rights of indigenous peoples are being met, as well as what controls they are afforded over decision-making and data ownership. It explores the importance of financial, technical, and governmental resources in carrying out this work. It presents the concept of indigenous data sovereignty as representative of indigenous peoples’ right to maintain, control, and protect both their data and their heritage.
  • Lee, S. S. (2022). Philosophical evaluation of the conceptualisation of trust in the NHS’ code of conduct for artificial intelligence-driven technology. Journal of Medical Ethics, 48, 272-277. https://doi.org/10.1136/medethics-2020-106905
    • This article focuses on the United Kingdom Government’s Code of Conduct for data-driven technologies in health care, specifically artificial intelligence (AI) technologies. It evaluates the notion of trust in these AI technologies and the ethical implications of their use for health care systems. The author urges that the Code of Conduct emphasize a value-based notion of trust in these AI technologies.
  • London, A. J. (2019). Artificial intelligence and black‐box medical decisions: Accuracy versus explainability. Hastings Center Report, 49(1), 15–21. https://doi.org/10.1002/hast.973
    • This article investigates algorithmic decision-making in medicine, where some of the most powerful machine learning technologies are opaque “black boxes.” The author argues that demands for explainability must be weighed against accuracy: because much of accepted medical practice itself rests on empirical evidence rather than well-understood mechanisms, insisting that AI systems explain their recommendations before use may come at the cost of patient benefit.
  • Marda, V. (2018). Artificial intelligence policy in India: A framework for engaging the limits of data-driven decision-making. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(1), 1-19.
    • This article looks at the rapid development and adoption of artificial intelligence initiatives across India. The author argues that in India, as in many other jurisdictions, the risks and limitations of artificial intelligence are often considered only retroactively, after individual technologies have been deployed and their problems made clear. Looking at the different stages of machine learning development, the author suggests that technical constraints and potential pitfalls should instead be considered at the time of policy development, informing both aspirations and development processes. Engaging India’s current AI policy landscape directly, the author applies this proposed framework to existing policy deliberations and tensions within the country. 
  • Martin, A., & Freeland, S. (2021). The advent of artificial intelligence in space activities: New legal challenges. Space Policy, 55(3). https://doi.org/10.1016/j.spacepol.2020.101408
    • This paper explores the development of autonomous artificial intelligence (AI) systems and presents the ethical and legal challenges posed by these technologies. The authors discuss how AI raises important social, economic, technological, legal, and ethical issues that need to be addressed. They analyze AI in the context of space systems and showcase the legal issues raised by the deployment of AI-based autonomous systems. 
  • Metzinger, T. (2018). Towards a global artificial intelligence charter. In European Parliament Research Service (Ed.), Should we fear artificial intelligence? (pp. 27–33).
    • Metzinger argues that the public debate on artificial intelligence must move into political institutions. These institutions must produce a set of ethical and legal constraints on the development and use of AI that are sufficient while remaining minimally intrusive. Metzinger provides a list of the five most important problem domains in the field of AI ethics and gives recommendations for each. 
  • Mohamed, S., et al. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33(1), 659-684. https://doi.org/10.48550/arXiv.2007.04068
    • Drawing on decolonial theories’ historicizing method, the authors argue that AI communities can better understand the patterns of power and coloniality that shape the world in which individual technologies are designed and deployed. Likewise, decolonial theory can help these communities to better align technological innovation with established ethical principles, centring vulnerable peoples in both policy and technological design. The authors close by suggesting three tactics that can form the bedrock of a decolonial approach to AI: critical technical practices, reverse tutelage and reverse pedagogies, and the renewal of affective communities. 
  • Niederer, S., & Chabot, R.T. (2015). Deconstructing the cloud: Responses to Big Data phenomena from social sciences, humanities, and the arts. Big Data & Society, 2(2), 1-9. https://doi.org/10.1177/2053951715594635
    • This article critically engages the metaphor of the Cloud that often appears in discussions of Big Data and machine learning. Promising endless storage space and freedom from spatiotemporal restrictions, the authors suggest that the idea of an ephemeral Cloud obfuscates the more complicated and embodied reality of digital infrastructures. Drawing on work from the humanities, the social sciences, and the arts, the article highlights the blend of hardware and software necessary to maintain and operate many contemporary technologies, including data analytics dashboards and smart technologies.
  • Nemitz, P. (2018). Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 1-14. https://doi.org/10.1098/rsta.2018.0089
    • This article suggests that contemporary trends in the development of artificial intelligence, such as existing concentrations of power, represent a threat to constitutional democracy. It asks how new technologies might be shaped to maintain and strengthen constitutional democracy instead, calling for the incorporation of democratic principles in the development of technological impact assessments moving forward. Looking at the historical relationship between technology and the law, the author clarifies which challenges of AI can be left to discretionary ethics and which need to be addressed proactively by encompassing and enforceable laws.
  • Nissenbaum, H. (2004). Privacy as contextual integrity. Washington Law Review, 79(1), 119-158.
    • This article argues that conventional theoretical approaches to privacy employed for common privacy concerns are not sufficient to yield appropriate conclusions in light of the development of public surveillance. Nissenbaum argues for a new construct, contextual integrity, that will act as a replacement for traditional theories of privacy. 
  • Panagia, D. (2021). On the possibilities of a political theory of algorithms. Political Theory 49(1), 109-133. 
    • This article aims to move beyond epistemic analysis and develop a political theory of algorithms. The author highlights the so-called “dispositional power” of algorithms, namely their ability to manage mobilities across both space and time. The author concludes by reflecting critically on the negative feedback loops involved in algorithmic operation and the more general value that a virtual ontology adds to contemporary political reflections. 
  • Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data & Society, 4(2), 1-14. 
    • This article suggests that the focus of data discourses has remained primarily technical, and that the discriminatory power of data has not yet been connected to a broader social justice agenda. The author proposes the idea of data justice as necessary to determine a broader data ethics moving forward. Three pillars are suggested for this notion of data justice: (in)visibility, (dis)engagement with technology, and antidiscrimination, each of which integrates both positive and negative rights and freedoms. 
  • The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2019).* Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems. IEEE. https://ethicsinaction.ieee.org 
    • This treatise is a globally crowdsourced, collaborative document based on a previous call for input and two hundred pages of feedback. It aims to provide practical insights and to act as a reference work for professionals involved in the ethics of artificial intelligence, and it includes policy recommendations. 
  • Vinuesa, R., et al. (2020). The role of artificial intelligence in achieving the sustainable development goals. Nature Communications, 11(1), 233. https://doi.org/10.1038/s41467-019-14108-y
    • This article implements a consensus-based expert elicitation process to evaluate how artificial intelligence (AI) technologies may enable or inhibit the achievement of the United Nations Sustainable Development Goals. The authors highlight how current research overlooks important factors associated with these technologies. Overall, they argue that AI systems need to be supported by regulatory schemes to ensure that these technologies meet ethical, transparency, and safety standards.
  • Weller, A. (2017). Challenges for transparency. In W. Samek, G. Montavon, A. Vedaldi, L. Hansen, & K. R. Müller (Eds.), Explainable AI: Interpreting, explaining and visualizing deep learning (pp. 23-40). Springer.
    • This chapter provides an overview of the concept of transparency, which takes varying forms and whose definition varies by context, making it difficult to determine objective criteria for measuring it. Weller also examines contexts wherein transparency can cause harm. 
  • Whittlestone, J., et al. (2019). The role and limits of principles in AI ethics: Towards a focus on tensions. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 195-200). https://doi.org/10.1145/3306618.3314289
    • This article draws on comparisons within the field of bioethics to highlight limitations of principles applied to AI ethical guidelines, such as fairness, privacy, and autonomy. The authors argue that the field of AI ethics needs to progress to exploring tensions that exist within these established principles. They offer potential solutions to these tensions. 
  • Winfield, A. F., & Jirotka, M. (2018). Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0085
    • This paper examines ethical governance for artificial intelligence systems and robots. The authors argue that ethical governance is needed in order to create public trust in these new technologies. They conclude by proposing five pillars of effective ethical governance: ethical codes of conduct, ethics and responsible innovation (RI) training, responsible innovation, transparency, and sincerity. 
  • Winfield, A. F., et al. (2019). Machine ethics: The design and governance of ethical AI and autonomous systems. Proceedings of the IEEE, 107(3), 509-517. 
    • This paper focuses on the fourth industrial revolution, which includes AI and machine learning systems, as discussed at the 2016 World Economic Forum in Davos. It argues that the economic and societal implications of the fourth industrial revolution are no longer of concern only to academics but are important matters for politics and public debate. 
  • Zeng, Y., Lu, E., & Huangfu, C. (2018). Linking artificial intelligence principles. In Proceedings of the AAAI Workshop on Artificial Intelligence Safety (AAAI-Safe AI 2019).
    • In this article, the authors propose LAIP (Linking Artificial Intelligence Principles) as a framework for analyzing various AI principles. Rather than adopting one pre-developed set of AI principles, the authors propose combining existing frameworks, allowing for interaction.

Chapter 7. The Role of Professional Norms in the Governance of Artificial Intelligence (Urs Gasser and Carolyn Schmitt)

https://www.doi.org/10.1093/oxfordhb/9780190067397.013.8

  • Abbott, A. (1983).* Professional ethics. American Journal of Sociology, 88(5), 855-885. https://doi.org/10.1086/227762
    • Through comparative analysis, the author establishes five basic properties of professional ethics codes: universal distribution, correlation with intra-professional status, enforcement dependent on visibility, individualism, and an emphasis on colleague obligations. The author then adds a third perspective, relating ethics directly to intra- and extra-professional status. Finally, the author analyzes developments in professional ethics in America since 1900, specifying the interplay of the three processes hypothesized in the competing perspectives.
  • Anthony, K. H. (2001).* Designing for diversity: Gender, race, and ethnicity in the architectural profession. University of Illinois Press.
    • The author argues that the traditional mismatch between diverse consumers and the predominantly white, male producers of the built environment, combined with the shifting population balance toward communities of color, leaves the architectural profession lacking true diversity, at its own peril.
  • Bender, E. M., et al. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922
    • The authors discuss the recent trend in natural language processing toward developing and deploying ever-larger language models, and point to its disproportionate impact on marginalized communities. They provide recommendations and a framework for approaching research and development goals that consider environmental, energy, and financial costs. Prior to publication, Google terminated the employment of authors Timnit Gebru and Margaret Mitchell. 
  • Boddington, P. (2017). Towards a code of ethics for artificial intelligence. Springer International Publishing. https://doi.org/10.1007/978-3-319-60648-4
    • The author discusses the challenges of developing codes of ethics for artificial intelligence. The book describes various formats for codes of conduct, regulation, and guidance, addressing features common among the professional settings and bodies that inform their development. It also reviews several professional ethics codes and proposes how such codes might be developed for AI. Finally, it situates professional codes of ethics in their historical and institutional context and offers an approach for understanding and overcoming the challenges AI poses to the development of professional ethics. 
  • Bynum, T. W., & Simon, R. (2004).* Computer ethics and professional responsibility. Wiley Blackwell. 
    • The authors provide a discussion of topics such as the history of computing; the social context of computing; methods of ethical analysis; professional responsibility and codes of ethics; computer security, risks and liabilities; computer crime, viruses and hacking; data protection and privacy; intellectual property and the “open source” movement; and global ethics and the internet.
  • Cech, E., et al. (2011). Professional role confidence and gendered persistence in engineering. American Sociological Review, 76(5), 641-666. https://doi.org/10.1177/0003122411420815
    • The authors use surveys to analyze student persistence in engineering. They define the term “professional role confidence,” composed of career-fit confidence and expertise confidence, which represents an individual’s confidence in their ability to successfully fill the roles, competencies, and identity features of a career. They find that professional role confidence is a significant predictor of persistence in engineering and that women possess less of it than men.
  • Coldewey, D. (2018, June 7). Google’s new ‘AI principles’ forbid its use in weapons and human rights violations. TechCrunch. https://techcrunch.com/2018/06/07/googles-new-ai-principles-forbid-its-use-in-weapons-and-human-rights-violations/
    • The author reports on Google’s introduction of a set of AI principles that outlines how it will and will not deploy its technology. They describe the seven principles (be socially beneficial, avoid creating or reinforcing unfair bias, be built and tested for safety, be accountable to people, incorporate privacy design principles, uphold high standards of scientific excellence, and be made available for uses that accord with these principles), and discuss what Google claims its technology will not do. 
  • Davis, M. (2015).* Engineering as profession: Some methodological problems in its study. In S. Christensen, C. Didier, A. Jamison, M. Meganck, C. Mitcham, B. Newberry (eds), Engineering Identities, Epistemologies and Values. Philosophy of Engineering and Technology, vol 21. Springer. https://doi.org/10.1007/978-3-319-16172-3_4
    • The author considers engineering practice including contextual analyses of engineering identity, epistemologies, and values. They examine such issues as an engineering identity, engineering self-understandings enacted in the professional world, distinctive characters of engineering knowledge, and how engineering science and engineering design interact in practice.
  • Evetts, J. (2003).* The sociological analysis of professionalism: Occupational change in the modern world. International Sociology, 18(2), 395-415. https://doi.org/10.1177/0268580903018002005
    • The author explores the appeal of the concepts of profession and professionalism and the increased use of these concepts across various contexts. The author defines the field, examines two past interpretations of professionalism, describes a third interpretation developed in the 1990s, and considers how various aspects of professionalism are played out in different employment settings.
  • Frankel, M. S. (1989).* Professional codes: Why, how, and with what impact? Journal of Business Ethics, 8(2-3), 109-115. https://doi.org/10.1007/BF00382575
    • The author argues that the tension between a profession’s pursuit of autonomy and the public’s demand for accountability has led to the development of codes of ethics, which act as both foundations and guides for professional conduct in the face of morally ambiguous situations. The author identifies three types of codes: aspirational, educational, and regulatory. 
  • Greene, D., et al. (2019).* Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In Proceedings of the 52nd Hawaii International Conference on System Sciences (pp. 2122-2131). https://hdl.handle.net/10125/59651
    • The authors argue that vision statements for ethical artificial intelligence and machine learning (AI/ML) co-opt the language of some critics, folding them into a limited, technologically deterministic, expert-driven view of what ethical AI/ML means and how it might work. They develop this argument using frame analysis to examine recent high-profile value statements endorsing ethical design for AI/ML.
  • Hagendorff, T. (2020). The ethics of AI ethics: an evaluation of guidelines. Minds and Machines, 30, 99-120. https://doi.org/10.1007/s11023-020-09517-8
    • The author reviews, analyzes, and compares 22 guidelines for ethical AI, highlighting areas of overlap and clear omissions. The author discusses AI in practice, including how commercial AI development and the AI race between countries impact the ethics of AI in practice. They conclude with a discussion of how AI ethics have changed over time. 
  • Husted, B. W., & Allen, D. B. (2000). Is it ethical to use ethics as strategy? In J. Sójka & J. Wempe (Eds.), Business challenging business ethics: New instruments for coping with diversity in international business (pp. 21-31). Springer.
    • The authors seek to define a strategy concept that situates the different approaches to the strategic use of ethics and social responsibility found in the current literature. They then analyze the ethics of such approaches from both utilitarian and deontological perspectives and end by defining limits to the strategic use of ethics.
  • Jobin, A., et al. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389-399. https://doi.org/10.1038/s42256-019-0088-2
    • Despite the multitude of organizations that have released ethical AI guidelines in recent years, the authors note that there still exists much debate over what is “ethical AI”. By analyzing the current guidelines and principles for ethical AI, the authors identify a convergence on five ethical principles: transparency, justice and fairness, non-maleficence, responsibility, and privacy. However, they highlight that there is still much disagreement over how these principles are interpreted, why they are important, who they apply to, and how they should be implemented.
  • Johnson, A. M., Jr. (1997).* The underrepresentation of minorities in the legal profession: A critical race theorist’s perspective. Michigan Law Review, 95(4), 1005-1062. https://doi.org/10.2307/1290052
    • The author discusses the importance of the development of Critical Race Theory for the legal profession and larger society, and seeks to explore whether Critical Race Theory can have a positive effect, or indeed any effect, on those outside legal academia. 
  • Magarian, J., & Seering, W. (2021). Characterizing engineering work in a changing world: synthesis of a typology for engineering students’ occupational outcomes. Journal of Engineering Education, 110(2), 458-500. https://doi.org/10.1002/jee.20382
    • The authors aim to provide a typology for researchers to categorize students’ occupational outcomes, considering the increase in engineering-related occupations in recent years. They find that engineers’ possession of design responsibility is a unifying work attribute that continues to persist. The resulting responsibility-based typology can be used to help educators study student persistence as engineering careers change. 
  • Mattingly-Jordan, S., et al. (2019). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems glossary (1st ed. draft). Glossary Committee of The IEEE Global Initiative. https://standards.ieee.org/wp-content/uploads/import/documents/other/ead1e_glossary.pdf
    • This glossary provides reference definitions for terms appearing in the IEEE Ethically Aligned Design document. The goal of the document is to serve as a shared resource for interdisciplinary teams to understand common terms that carry discipline-specific meanings. The glossary defines terms by referencing their usage across six discipline categories: ordinary language; computational disciplines; economics and social science; engineering disciplines; philosophy and ethics; and international law and policy.
  • McNamara, A., et al. (2018). Does ACM’s code of ethics change ethical decision making in software development?. In Proceedings of the 26th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE ’18). https://doi.org/10.1145/3236024.3264833 
    • Following the Association for Computing Machinery’s (ACM) code of ethics update, the authors replicated a behavioral ethics study with software engineering students and professionals to determine how codes of ethics change software-related decisions. They found that explicitly instructing participants to consult the code of ethics had no effect, and they ask what techniques other than codes of ethics might improve ethical decision making in software engineering.
  • Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1, 501-507. https://doi.org/10.1038/s42256-019-0114-4
    • The author argues that AI ethics meta-analyses have converged on a set of four ethical principles similar to the principles of medical ethics; however, AI and medicine are significantly different. AI currently lacks common aims and duties, a professional history and norms, proven methods for moving from principles to practice, and legal and professional accountability mechanisms. These principles therefore conceal political and normative disagreement, and principles alone cannot guarantee ethical AI. 
  • National Science and Technology Council Committee on Technology. (2016). Preparing for the future of artificial intelligence. Executive Office of the President. https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf
    • This report surveys technical developments and makes specific recommendations to the Obama White House on advances in Artificial Intelligence. It proposes directives to Federal government agencies and other bodies. It discusses the role the government plays in developing the workforce and gives additional policy recommendations for monitoring and supporting AI research. It identifies fairness, safety, accountability, and governance as primary concerns. 
  • Noordegraaf, M. (2007).* From “pure” to “hybrid” professionalism: Present-day professionalism in ambiguous public domains. Administration & Society, 39(6), 761-785. https://doi.org/10.1177/0095399707304434
    • The author aims to answer the following questions: What is professionalism? What is professional control in ambiguous occupational domains? What happens when different types of occupational control get mixed up? The author argues that the solution lies in portraying classic professionalism as “controlled content,” tracing the transition from “pure” to “hybrid” professionalism, and portraying present-day professionalism as “content of control” instead of controlled content.
  • O’Leary, D. E. (2019).* Google’s Duplex: Pretending to be human. International Journal of Intelligent Systems in Accounting and Finance Management, 26(1), 46-53. https://doi.org/10.1002/isaf.1443
    • The author analyzes Google’s Duplex, a computer‐based system with natural language capabilities that performs tasks through human-sounding conversation, and some of the initial reactions to the system and its capabilities. The author uses the applications and characteristics of Duplex to investigate the ethics of pretending to be human, suggesting that such impersonation is against evolving computer codes of ethics.
  • Oz, E. (1993).* Ethical standards for computer professionals: A comparative analysis of four major codes. Journal of Business Ethics, 12(9), 709-726. https://doi.org/10.1007/BF00881385
    • The author compares and evaluates the ethical codes of four major organizations of computer professionals in America. The author analyzes these ethical codes in context of the following obligations that every professional has: to society, to the employer, to clients, to colleagues, to the professional organization, and to the profession.
  • Panteli, A., et al. (1999).* Gender and professional ethics in the IT industry. Journal of Business Ethics, 22(1), 51-61. https://doi.org/10.1023/A:1006156102624
    • The authors discuss the ethical responsibility of the Information Technology (IT) industry towards its female workforce. They present evidence showing that the IT industry is not gender-neutral and that it does little to promote or retain its female workforce. Therefore, the authors urge that professional codes of ethics in IT should be revised to take into account the diverse needs of its staff.
  • Ren, K., & Olechowski, A. (2020). Gendered professional role confidence and persistence of artificial intelligence and machine learning students. Proceedings of the American Society of Engineering Education (ASEE) Annual Conference, Virtual. https://doi.org/10.18260/1-2--34704
    • The authors apply the concept of professional role confidence, composed of career-fit confidence and expertise confidence, to student persistence in the field of machine learning and artificial intelligence. By analyzing professional role confidence and other predictors of student persistence, the authors identify drivers of the gender gap in the field.
  • Rhode, D. L. (1994).* Gender and professional roles. Fordham Law Review, 63(1), 39-72.
    • The author, informed by contemporary feminist jurisprudence, discusses two issues: first, challenges to professional roles, relationships, and the delivery of services; and second, gender bias in the workplace and women’s underrepresentation in positions of the greatest power, status, and reward. Both discussions build on values traditionally associated with women that are undervalued in traditionally male-dominated professions. 
  • Rhode, D. L. (1997). The professionalism problem. William & Mary Law Review, 39(2), 283-326.
    • The author argues that, given the increasing discontent with the legal profession, particularly criticism directed at ethical practices that have widened the gap between professional ideals and professional work, the profession’s competing values must be acknowledged and resolved, as these issues are too significant to continue unmediated. 
  • Shapiro, S. P. (1987). The social control of impersonal trust. American Journal of Sociology, 93(3), 623-658. https://doi.org/10.1086/228791
    • The author discusses the ‘guardians of impersonal trust’ and how they create new problems. In particular, the author argues that the resulting collection of procedural norms, structural constraints, entry restrictions, policing mechanisms, social-control specialists, and insurance-like arrangements increases the opportunities for abuse while encouraging less acceptable trustee performance.
  • Simonite, T. (2018, November 13). The DIY tinkerers harnessing the power of artificial intelligence. Wired. https://www.wired.com/story/diy-tinkerers-artificial-intelligence-smart-tech/
    • The author discusses how many of the tools needed to build AI technology are widely available for anyone to use, which has created a community of hackers and hobbyists using the same technology that powers Silicon Valley. The author reviews four case studies of AI hobbyists: a student teaching a computer to rap; a freshman who built a plant-diagnosis app; a small business owner who built an algorithm to automatically check in clothes dropped off for dry cleaning; and a civil engineer who designed a miniature autonomous vehicle.
  • Stevens, B. (1994). An analysis of corporate ethical code studies: “Where do we go from here?” Journal of Business Ethics, 13(1), 63-69. https://doi.org/10.1007/BF00877156
    • The author seeks to differentiate between ethical codes, professional codes, and mission statements. Ethical code studies are then reviewed in terms of how codes are communicated to employees and whether the consequences of violating codes are discussed. Finally, the author considers how such codes come to be accepted and their impact on employees.
  • Susskind, R. E., & Susskind, D. (2015). The future of the professions: How technology will transform the work of human experts. Oxford University Press.
    • The authors discuss professions in the context of transformative technology. They give historical examples of the development of ideas about professions and detail and relate eight professional domains: health; education; divinity; law; journalism; management consulting; tax and audit; and architecture. They consider the possible impacts of technology on the practices of these professions.
  • The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2019). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems. Institute of Electrical and Electronics Engineers. https://standards.ieee.org/industry-connections/ec/ead-v1.html
    • This document aims to establish societal and policy guidelines for autonomous and intelligent systems to promote ethical and human-centric development. It provides a reference of pragmatic and directional recommendations for technologists, educators, and policymakers. The discussion includes scientific analysis, description of resources and tools, conceptual principles, and actionable advice. Specific guidance is outlined for standards, certification, regulation, legislation, design, manufacture, and use of these systems in professional organizations.
  • West, S. M., et al. (2019). Discriminating systems: Gender, race and power in AI. AI Now Institute. https://ainowinstitute.org/discriminatingsystems.html
    • The authors show that there is a diversity crisis in the AI sector across gender and race. They argue that the AI industry must acknowledge the gravity of its diversity problem, admit that existing methods have failed to contend with the uneven distribution of power, and recognize that AI can reinforce such inequality. 
  • Wilkins, D. B. (1998). Identities and roles: Race, recognition, and professional responsibility. Maryland Law Review, 57(4), 1502-1594.
    • The author argues that issues relating to a lawyer’s non-professional identity, such as gender, race, and religion, are omitted as motivations for lawyers to uphold the profession’s norms. The article also discusses narratives created in the legal profession about the nature of the lawyer’s role, particularly the claim that a lawyer’s non-professional identity is (or at least ought to be) irrelevant to their professional role.
  • Zhang, B., & Dafoe, A. (2020). U.S. public opinion on the governance of artificial intelligence. AIES ’20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, New York, U.S.A. https://doi.org/10.1145/3375627.3375827
    • The authors surveyed 2,000 Americans on their perceptions of 13 AI governance challenges and how much they trust various institutions to responsibly develop and manage AI in light of these challenges. They find that Americans perceive all of the challenges to be important but have little trust in institutions to manage them. Specifically, respondents place the most trust in the U.S. military and university researchers to develop AI in the interest of the public, and the most trust in non-governmental organizations, partnerships on AI, and tech companies to manage AI.

An asterisk (*) after a reference indicates that it was included among the Further Readings listed at the end of the Handbook chapter by its author.