Valerio De Stefano, Algorithmic Bosses and How to Tame Them [2020 C4eJ 52] (Symposium)

ALGORITHMIC BOSSES AND HOW TO TAME THEM
[➡︎ read the rest of the symposium]

Valerio De Stefano*

I. Introduction

In 2013, the publication of a now world-famous paper by Oxford scholars Carl Frey and Michael Osborne on “how susceptible jobs are to computerisation” spurred a gargantuan debate on automation and the related threats of unrestrained job losses and mass unemployment.[1] Arguably, this debate has focused overwhelmingly on the number of jobs that will be lost to automation. Indeed, the academic and policy debate on these issues has largely adopted a “quantitative” approach, trying to estimate the number of workers that could be put out of a job as a consequence of technological breakthroughs.[2] Some studies have criticized these estimates, pointing out some of their possible flaws and also concentrating on the potential benefits of technological progress in terms of job creation.[3] So far, however, this debate has not sufficiently considered the qualitative aspects connected with job automation. Much less attention, in fact, has been devoted to the quality of the jobs that will remain, but that will require growing interaction between humans and technological tools, both in the form of advanced machinery and of software used to manage businesses and production processes.[4]

It almost seems taken for granted that these “jobs of the future” will require high technical skills, that new machinery and programmes, complemented by artificial intelligence, will absorb routine, menial and dangerous tasks, and that the fortunate workers who remain employed will have access to highly rewarding jobs, with technology playing a liberating role for them. According to this view, therefore, instead of focusing on the quality of these jobs, regulators should be concerned with making sure that the highest possible number of people acquire the skills necessary to be employed in these liberated roles; they should also envisage measures to absorb the occupational shocks caused by automation and to mitigate its social consequences for workers who are displaced and either cannot develop these high-level skills or cannot find employment because fewer jobs are available.[5]

This narrative, however, follows a techno-deterministic approach that should be called into question. To begin with, it assumes that technological breakthroughs will always imply progress, particularly for the fortunate workers who have developed the skills to remain in employment after the introduction of new machinery and business processes. This assumption risks proving excessively optimistic. While it is probably true that technology will be able to automate some routine and unpleasant tasks, it will also expand management’s ability to monitor working activities in ways that are not desirable for workers.[6] Software and hardware that allow management to give workers instructions on the work they do and to control their performance through digital tools are already spreading in modern workplaces.[7] Artificial intelligence, the use of big data and “management-by-algorithm” are already a reality in the world of work,[8] potentially leading to very intrusive business practices. The risks connected to these practices are almost absent from the mainstream debate on the future of work and on the effects of automation, even if, as argued below, the introduction of advanced machinery in the workplace can materially spur these risks.

Another assumption that follows from this techno-deterministic approach is that these developments are inevitable – in other words, that they are the price to pay to benefit from the rewards of technological progress. Accordingly, limiting the functioning of new technologies at the workplace would inescapably reduce progress for economies and societies at large, supposing that such limits could even be imposed through regulation. Moreover, the mainstream narrative on automation risks giving the impression that regulation of the introduction of new technological tools and machinery, and of their implications for the quantity and quality of jobs, cannot be put in place, and that any attempt to govern the effects of technological breakthroughs would hamper innovation and lead to economic losses.

These assumptions must all be questioned. Regulation aimed at mitigating the potentially detrimental effects of the use of technological devices on job quality and workers’ human dignity already exists in various countries of the world. Moreover, many jurisdictions already have in place regulation aimed at mitigating the social impact of mass redundancies and job losses, also connected to automation and technological innovation. A detrimental economic impact from this regulation has not been proved. On the contrary, strong involvement of social partners and regulators in the management of potential mass redundancies is associated with high levels of productivity and innovation, in addition to the benefits for workers.[9]

Most importantly, regulation is also fundamental in governing how automation and the introduction of new technologies will affect the quality of the jobs concerned, rather than merely their quantity. Labour legislation and collective bargaining must play a much more central role if these phenomena are to take place in a way that respects the human dignity and fundamental rights of workers – yet these aspects are still under-researched in the vast debate on automation and the future of work. This contribution aims to fill some of these gaps. The next Section starts doing so by indicating how some technological innovations can lead to intrusive managerial practices that magnify these risks.

II. Technologically-Enhanced Workers’ Monitoring: Artificial Intelligence, Big Data and the Risks of Algorithmic Discrimination

Technological tools and digitalised supervision systems are increasingly used to manage the workforce in modern workplaces.[10] Workers’ surveillance is, of course, nothing new; business historians such as David Landes have long reported that the concentration of workers in factories began before mechanisation, in order to surveil and direct the workforce better than was possible in processes based on dispersed homework.[11] Fordist-Taylorist business models were also based on extensive monitoring of workers.[12]

Information technology and artificial intelligence,[13] however, allow the monitoring of workers’ activities to extents unthinkable in the past, as well as the gathering and processing of an enormous amount of data about these activities.[14] More and more workers, for instance, use wearable work instruments that register their movements and location minute by minute, also measuring their work pace and breaks. Data collected through wearables, including sociometric badges,[15] are often analysed using artificial intelligence to assess workers’ productivity and fitness to execute particular tasks.[16] Wearables are also used, or being tested, in warehouses and other workplaces to direct workers to their next assignment. Goods in Amazon warehouses, for instance, are apparently stored at random. Amazon workers are guided by technological tools to the next item to pick and process, a system that also enables the company to automatically track and measure the speed and efficiency of every individual worker. Workers who underperform according to the metrics of the automated surveillance systems can receive warnings or see their employment terminated automatically “without input from supervisors.”[17]
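A deliberately simplified sketch can illustrate the kind of rate-tracking logic these reports describe. The code below is purely hypothetical – the quota, the escalation rule and all names are assumptions, not Amazon’s actual system – but it shows how warnings and termination notices can be generated with no human input at all:

```python
# Hypothetical sketch of automated productivity discipline: every item
# scan is timestamped, an hourly rate is derived, and personnel actions
# are generated automatically when the rate falls below an assumed quota.
from datetime import datetime, timedelta

QUOTA_ITEMS_PER_HOUR = 300          # assumed quota, for illustration only
WARNINGS_BEFORE_TERMINATION = 3     # assumed escalation rule

def assess_worker(scan_times: list[datetime], prior_warnings: int) -> str:
    """Return the automatic action for one worker over one shift."""
    hours = (scan_times[-1] - scan_times[0]) / timedelta(hours=1)
    rate = len(scan_times) / hours if hours else 0.0
    if rate >= QUOTA_ITEMS_PER_HOUR:
        return "no action"
    if prior_warnings + 1 >= WARNINGS_BEFORE_TERMINATION:
        return "termination notice generated"   # reportedly overridable
    return "automatic warning issued"
```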

GPS systems allow monitoring the position and speed of truck and van drivers, as well as of delivery riders and ride-sharing drivers working for on-demand platforms. These systems can also be used to verify, for instance, whether these workers gather in specific locations, in order to prevent or react to collective action.[18] Like warehouse workers directed by automated systems, platform workers are assigned to the next task by the app’s algorithms, which are also designed to measure the speed and diligence of the worker in completing tasks, including by factoring in the ratings and reviews that customers assign to workers. Bad scores or performance below the algorithm’s standards can lead to the exclusion of the worker from the platform and thus to “dismissal,” made easier by the purported self-employment status of these workers.[19] And this is not confined to tasks “on-the-road.” Workers on online “freelancing marketplaces” and domestic workers who are contracted on platforms to do work in customers’ households live in constant worry over ratings and over how the platforms’ algorithms take ratings into account when assigning the next job.[20]

The way these management systems operate is rarely transparent, as companies do not share the methods through which ratings and customers’ feedback on workers’ activities are gathered and processed. Management by rating is also spreading well beyond platform work, with apps that allow restaurants to process patrons’ feedback on individual waiters.[21]

Nor should it be assumed that increased forms of surveillance are confined to low-wage or blue-collar jobs. HR practices that resort to forms of artificial intelligence facilitating “management-by-algorithm” and “electronic performance monitoring” are also extensively used in white-collar occupations. Electronic performance monitoring (EPM) has been described by Phoebe Moore et al. as including “email monitoring, phone tapping, tracking computer content and usage times, video monitoring and GPS tracking.” According to these researchers, “data produced can be used as productivity indicators; indication of employees’ location; email usage; website browsing; printer use; telephone use; even tone of voice and physical movement during conversation.”[22] These data, coupled with the use of “big data” analytical instruments, also constitute the basis of so-called People Analytics practices. Pioneering legal studies on this topic, conducted by Matthew Bodie, Miriam Cherry et al., define “People Analytics” as:

a process or method of human resources management based on the use of “big data” to capture insights about job performance. The core idea is that unstructured subjective judgment is not rigorous or trustworthy as a way to assess talent or create human resources policies. Instead, data—large pools of objective, generally quantitative data—should form the foundation for decision-making in the HR space.[23]

Data are therefore collected from a vast array of sources.[24] One of the companies at the forefront of these practices, Humanyze, reports on its webpage that metadata such as “email and call timestamps, number of chat messages sent, and duration of meetings can be measured to uncover patterns on how teams actually work.” This does not necessarily mean that the actual content of messages and chats is examined, as the company claims to include “no names or content in the metadata.”[25]
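A minimal sketch, under stated assumptions, can show what “metadata without content” analysis looks like in practice: interaction counts are derived from message logs whose sender and recipient fields are hashed and whose bodies are never read (all field names here are hypothetical):

```python
# Sketch of metadata-only "people analytics": who talks to whom and how
# often is computed from timestamps and hashed identities; message
# content is never touched. Field names are illustrative assumptions.
from collections import Counter
from hashlib import sha256

def interaction_graph(log: list[dict]) -> Counter:
    """Count messages per anonymised sender->recipient pair."""
    pairs: Counter = Counter()
    for msg in log:  # each entry: {"from": ..., "to": ..., "ts": ...}
        sender = sha256(msg["from"].encode()).hexdigest()[:8]
        recipient = sha256(msg["to"].encode()).hexdigest()[:8]
        pairs[(sender, recipient)] += 1
    return pairs
```

Even without content, such patterns reveal who interacts with whom, how often and at what hours – a point the Article 29 Working Party makes explicitly (see note 26).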

Nonetheless, even if these individual-content data are not collected or are effectively anonymised, collection practices can be highly invasive and aimed at detecting highly personal elements,[26] including the level of interaction with colleagues and even the mood of workers, for instance through the use of so-called “sociometric badges.” These are wearable devices that monitor the location of workers and their movements and that, through built-in microphones and voice-pitch analysis, can also gauge workers’ mood without actually recording the content of their conversations.[27]

EPM is also being used to monitor workers in telework and smart-work arrangements, which allow workers to perform their activities outside traditional workplaces and are thus usually associated with higher worker autonomy.[28] Companies like Crossover sell systems such as the Worksmart Productivity Tool, which monitors teleworkers and other remote workers by taking screenshots of their computers at fixed intervals and collecting additional data, including, as the company’s website explains, “keyboard activity, application usage, screenshots, and webcam photos to generate a timecard every 10 minutes.” This timecard is then shared with the workers and their managers via a “logbook where all of your timecards are displayed and a dashboard summarizes your timecards to show you how you spent your time.”[29] Other companies market web-filtering software, such as Interguard, that records and reports data such as web history and bandwidth utilization “whether the employee is on or off network.”[30]

All these data can also be processed through AI tools that rate workers on various performance metrics. In 2019, for instance, the Guardian reported that dozens of firms in the United Kingdom, including several law firms, employed AI to scrutinize staff behaviour, also in order to identify “influencers” and “change-makers” in the workforce.[31] Interestingly, this practice is not so new. Cathy O’Neil discussed the case of a company, Cataphora, which in 2008 marketed a system to identify “idea generators” in the workforce by analysing corporate emails and messaging. When the 2008 recession hit, HR managers began to lay off people, starting with those who performed poorly under Cataphora’s metrics. As O’Neil, a mathematician and data scientist, explains, these programs risk, among other things, being highly inaccurate, since they are based on limited data.[32]

Business-sponsored wellness programs also use software like Fitbit to track employees’ fitness.[33] This, among other things, can give employers access to information about workers’ off-duty activities. Surveillance of workers’ off-duty activities is also nothing new – suffice it to think of Ford’s Sociological Department,[34] which famously investigated the lifestyles of workers in the motor company. However, the blurring of the boundaries between work and life, the constant interconnection with IT devices and digital services such as social networks, and technological devices that gather data on individuals’ online and offline conduct make it possible to access a flow and amount of information that is very difficult to quantify and limit in advance. Articles in the press have also reported cases of monitoring practices aimed at preventing fraud by snooping on social network activities and statuses.[35]

Personal data gathered on the Internet, including information available through social networks, are also increasingly used to make hiring decisions,[36] and the practice of asking employees to disclose their social network passwords has spread to the point that eighteen US states have passed legislation explicitly banning it.[37]

People Analytics and EPM, of course, can sometimes be rooted in genuine business needs, such as fostering productivity and raising levels of security, also to the benefit of individual employees. Wearables that analyse fitness data, for instance, can be employed to mitigate health and safety risks, including stress, and to prevent accidents.[38] Workers may also be interested in using systems that help them stay focused on their jobs both on-site and off-site, and in having their activities recorded accurately so that – if anything goes amiss – they can prove they acted diligently. Businesses and workers can also share an interest in the prevention of illicit behaviours such as fraud, as well as forms of harassment that can occur online. Moreover, HR practices such as People Analytics are also grounded in the idea that artificial intelligence can help better manage the workforce by eliminating the individual biases of supervisors and replacing them with more objective and neutral metrics.[39] The use of artificial intelligence and other technological tools to supervise working activities, therefore, should not be regarded as necessarily negative.

The practices discussed above, however, can also lead to very severe intrusions into workers’ private lives and materially infringe their privacy,[40] by allowing management to access very intimate information – including, for instance, through the use of data based on medical insurance claims, information on the intention to become pregnant or on the likelihood of developing an illness.[41] Wearables and security cameras, and programs that register online and offline activity or take screenshots of computers, can also turn into exhausting practices of endless surveillance. Far from fostering workforce performance, these models can generate stress as well as adverse reactions and cause sharp declines in efficiency and productivity.[42]

In addition to this, the idea that management-by-algorithm and artificial intelligence necessarily lead to more objective and bias-free HR practices may prove substantially wrong. The risk is that these systems reflect the biases of their human programmers and focus only on the programmers’ ideas of productivity and work performance, for instance by discarding or penalising job candidates or workers with disabilities or with features that differ from the programmers’ expectations. The lack of diversity in tech companies can exacerbate these phenomena. In an official Opinion on artificial intelligence, the European Economic and Social Committee recently observed: “the development of AI is currently taking place within a homogenous environment principally consisting of young, white men, with the result that (whether intentionally or unintentionally) cultural and gender disparities are being embedded in AI, among other things because AI systems learn from training data.” The Committee warned against the misconception that data are by definition objective. Data, instead, “is easy to manipulate, may be biased, may reflect cultural, gender and other prejudices and preferences and may contain errors.”[43]

The risk, therefore, is that management-by-algorithm and artificial intelligence at the workplace, far from producing neutral outcomes and reducing discrimination, could augment discriminatory practices.[44] A vast literature already highlights how algorithm-based decision-making can perpetuate discriminatory practices and the marginalisation of vulnerable groups, especially when the collection of data is poor.[45] This form of decision-making is often based on data that reflect past behaviours.[46] If those behaviours were biased, the likelihood that an automated decision process propagates those biases into the future is very high.[47] Imagine a system that automatically scans CVs for hiring or promotion. If this system is built on data about previous hiring in the company or sector, there is a high chance that it will mimic past recruitment practices. If, in turn, those practices were discriminatory or skewed, they could be perpetuated in the future and, what is worse, this would occur under the “aura” of objectivity usually credited to machines. Nor would it be simple to remove discrimination by merely instructing the algorithms to ignore sensitive data such as gender or race, since sophisticated software could still recognize, and penalize, subjects underrepresented in previous hiring on the basis of other data: it could, for instance, use certain types of career breaks as proxies to recognize women, or postcodes and first and last names to identify members of minorities. This risk is even more severe when these practices are based on self-learning artificial intelligence, with software able to reprogram its own criteria and metrics to reach a very general predefined outcome, such as improving work productivity. The lack of transparency and the risk of dehumanizing work would then be exacerbated even further.
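The proxy mechanism can be made concrete with a minimal sketch. In the hypothetical simulation below (all data and feature names invented for illustration), past hiring decisions were biased directly against women; the screening model is never shown gender, yet it learns to penalise a correlated proxy, the career break:

```python
# Illustrative simulation of proxy discrimination in CV screening.
# The protected attribute is excluded from training, but the model
# reconstructs the historical bias through a correlated feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, n)          # 1 = woman (protected, hidden)
# In this invented history, career breaks correlate with gender...
career_break = (rng.random(n) < np.where(gender == 1, 0.5, 0.1)).astype(int)
experience = rng.normal(10, 3, n)
# ...and past recruiters penalised women directly.
hired = (experience + rng.normal(0, 2, n) - 3 * gender) > 9

X = np.column_stack([experience, career_break])   # gender never included
model = LogisticRegression().fit(X, hired)
print(dict(zip(["experience", "career_break"], model.coef_[0])))
# The career-break coefficient comes out strongly negative: the model
# has relearned the bias against women without ever seeing gender.
```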

Nor should it be taken for granted that a one-dimensional vision of productivity and efficiency embedded into artificial intelligence technologies necessarily leads to better business outcomes. Algorithms are often used to implement just-in-time work practices that scale workforce numbers and shifts to expected business demand, thus contributing to a casualization of work patterns and to job and income instability that go far beyond the “usual suspects” of the platform economy. A study of retail workers conducted by various universities, for instance, shows that algorithms aimed at fostering business efficiency can lead to suboptimal results, as a consequence of these algorithms being based on a very limited notion of efficiency and therefore not taking into account the many hidden costs associated with schedule instability.[48]
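To see how narrow that notion of efficiency can be, consider this deliberately simplified sketch (all numbers hypothetical): the scheduler covers a sales forecast with the fewest possible labour hours, so the hidden costs of instability – turnover, absenteeism, lost sales from chronic understaffing – never enter the objective at all:

```python
# Hypothetical "just-in-time" scheduler: staffing follows the demand
# forecast alone, so volatile forecasts yield volatile schedules, and
# the costs of that volatility are invisible to the optimisation.
from math import ceil

SALES_PER_WORKER_HOUR = 120.0   # assumed productivity figure
SHIFT_HOURS = 8

def staff_needed(forecast_sales: list[float]) -> list[int]:
    """Workers per shift, chosen only to cover forecast demand."""
    return [ceil(s / (SALES_PER_WORKER_HOUR * SHIFT_HOURS))
            for s in forecast_sales]

print(staff_needed([4_000, 9_500, 2_100]))   # -> [5, 10, 3]
```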

One oft-overlooked dimension of advanced forms of automation is their potential role in introducing technology-enhanced management of workers facilitated by artificial intelligence. A smart robot is, in the definition proposed by a 2016 EU Parliament report, a robot that has the “capacity to acquire autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and the analysis of those data” and the “capacity to adapt its behaviours and actions to its environment” (Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics, 2015/2103(INL)). Robots that collect the personal data of employees, including by measuring their biological data through interaction with fitness applications and wearables, in order to enhance productivity or attune the pace or other features of the work to the particular conditions of workers, are not impossible to introduce. This is particularly true for co-bots, which are by definition meant to have direct physical interaction with human beings and to share workspaces with workers.

Moreover, the use of artificial intelligence, management-by-algorithm and People Analytics is, per se, a form of automation of middle-managerial and managerial roles. Managing and disciplining platform workers via ratings is arguably a way of outsourcing the assessment of work performance to customers, facilitated by algorithms.[49] EPM also has the potential to increasingly automate core business functions such as HR and to displace the associated clerical occupations, adding to the list of professionals who can be severely affected by automation, together with lawyers and medical doctors.[50]

The implications of these managerial practices, therefore, warrant serious attention from policymakers and scholars, and their consequences for privacy, diversity, employment and business productivity should be carefully assessed. Even the most well-intentioned measures, including wellness programs, risk turning into forms of dystopian and paternalist control unless a serious reflection on the use of technology at the workplace is carried out.

The paternalism behind EPM is well represented in this statement from the CEO of Awareness Technologies, the company that markets Interguard, a monitoring system for on-site and remote workers: “if you are a parent and you have a teenage son or daughter coming home late and not doing their homework you might wonder what they are doing. It’s the same as employees.”[51]

Comparing employees to underage sons and daughters is nothing new. In discussing privacy and employers’ managerial prerogatives at the workplace, Matthew Finkin recalls that in 1884 the Tennessee Supreme Court did not object to an employer telling employees where to shop – just as a father could order his children where to buy goods, so could employers order their employees.[52] Beyond the irony of finding such ancient arguments replicated in the most cutting-edge work scenarios, the possibility of management unduly and excessively compressing workers’ autonomy and privacy is a structural feature of the contract of employment.[53] As Bodie, Cherry et al. point out, unless regulation specifically limits managerial prerogatives, “in the workplace, there is no legal protection against surveillance per se […]. The need for monitoring follows from our legal conception of employment, which is based on control: an employee is one whose work is controlled by her employer,” and it is the right of employers to specifically direct employees’ activities “that separates employees from independent contractors.”[54]

III. The Importance of Labour and Human Rights Protections in Governing Technology at Work

The policy and journalistic discussions on automation have also stirred an extensive debate on universal basic income (UBI).[55] Numerous tech entrepreneurs and companies have maintained that one of the responses to the displacement of jobs caused by automation should be the introduction of a UBI, to mitigate the social impact of mass technological unemployment.[56] The debate on UBI is, of course, much broader than these proposals. Several labour advocates have suggested UBI as a progressive policy that would help to face significant challenges in modern labour markets, including technological unemployment and the growth of casualised and unstable forms of employment.[57] This is a very complicated issue that cannot be treated here.[58] What is important to state, however, is that even if a functioning UBI scheme could be implemented, it would not affect the legal structure of employment contracts and the regulation discussed above.

Neoliberal proponents of UBI often take for granted that this measure would substitute for other welfare schemes, including social security. A corollary of this vision is that, if a UBI were introduced, employment regulation could be rolled back because, in a system where everybody had secure access to income, regulation aimed at supporting workers’ income and remedying their weak bargaining position would no longer be needed, also because a UBI would likely increase their reservation wages.[59]

These assumptions are in line with conventional accounts of employment regulation and mainstream approaches to employment policy. Indeed, the objective of the flexicurity approach to employment protection is to replace the protection of workers “on the job” with protection “on the market,” by deregulating aspects of employment protection while securing workers’ income through unemployment benefits and active labour market policies.[60]

Policies aimed at substituting protection of income for the protection of employment rights risk neglecting an essential feature of employment regulation: its purpose is not just to safeguard workers because they are economically dependent on their employers and have weak bargaining power “on the market,” but also to limit and rationalize the unilateral exercise of managerial prerogatives “on the job,” i.e. while workers are employed.[61]

Regulation against discrimination, working time rules protecting the physical and mental health of workers against the risks of fatigue and burnout, rules protecting privacy at the workplace against abusive forms of monitoring – to cite only some of the regulation that limits the exercise of managerial prerogatives – cannot be swapped for protection “on the market.” This regulation concerns powers and duties that operate during the entire course of the employment relationship and that do not merely depend on the superior bargaining power of employers but are also enshrined in legal norms. The idea of replacing labour protection at the workplace with income security neglects fundamental aspects of the employment relationship, which warrant regulatory limits aimed at protecting human dignity at the workplace. This is also something to take into account when discussing the possibility of introducing UBI or any other form of income protection – even if UBI schemes were introduced, there would still be a need for employment regulation and labour protection “on the job.”

The fundamental features of employment regulation and its ambivalence in granting far-reaching and intensive unilateral managerial powers that can materially compress the workers’ autonomy, on the one hand, and limiting and rationalising those powers, on the other hand, must be particularly heeded in the wake of automation and the increasing use of technological tools to direct the workforce. EPM, People Analytics and the use of artificial intelligence and big data at the workplace magnify the possibility of supervising workers and closely monitoring the performance of working activities. These technologies can enable egregiously invasive practices and lead to arbitrary and discriminatory outcomes. Indeed, these practices can lead to a “genetic variation” of managerial prerogatives, by “upgrading” them to levels unheard of in the past. Constant attention must thus be paid to these developments and regulation is all the more needed to prevent managerial abuses that imperil the human dignity of workers.

To this end, it is also essential to frame workers’ rights in fundamental and human rights discourses. The nature of labour rights as human rights has long been debated[62] and has also been enshrined in a vast number of international treaties and sources of law.[63] One of the rationales for recognising labour rights as human rights lies precisely in the existence of managerial prerogatives. As discussed above, legal systems vest employers with an authority over their workforce that goes beyond social norms and is underpinned by legislation. Limiting and rationalising authority to preserve human dignity – one of the essential functions of human rights – is also essential at the workplace.[64] Labour protection, by limiting the exercise of managerial prerogatives, is also crucial to ensure that the authority of employers is not exerted in ways that jeopardise the human rights of workers.

A human rights approach to labour regulation can indeed prove beneficial also with regard to the protection of workers’ autonomy and dignity in the face of electronic monitoring of their activities.[65] The European Court of Human Rights, for instance, has interpreted the right to private life under Article 8 of the European Convention on Human Rights as enshrining the protection of individuals’ privacy at the workplace. In a case concerning the dismissal of a worker for using the internet at work for private purposes, in a situation where the employer had access to the content of the worker’s communications via IT tools, the Court established that employers’ monitoring of online activities, while admissible in principle, had to be carried out proportionately, to ensure that arbitrariness and abuses are avoided.[66] Among the safeguards that Member States have to consider in determining whether monitoring practices are legitimate, the Court indicated: whether employees were properly notified of the possibility that the employer might monitor correspondence and other communications; the presence of legitimate reasons justifying the monitoring of communications and the accessing of their content; and the possibility of establishing less intrusive monitoring practices. The Court also mandated considering, in general, the extent of the monitoring and the degree of intrusion into the workers’ privacy, drawing a distinction between access to the metadata covering the flow of communications and access to the content of those communications.

This judgment can provide a general protective framework for workplace relations in countries that adhere to the European Convention on Human Rights of the Council of Europe. Notably, the Council of Europe also recently updated its Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data. The new text of the Convention, after its entry into force, will provide for an individual’s right “not to be subject to a decision significantly affecting him or her based solely on an automated processing of data without having his or her views taken into consideration.”[67]

For countries that also belong to the European Union, further guidance can be found in the General Data Protection Regulation (GDPR). The GDPR, however, is no panacea in itself against the excesses of management-by-algorithm and the use of AI at the workplace. Firstly, commentators have noted that EU law has been interpreted by the Court of Justice of the EU (CJEU) as providing lower protection when a decision is based on subjective inferences drawn from data rather than on objective and verifiable facts.[68] This is a paradox, considering the detrimental impact that wrong inferences can cause – imagine a hiring or promotion decision made by inferring how somebody with a particular credit history will perform under an employment contract, without taking into account the factors that contributed to that credit history.

Also, the CJEU has so far refused to extend the remit of EU data protection law to the accuracy of decision-making processes. And even the GDPR provisions that seem to provide more meaningful protection in this area could prove insufficient. For instance, Article 22(1) of the GDPR grants the right not to be subject to “a decision based solely on automated processing” when this decision produces legal or “similarly significant[…]” effects.[69] Most likely, however, a high number of decisions concerning workplace issues will fall within the exceptions to this rule allowed by Article 22(2), as decisions “necessary for entering into, or performance of, a contract.”[70] In this case, the GDPR mandates that employers or other data controllers implement “suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.” Workers, therefore, will have the right to contest fully automated decisions that affect them significantly. This protection, however, will be in vain unless they can show that a specific “enforceable legal or ethical decision-making standard” has been violated. Without such standards, the protection under Article 22 risks remaining “an empty shell.”[71]
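The rule-and-exception structure just described can be rendered schematically. The sketch below is a drastic simplification for illustration only – it covers just the contract-necessity exception discussed in the text and is in no way a statement of the law:

```python
# Schematic rendering of the Art 22 GDPR structure discussed above,
# limited to the Art 22(2)(a) "necessary for a contract" exception.
def art22_assessment(solely_automated: bool,
                     significant_effects: bool,
                     necessary_for_contract: bool,
                     human_intervention_guaranteed: bool) -> str:
    if not (solely_automated and significant_effects):
        return "Art 22 not engaged"
    if not necessary_for_contract:
        return "right not to be subject to the decision (Art 22(1))"
    # Art 22(3): the exception applies only with suitable safeguards,
    # at least the right to human intervention and to contest the decision
    if human_intervention_guaranteed:
        return "decision permitted, subject to safeguards"
    return "safeguards missing: processing not compliant"
```

As the text notes, however, what the right to contest is worth depends entirely on the standards against which a contested decision can be measured.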

It is, therefore, crucial that adequate and specific standards and protections be provided in the world of work. In this respect, Article 88 of the GDPR is a crucial provision. It provides that the EU Member States may introduce, by law or by collective agreements, “specific rules to ensure the protection of the rights and freedoms in respect of the processing of employees’ personal data in the employment context.” These rules shall “include suitable and specific measures to safeguard the data subject’s human dignity, legitimate interests and fundamental rights” with particular regard to “monitoring systems at the work place,” transparency of processing and transfer of personal data.[72]

These regional approaches to the protection of workers’ privacy, founded on the idea of protecting human and fundamental rights at the workplace, and specifically addressing the need to ensure that the prerogatives of managing and monitoring workers do not impinge upon their human dignity, can guide the introduction (or the updating) of labour regulation aimed at safeguarding workers against abusive supervision practices in the wake of the spread of technology-enhanced monitoring systems.[73] A human-rights-based approach, grounded in the idea that the human right to privacy can be limited only insofar as this is indispensable to the exercise of other human rights and that any limitation must be proportionate to this end, can indeed provide a meaningful general framework of protection. It may prove beneficial in contrast both to the spot-remedy approaches adopted in systems where the recognition of workers’ rights as fundamental rights is still lagging, like the United States,[74] and to proposals to govern technological innovation based on much vaguer “ethical” principles, such as the currently overhyped “ethical AI” discourse.

A human-rights-based approach to labour protection cannot neglect the importance of collective rights, such as freedom of association and the right to collective bargaining, in the protection of human dignity at the workplace. The function of collective rights is not only to give workers a better position from which to negotiate the economic conditions of employment; collective rights also act as “enabling rights,” facilitating the securing and effective enforcement of any other right at the workplace. As such, collective rights serve as a fundamental tool to rationalise and limit the exercise of managerial prerogatives, since they counterpose a collectively organised party to the intrinsically collective and organisational dimension of employers’ prerogatives, which can be exerted on an individual basis but also on the workforce as a whole. Collective rights, including the right to collective bargaining, allow moving from a purely unilateral exercise of those prerogatives towards a consensual governance of work, by requiring negotiation on aspects of the business organisation that would otherwise, in the absence of collective relations, be unilaterally governed by employers by means of the authority vested in them by the legal system.[75] The reference to collective bargaining in Article 88 of the GDPR, as a mechanism to provide adequate and specific standards for data collection and processing that safeguard the human dignity and fundamental rights of workers, confirms how crucial collective rights are to countering abuses of automated-management practices at the workplace. The next Section concludes this Article by exploring how collective regulation is essential to secure adequate labour protection in times of automation and technologically enhanced monitoring practices.

IV. “Negotiating the Algorithm”: “Human-in-Command” and Collective Rights for the Future of Work

As discussed in the Introduction, the mainstream discourse on automation tends to follow the techno-deterministic assumption that the introduction of new technologies will determine job losses or gains as an autonomous and exogenous process impacting labour markets. This approach, nonetheless, does not take into account the role that labour regulation can play in influencing this process – something that is indeed surprising, given the high number of international and national instruments that deal with the impact of technology on employment, such as the instruments governing collective dismissals.

Collective dismissals are the subject matter of copious international, regional and national regulation. These instruments commonly require businesses to adequately inform and consult with trade unions and workers’ representatives and to involve public bodies before carrying out mass redundancies. Yet having this type of regulation in place is far from sufficient for solving the problems deriving from automation. Job losses could occur at levels unheard-of in the past, for instance, or new technologies could be introduced at a pace that strains current regulation and industrial relations. Moreover, this regulation aims at mitigating the consequences of redundancies but is not able to avert them per se, especially if new machinery and business processes displace a high number of jobs in a short amount of time. Nonetheless, policymakers, researchers and scholars should not start from the assumption that regulation aimed at attenuating mass job losses does not exist or is impossible to apply. Collective redundancies regulation exists, and its existence should be considered when discussing the impact of automation on labour markets, together with the role that social partners and regulators can have in governing these processes.

Nor should it be assumed that regulation would necessarily stifle innovation, another widespread corollary of techno-deterministic approaches to automation. Collective redundancies regulation and labour laws that ensure functioning industrial relations systems and sustain the role of workers’ representatives and trade unions can instead be associated with positive economic outcomes.[76] Literature also shows a positive relationship between stronger collective institutions and productivity,[77] economic efficiency, and levels of employment.[78]

The assumption should be, therefore, that collective dismissal regulation and workers’ involvement in managing mass redundancies can be beneficial when dealing with automation processes and their social implications.

Moreover, the involvement of workers’ representatives can also begin much earlier than the moment when actual redundancies take place. Duties to engage in social dialogue to deal with the envisaged impact of technological innovation are also provided under regional instruments, such as EU Directive 2002/14.[79] The Directive mandates information and consultation duties both on an ad hoc basis, “on decisions likely to lead to substantial changes in work organisation or in contractual relations,” and on a regular basis, “on the recent and probable development of the undertaking’s or the establishment’s activities and economic situation.” Examples of national regulation providing for similar duties are also available.[80]

Most importantly, the involvement of workers’ representatives can prove particularly beneficial to the aim of governing other implications of new technologies at the workplace, namely those affecting the quality of the jobs that will “survive” after automation. The introduction of artificial intelligence and the use of big data and EPM need to be governed, to ensure that systems that can allow an unprecedented magnification of the scope and impact of managerial prerogatives and the intensity of monitoring do not lead to abuses that impinge on the human rights of workers.

Regulation is needed to govern the amount of data collected on work performance and on the personal features of workers, as well as the way data are collected and processed. Nor is this only a matter of privacy protection. The way work is directed through the use of new technologies, including wearables and co-bots among other things, should be regulated to ensure that the quest for higher productivity does not result in occupational hazards and heightened stress for the workers involved. Disciplinary mechanisms facilitated by technology are another key item to regulate. Even if it were possible to have artificial intelligence decide on issues such as increasing the pace of work or intensifying production, these decisions should always be implemented only after human review. The same goes for any disciplinary measure taken in light of data collected through mechanical monitoring systems or algorithmic processes. Algorithm-based evaluation of work performance should also be regulated, to make assessment criteria transparent and known to workers and to avoid arbitrary or discriminatory outcomes. To this end, again, even if it were possible to have automatic changes and updates in the operation of algorithms through self-learning artificial intelligence, the final decision to amend the criteria through which work performance is assessed should be taken by humans, made transparent and known to workers, and also be subject to negotiation.
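By way of illustration only, the following minimal sketch (all names and fields are hypothetical assumptions) shows the kind of review gate such regulation could require: algorithmically suggested measures are queued together with their disclosed rationale, and none takes effect until a named, accountable human signs off:

```python
# Sketch of a "human-in-command" review gate for algorithmic suggestions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SuggestedMeasure:
    worker_id: str
    measure: str      # e.g. "formal warning", "increased work pace"
    rationale: str    # criteria the algorithm relied on, disclosed to
                      # the worker so the decision can be contested
    approved_by: Optional[str] = None

def apply_measure(m: SuggestedMeasure, reviewer: str, approve: bool) -> bool:
    """No measure ever takes effect automatically: a named human reviewer
    must sign off and remains legally accountable for the outcome."""
    if not approve:
        return False          # the suggestion is discarded, not applied
    m.approved_by = reviewer  # accountability is recorded
    # ... apply the measure and notify the worker, rationale included
    return True
```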

“Human-in-command,” an approach advocated by the European Economic and Social Committee in its Opinion on Artificial Intelligence,[81] namely the “precondition that the development of AI be responsible, safe and useful, where machines remain machines and people retain control over these machines at all times,” should also be strictly followed with regard to work. The Opinion specifically advocates that “workers must be involved in developing these kinds of complementary AI systems, to ensure that the systems are useable and that the worker still has sufficient autonomy and control (human-in-command), fulfilment and job satisfaction.” To fulfil this objective, it is also crucial that any managerial decision suggested by artificial intelligence be subject to review by human beings who remain legally accountable, together with their organisation, for the decision and its outcomes. The fact that decisions were taken following machine-based processes should never be a sufficient reason to exclude personal liability; even if electronic personality were introduced into the legal system, humans should always remain accountable for any decision directly affecting workers or any other natural person.

The right not to be subject to fully automated decision-making without human intervention is making its way into supranational regulation. Article 9 of the revised Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, concerning the right not to be subject to automated decision-making without human intervention, discussed above, together with the GDPR provisions providing for adequate safeguards in this respect, are steps towards establishing a “human-in-command” approach. As argued in the previous Section, however, to prevent these provisions from remaining an empty shell when it comes to the world of work, specific and adequate standards and regulation are needed in this field.

This regulation will have to remain flexible and quickly adaptable to technological innovation. For this reason, besides a general default legislative framework, detailed and bespoke regulation is essential. In this regard, collective bargaining can play a primary role both at the sectoral and at the workplace level, as recalled by Article 88 of the GDPR. Individual rights to access data and to contest the outcomes of automated decision-making, while essential, may not be sufficient in a context in which technology becomes as pervasive and complex as discussed in the previous Sections. Individuals should not be left alone to cope with the intricacies of this technology when they want to comprehend and contest the consequences of its application to them.

For this reason, in the world of work, collective rights and voice will be crucial. Collective agreements could address the use of digital technology, data collection and the algorithms that direct and discipline the workforce, ensuring transparency, social sustainability and the compliance of these practices with regulation. Collective negotiation would also prove pivotal in implementing the “human-in-command” approach at the workplace. Collective bargaining could also regulate issues such as the ownership of the data collected from workers and go as far as creating bilateral or independent bodies that would own and manage some of these data.[82] All this would also be consistent with collective bargaining’s fundamental function as an enabling right and as a mechanism for rationalising the exercise of employers’ managerial prerogatives, allowing a move away from a purely unilateral dimension of work governance.

“Negotiating the algorithm” should, therefore, become a central objective of social dialogue and action for employers’ and workers’ organisations. In 2017, for instance, the UNI Global Union issued a series of cutting-edge proposals on Ethical Artificial Intelligence at the Workplace.[83] Armaroli and Dagnino, and Phoebe Moore et al., moreover, report on several collective agreements already in place in various countries that regulate the use of technology not only in monitoring workers but also in directing their work, so as to protect workers’ human dignity and occupational health and safety.[84] In this respect, Seifert also envisages a potentially crucial role for transnational collective bargaining and reports on transnational agreements already concluded on the issue of data protection.[85] Social partners, therefore, are already tackling these issues.[86] Governments also have an essential role to play, in addition to providing a general legislative framework to regulate these issues in lieu of, or complementing, specific collective bargaining. For instance, they can use fiscal incentives to stimulate technological business strategies on the condition that these fully integrate sustainability objectives and are subject to social dialogue. It will not be a simple or a quick process, and it will require efforts from all the parties involved. Among other things, substantial resources will need to be spent to ensure that workers, managers, trade unionists and HR personnel are adequately trained to deal with the challenges and opportunities that technology can prompt. Regulation and collective governance of these processes will not be built in a day. However, they are indispensable to ensuring that the benefits of technological advancements improve our societies inclusively and as a whole.


* BOF-ZAP Professor of Labour Law at KU Leuven, the University of Leuven. This article draws on the article ‘Negotiating the Algorithm’: Automation, Artificial Intelligence and Labour Protection, published in a special issue of the Comparative Labor Law & Policy Journal on “Automation, Artificial Intelligence, and Labour Protection” guest-edited by me (41 Comp. Lab. L. & Pol’y J. (2019)). This article and the special issue were published within the framework of the Odysseus grant “Employment rights and labour protection in the on-demand economy” that I received from the FWO Research Foundation – Flanders.

[1] Carl Benedikt Frey & Michael A. Osborne, The future of employment: How susceptible are jobs to computerisation? (2013), https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf.

[2] See, most famously, Carl Benedikt Frey & Michael A. Osborne, supra note 1. For an in-depth discussion of manufacturing processes, see Wolfgang Dauth, Sebastian Findeisen, Jens Südekum & Nicole Wößner, German Robots – The Impact of Industrial Robots on Workers (IAB Discussion Paper 30/2017, Oct. 2, 2017), http://doku.iab.de/discussionpapers/2017/dp3017.pdf.

[3] The literature on the topic is already enormous. See David Autor, Why Are There Still So Many Jobs? The History and Future of Workplace Automation, Journal of Economic Perspectives, Summer 2015, at 3; OECD, The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis (OECD Social, Employment and Migration Working Papers, No. 189, May 14, 2016); OECD, Automation, Skills Use and Training (OECD Social, Employment and Migration Working Papers, No. 202, Mar. 8, 2018). See, for a general critical discussion, David Kucera, New Automation Technologies and Job Creation and Destruction Dynamics (ILO Employment Policy Brief, May 12, 2017). For an in-depth legal discussion, see Cynthia Estlund, What Should We Do After Work? Automation and Employment Law, 128 Yale L.J. 254 (2018).

[4] An exception is Eurofound, Game changing technologies: Exploring the impact of production processes and work (Research Report, 2018).

[5] McKinsey Global Institute, A Future that Works: Automation, Employment and Productivity (Jan. 2017).

[6] See below Section II.

[7] Pav Akhtar, Phoebe Moore & Martin Upchurch, Digitalisation of Work and Resistance, in Humans and Machines at Work: Monitoring, Surveillance and Automation in Contemporary Capitalism (Phoebe Moore, Martin Upchurch & Xanthe Whittaker eds., 2018). See also Antonio Aloisi & Elena Gramano, Artificial Intelligence Is Watching You at Work. Digital Surveillance, Employee Monitoring and Regulatory Issues in the EU Context, 41 Comp. Lab. L. & Pol’y J. (2019).

[8] Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (2015); Emanuele Dagnino, People Analytics: lavoro e tutele al tempo del management tramite big data [People Analytics: work and protections in the age of management through big data], 3 Labour & Law Issues 1 (2017).

[9] See below Section IV.

[10] Phoebe Moore, Martin Upchurch & Xanthe Whittaker, Humans and Machines at Work: Monitoring, Surveillance and Automation in Contemporary Capitalism (2018); Ifeoma Ajunwa, Kate Crawford & Jason Schultz, Limitless Worker Surveillance, 105 Cal. L. Rev. 735 (2017).

[11] David Landes, The Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present (1969).

[12] Katherine V.W. Stone, From Widgets to Digits: Employment Regulation for the Changing Workplace (2004).

[13] The term “artificial intelligence,” in this paper, is used as a reference to so-called “narrow artificial intelligence” or “weak artificial intelligence,” namely the artificial intelligence used to perform a single task, such as – as a commonly used description goes – “playing chess or Go, making purchase suggestions, sales predictions and weather forecast.” This is the only type of artificial intelligence that exists nowadays. Even self-driving cars are considered merely a sum of several narrow AIs, and the same applies to online translation engines. Narrow AI is commonly opposed to “General AI,” i.e. “the type of Artificial Intelligence that can understand and reason its environment as a human would,” which has not been developed yet. The direct citations are from Ben Dickson, What is Narrow, General and Super Artificial Intelligence, TechTalks, May 12, 2017, https://bdtechtalks.com/2017/05/12/what-is-narrow-general-and-super-artificial-intelligence/. For a broader discussion of the distinction between “strong” and “weak” AI, see Jerry Kaplan, Artificial Intelligence: What Everyone Needs to Know (2016).

[14] Emanuele Dagnino, supra note 8.

[15] See below in this Section.

[16] Pav Akhtar, Phoebe Moore, & Martin Upchurch, supra note 7; Ivan Manokha, Why the rise of wearable tech to monitor employees is worrying, The Independent, Jan. 4, 2017, https://www.independent.co.uk/life-style/gadgets-and-tech/why-the-rise-of-wearable-tech-to-monitor-employees-is-worrying-a7508656.html.

[17] Colin Lecher, How Amazon automatically tracks and fires warehouse workers for ‘productivity’, The Verge, Apr. 25, 2019, https://www.theverge.com/2019/4/25/18516004/amazon-warehouse-fulfillment-centers-productivity-firing-terminations. The article reports: “Amazon says supervisors are able to override the process.” See also Chris Baraniuk, How Algorithms Run Amazon’s Warehouse, BBC Future, Aug. 18, 2015, http://www.bbc.com/future/story/20150818-how-algorithms-run-amazons-warehouses.

[18] Valerio De Stefano, The rise of the ‘just-in-time workforce’: On-demand work, crowdwork and labour protection in the ‘gig-economy’, 37 Comp. Lab. L. & Pol’y J. 471 (2016).

[19] Antonio Aloisi, Commoditized workers: Case study research on labour law issues arising from a set of ‘on-demand/gig economy’ platforms, 37 Comp. Lab. L. & Pol’y J. 653 (2016).

[20] Foundation for European Progressive Studies (FEPS), Work in the European Gig-Economy (2017).

[21] Caroline O’Donovan, An Invisible Rating System At Your Favorite Chain Restaurant Is Costing Your Server, BuzzFeed News, Jun. 21, 2018; Whitney Filloon, How Rating Your Server Is Making Their Life Miserable, Eater, Jun. 22, 2018, https://www.eater.com/2018/6/22/17492528/tablets-restaurants-surveys-score-servers.

[22] Pav Akhtar, Phoebe Moore & Martin Upchurch, supra note 7.

[23] Matthew T. Bodie, Miriam A. Cherry, Marcia L. McCormick & Jintong Tang, The Law and Policy of People Analytics, 88 U. Colo. L. Rev. 961 (2017).

[24] For a thorough review carried out by a public authority of common EPM practices see Article 29 Data Protection Working Party (now, the European Data Protection Board), Opinion 2/2017 on data processing at work, adopted on 8 June 2017.

[25] Humanyze, https://www.humanyze.com.

[26] According to the Article 29 Data Protection Working Party (now, the European Data Protection Board), supra note 24: “The risk is not limited to the analysis of the content of communications. Thus, the analysis of metadata about a person might allow for an equally privacy-invasive detailed monitoring of an individual’s life and behavioural patterns.”

[27] Kai Fischbach, Peter A. Gloor, Casper Lassenius, Daniel Olguin Olguin, Alex Sandy Pentland, Johannes Putzke & Detlef Schoder, Analyzing the Flow of Knowledge with Sociometric Badges (COINs 2009), http://www.ickn.org/documents/COINs2009_Fischbach_Gloor_Lassenius_etc.pdf.

[28] The workplace of the future, The Economist, Mar. 28, 2018, https://www.economist.com/news/leaders/21739658-artificial-intelligence-pushes-beyond-tech-industry-work-could-become-faireror-more; Olivia Solon, Big Brother isn’t just watching: workplace surveillance can track your every move, The Guardian, Nov. 6, 2017, https://www.theguardian.com/world/2017/nov/06/workplace-surveillance-big-brother-technology?CMP=share_btn_tw.

[29] Crossover, https://www.crossover.com/worksmart/#worksmart-productivity-tool.

[30] Interguard, https://interguardsoftware.com/web-filtering.html.

[31] Robert Booth, UK businesses using artificial intelligence to monitor staff activity, The Guardian, Apr. 7, 2019, https://www.theguardian.com/technology/2019/apr/07/uk-businesses-using-artifical-intelligence-to-monitor-staff-activity.

[32] Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016).

[33] Ifeoma Ajunwa, Kate Crawford & Jason Schultz, supra note 10.

[34] Matthew T. Bodie, Miriam A. Cherry, Marcia L. McCormick & Jintong Tang, supra note 23.

[35] Olivia Solon, supra note 28.

[36] Emanuele Dagnino, supra note 8.

[37] Matthew T. Bodie, Miriam A. Cherry, Marcia L. McCormick & Jintong Tang, supra note 23.

[38] The workplace of the future, supra note 28.

[39] Matthew T. Bodie, Miriam A. Cherry, Marcia L. McCormick & Jintong Tang, supra note 23.

[40] Frank Hendrickx, Privacy en elektronisch toezicht [Privacy and Electronic Surveillance], in 2 Arbeidsrecht (Frank Hendrickx ed., 2015).

[41] Ifeoma Ajunwa, Kate Crawford & Jason Schultz, supra note 10.

[42] Pav Akhtar, Phoebe Moore & Martin Upchurch, supra note 7.

[43] European Economic and Social Committee, Artificial intelligence – The consequences of artificial intelligence on the (digital) single market, production, consumption, employment and society (own-initiative opinion), May 31, 2017, OJ C 288, 31.8.2017, p. 43.

[44] Matthew T. Bodie, Miriam A. Cherry, Marcia L. McCormick & Jintong Tang, supra note 23.

[45] Frank Pasquale, supra note 8; Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (2018).

[46] Cathy O’Neil, supra note 32.

[47] Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (2018).

[48] Sarah Adler-Milstein, Dylan Bellisle, Peter J. Fugiel, Meghan Jarpe, Saravanan Kesavan, Susan J. Lambert, Lisa McCorkell, Lori Ann Ospina, Pradeep Pendem, Erin Devorah Rapoport & Joan Williams, Stable Scheduling Increases Productivity and Sales (The Stable Scheduling Study, 2018), http://worklifelaw.org/projects/stable-scheduling-study/report/. See also the discussion of automated scheduling in Janine Berg, Protecting Workers in the Digital Age: Technology, Outsourcing and the Growing Precariousness of Work, 41 Comp. Lab. L. & Pol’y J. (2019).

[49] Valerio De Stefano, supra note 18.

[50] See also Jerry Kaplan, supra note 13.

[51] Cited by Olivia Solon, supra note 28.

[52] Matthew Finkin, Article 7: Privacy and Autonomy, 21 Emp. Rts. & Emp. Pol’y J. 589 (2017).

[53] Frank Hendrickx, Employment privacy, in Comparative Labour Law and Industrial Relations in Industrialized Market Economies (Roger Blanpain ed., 2014).

[54] Matthew T. Bodie, Miriam A. Cherry, Marcia L. McCormick & Jintong Tang, supra note 23.

[55] Angelo Romano & Andrea Zitelli, Il reddito di base è una cosa seria [Basic Income Is a Serious Matter], Valigia Blu, Mar. 7, 2017, https://storie.valigiablu.it/reddito-di-base.

[56] Jathan Sadowski, Why Silicon Valley is embracing universal basic income, The Guardian, Jun. 22, 2016, https://www.theguardian.com/technology/2016/jun/22/silicon-valley-universal-basic-income-y-combinator.

[57] See, for instance, Guy Standing, A Precariat Charter: From Denizens to Citizens (2014); Tim Hollo, Can less work be more fair? A discussion paper on Universal Basic Income and shorter working week (The Green Institute, 2016).

[58] See, however, Brishen Rogers, Basic Income and the Resilience of Social Democracy, 41 Comp. Lab. L. & Pol’y J. (2019), and the other articles on UBI published in the same issue of that journal.

[59] Matt Zwolinski, The Pragmatic Libertarian Case for a Basic Income Guarantee, Cato Unbound, Aug. 4, 2014, https://www.cato-unbound.org/2014/08/04/matt-zwolinski/pragmatic-libertarian-case-basic-income-guarantee. Janine Berg, supra note 48, also dismisses the idea that a UBI could adequately substitute for employment protection.

[60] Silvana Sciarra, EU Commission Green Paper ‘Modernising labour law to meet the challenges of the 21st century’, 36 ILJ 375 (2007).

[61] Valerio De Stefano, A Tale of Oversimplification and Deregulation: The Mainstream Approach to Labour Market Segmentation and Recent Responses to the Crisis in European Countries, 43 ILJ 253 (2014).

[62] Colin Fenwick & Tonia Novitz, Human Rights at Work: Perspectives on Law and Regulation (2010); Harry Arthurs, Who’s afraid of globalization? Reflections on the future of labour law, in Globalization and the Future of Labour Law (John D.R. Craig & S. Michael Lynk eds., 2006); Virginia Mantouvalou, Are Labour Rights Human Rights?, 3 ELLR 151 (2012).

[63] George Politakis, Protecting Labour Rights as Human Rights: Present and Future of International Supervision (ILO, 2007).

[64] For an extensive discussion of how the protection of the human dignity and human rights of workers can be posed as a foundational element of labour law, see the contributions collected in Philosophical Foundations of Labour Law (Hugh Collins, Gillian Lester & Virginia Mantouvalou eds., 2019). For an in-depth critical appraisal of human-rights-based arguments in labour-law discourses, see, however, Matthew Finkin, Worker Rights as Human Rights: Regenerative Reconception or Rhetorical Refuge?, in Research Handbook on Labour, Business and Human Rights Law (Janice Bellace & Beryl ter Haar eds., 2019).

[65] See Frank Hendrickx, Article 7. Protection of Private and Family Life and Article 8. Protection of Personal Data, in The Charter of Fundamental Rights of the European Union (CFREU) and the Employment Relation (Stefan Clauwaert, Filip Dorssemont, Klaus Lörcher & Mélanie Schmitt eds., 2019).

[66] Bărbulescu v. Romania, No. 61496/08, ECHR 2017.

[67] Article 9 of the revised Convention.

[68] Sandra Wachter & Brent Mittelstadt, A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI, Colum. Bus. L. Rev. 2019(2), 494–620, https://journals.library.columbia.edu/index.php/CBLR/article/view/3424.

[69] For an in-depth account of the potential shortcomings of Article 22, see Sandra Wachter, Brent Mittelstadt & Luciano Floridi, Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation, 7 IDPL 76 (2017). A critical question will concern the interpretation of the word “solely” in this context. Adequate standards are needed to ensure that the nominal involvement of humans who merely sanction decisions made by automatic mechanisms will not deprive data subjects of the protection under Article 22.

[70] Another exception applies when data subjects give their express consent to solely automated decision-making. It is worth noting, however, that the Article 29 Data Protection Working Party (now, the European Data Protection Board), in its Opinion 2/2017 on data processing at work, adopted on 8 June 2017, observed: “consent is highly unlikely to be a legal basis for data processing at work, unless employees can refuse without adverse consequences.”

[71] Sandra Wachter & Brent Mittelstadt, supra note 68.

[72] Article 88, Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation); see, for initial comments, Ilaria Armaroli & Emanuele Dagnino, A Seat at the Table: Negotiating Data Processing in the Workplace. A National Case Study and Comparative Insights, 41 Comp. Lab. L. & Pol’y J. (2019); Frank Hendrickx, Privacy, data protection and measuring employee performance. The triggers from technology and smart work (Regulating for Globalization. Trade, Labor and EU Law Perspectives, Mar. 21, 2018), http://regulatingforglobalization.com/2018/03/21/privacy-data-protection-and-measuring-employee-performance-the-triggers-from-technology-and-smart-work; Federico Fusco, Employee Privacy in the Context of EU Regulation No. 2016/679: Some Comparative Remarks (paper presented at the XVI International Conference in Commemoration of Professor Marco Biagi, Mar. 2018).

[73] Frank Hendrickx, supra note 72.

[74] For an analysis of the United States’ legal framework in this context, see Frank Pasquale, supra note 8. See also Ifeoma Ajunwa, Kate Crawford & Jason Schultz, supra note 10; Matthew T. Bodie, Miriam A. Cherry, Marcia L. McCormick & Jintong Tang, supra note 23; Frank Hendrickx, supra note 53.

[75] Stefano Liebman, Individuale e collettivo nel contratto di lavoro [The Individual and the Collective in the Employment Contract] (1993).

[76] Zoe Adams, Louise Bishop, Simon Deakin, Colin Fenwick, Sara Martinsson Garzelli & Giudy Rusconi, The Economic Significance of Laws Relating to Employment Protection and Different Forms of Employment: Analysis of a Panel of 117 Countries, 1990–2013, Int. Lab. Rev. (2018), https://doi.org/10.1111/ilr.12092.

[77] Simon Deakin, Colin Fenwick & Prabirjit Sarkar, Labour law and inclusive development: the economic effects of industrial relations laws in middle income countries, in Institutional Competition between Common Law and Civil Law: Theory and Policy (Michèle Schmiegelow & Henrik Schmiegelow eds., 2014); Felix FitzRoy & Kornelius Kraft, Co-determination, efficiency and productivity, 43 British Journal of Industrial Relations 233 (2005).

[78] Simon Deakin, Jonas Malmberg & Prabirjit Sarkar, How do labour laws affect unemployment and the labour share of national income? The experience of six OECD countries, 1970–2010, 153 International Labour Review 1 (2014).

[79] Directive 2002/14/EC of the European Parliament and of the Council of 11 March 2002 establishing a general framework for informing and consulting employees in the European Community.

[80] Swedish Employment (Co-Determination in the Workplace) Act (1976:580), Section 19, for instance, binds employers “to regularly inform an employees’ organisation in relation to which [they are] bound by collective bargaining agreement as to the manner in which the business is developing in respect of production and finance and as to the guidelines for personnel policy.” Analogous duties apply even when the employer is not bound by a collective agreement.

[81] European Economic and Social Committee, supra note 43. See now also ILO Global Commission on the Future of Work, Work for a Brighter Future (2019).

[82] Information and consultation and collective negotiation on data collection and processing are also recommended under the 1997 ILO Code of practice on the protection of workers’ personal data. See also Sangeet Paul Choudary, The architecture of digital labour platforms: Policy recommendations on platform design for worker well-being (ILO Future of Work Research Paper Series, No. 3 2018).

[83] Global Union Sets new Rules for the Next Frontier of Work—Ethical AI and Employee Data Protection (UNI Global Union, Dec. 11, 2017), http://uniglobalunion.org/news/global-union-sets-new-rules-next-frontier-work-ethical-ai-and-employee-data-protection.

[84] Phoebe Moore, Martin Upchurch & Xanthe Whittaker, supra note 10; Ilaria Armaroli & Emanuele Dagnino, supra note 72.

[85] Achim Seifert, Employee Data Protection in the Transnational Company, in Game Changers in Labour Law: Shaping the Future of Work, Bulletin of Comparative Labour Relations No. 100 (Frank Hendrickx & Valerio De Stefano eds., 2018).

[86] Recently, the OECD also adopted a recommendation calling for social dialogue to play a role in the introduction and use of artificial intelligence at work. See Organization for Economic Co-operation and Development (OECD), Recommendation of the Council on Artificial Intelligence (2019).