WHEN YOUR BOSS COMES HOME
The Future of Work is here – and it has arrived much more swiftly than anyone could have imagined, accelerated by the global spread of Covid-19. In jurisdictions across the world, workers are forced – wherever possible – to work from home. Until fairly recently, debates about AI and the future of work focused mostly on job loss and substitution: will jobs be replaced by emerging technologies?
In this article, I focus on another, at least equally pressing, question: at least some of the changes which this latest wave of automation will bring to the world of work require a fundamental rethink of key elements of the traditional apparatus of employment law and labour market regulation. This is not, however, due to the much-vaunted rapid displacement of employment and the consequent need to tackle mass technological unemployment. Instead of taking away workers’ jobs, I suggest, advances in AI-driven decision-making will first and foremost change their managers’ daily routines, augmenting and eventually replacing human day-to-day control over the workplace: we are witnessing the rise of the ‘algorithmic boss.’ A technology which was still in its relative infancy even a few months ago has received a massive boost from new working-from-home rules, with employers scrambling to find ways of controlling workers away from the traditional workplace, monitoring behaviour, and measuring productivity.
The discussion is structured as follows. Section two looks at the automation of decision-making across the lifecycle of the employment relationship. A number of case studies, drawing on both start-ups and well-established operators, demonstrate how AI has come to augment, or even replace, traditional management in the exercise of the full range of employer functions, from digital reputation screening and CV filtering to on-going job instructions, performance monitoring, and termination decisions. This is not merely a return to (digital) Taylorism: both the kinds of data considered and the probabilistic patterns relied upon in machine learning differ fundamentally from the traditional management structures around which employment law has been designed. The resulting regulatory challenges sit at the heart of the subsequent discussion: following the three fault lines, analysis will first explore the implications of new forms of data collection and organisation for privacy and data protection, before turning to the implications of AI processing and control, including ex post facto explicability and the cross-jurisdictional scaling of successful machine learning algorithms. It is here that we encounter genuinely novel questions: the large body of scholarship exploring the ascription of employer responsibility has always proceeded on the basis that the issues at stake are legal ones – whether in sham contracting or (ab-)uses of corporate personality – and thus, at least in principle, amenable to legal solutions drawing on the same bodies of doctrine. The diffusion of responsibility inherent in AI decision-making, on the other hand, is ultimately as much a technical challenge as it is a legal one. A brief conclusion highlights the importance of regulatory agency in shaping the development of algorithmic management.
2. Black Box Boss
In a remarkably prescient note, David Autor in 2001 explored the consequences of ‘wiring the labour market’. Rather than bringing about mass unemployment, however, it appears that the immediate consequence of automation has been a ‘(re-)wiring of the firm’: as the costs of data collection and processing continue to fall, employers are increasingly able to deploy technology to monitor – and control – the workplace to a hitherto unimaginable degree.
What does this mean in practice? Ben Waber, CEO of one of the first start-ups active in the field, has written extensively about the rise of ‘people analytics,’ viz ‘how sensing technology and big data about organizations in general, can have massive effects on the way companies are organized. From changing the org chart to changing coffee areas, no aspect of organizations will be untouched by the widespread application of this data.’ The impact of data-driven Human Resource Management, he argues, will by no means be limited to large corporations:
The people analytics system would essentially be ‘management in a box’ for small business … with only a few sensors and some basic programs, [they] could get automated help setting up their management structure and generating effective collaboration patterns. They could even receive feedback on their progress [… as well as] automated suggestions on org structure, compensation systems, and so on.
The underlying trends identified in Waber’s work are quickly becoming pervasive. As early as 2015, the Economist Intelligence Unit highlighted ‘explosive big data IT growth’ in HR, identifying ‘major investments in IT capabilities to support workforce analytics/planning’. Covid-19 will only help to accelerate these trends.
The first, and perhaps starkest, illustration of algorithmic management could be seen in the gig economy, with platforms relying on sophisticated rating mechanisms to manage their workforce. Although designed, at first glance, to provide consumers and workers with accurate feedback about other platform providers, ratings quickly turned out to have little informational value, given their clustered distribution. Instead, as Tom Slee has argued, reputation algorithms were designed to exercise control over platforms’ workforces, operating as
. . . a substitute for a company management structure, and a bad one at that. A reputation system is the boss from hell: an erratic, bad-tempered and unaccountable manager that may fire you at any time, on a whim, with no appeal.
Rather than merely signalling quality, then, the real point of rating algorithms in the gig economy was to exercise employer control in myriad ways. Platform-based work thus served as an early laboratory for the development of algorithmic management tools. Today, however, algorithmic management has spread across industries and workplaces.
Start-ups and established software providers compete in offering software that promises to support, and potentially automate, management decision-making across all dimensions of work, including the full socio-economic spectrum of workplaces as well as the entire life cycle of the employment relationship: whether in factories or offices, universities or professional services firms, the exercise of employer functions from hiring and managing workers through to the termination of the employment relationship can already be automated.
When it comes to the inception of the employment relationship, for example, AI-driven software now allows prospective employers to conduct extensive screening of applicants’ online presence. Software provider Fama promises to screen workers’ online presence in unprecedented breadth and depth:
Standard background checks don’t catch everything they should. While traditional checks help verify important information, few screening methods can ensure that current and future employees are aligned with your mission and values. Even fewer can predict whether they’ll exhibit toxic behavior. As sexual harassment, bigotry, and other workplace issues move to the forefront of our society, companies that rely on standard background checks risk brand damage and lost authenticity. Fama brings compliant, AI-based employment screening to help you create a productive, welcoming workplace and get you the information you need.
The deployment of recruitment algorithms is not limited to background screening: the entire process, from analysing CVs through to ranking candidates, making offers, and determining salary levels can be automated – and increasingly is, with sometimes deeply problematic consequences: in early 2019, media reports suggested that Amazon had been forced to abandon its automated recruitment tool after the machine learning algorithm had begun systematically to reject female applicants for engineering roles within the firm.
Once employees are hired, they might find themselves under the watchful eye of the algorithmic boss: the day-to-day management of the enterprise-internal market (another core employer function) can similarly be automated to a surprising degree.
Providers such as Humanyze, for example, gather digital and physical information to analyse and ‘uncover informal communication networks. These communication networks are fundamental to understanding how work gets done on your team and within your organization.’ Management ‘no longer have to rely on surveys or observations to understand what’s working (and what’s not). [Humanyze] metrics quantify the previously un-measurable factors for team success, like collaboration and communication, that are essential for productivity and performance.’
Workforce analytics software, finally, can even be relied upon in exercising the employer’s power of terminating the employment relationship. When faced with allegations of retaliatory dismissals in response to concerted trade union activity in one of its warehouses, Amazon revealed the extensive use of algorithmic management: the claimant’s employment had been terminated for a lack of productivity, as determined by a neutral algorithm. Local warehouse management, the company’s defence asserted, had had no input, control, or understanding of the details of the system deployed.
Present space limitations prohibit a further exploration of how the exercise of the full range of employer functions can – and has – become automated through the advent of people analytics. The picture emerging from the rich literature on point is clear: management automation enables the exercise of hitherto impossibly granular control over every aspect of the working day. This, however, is not merely a return to (digital) Taylorism: the kinds of data considered, the probabilistic patterns relied upon in machine learning, and new forms of exercising control all differ fundamentally from the traditional management structures around which employment law has been designed.
A combination of real-time data collection and machine-learning analysis allows employers to monitor and direct their workforce on a continuous basis – whilst dispersing responsibility to algorithms. Driven by unpredictable and fast-evolving parameters, management decisions become difficult to record, or even explain. The remaining sections of this article explore the ensuing control/accountability paradox, looking first at the concentration of control, before turning to the diffusion of responsibility, in order to develop the three fault lines of algorithmic management.
The first element in the rise of people analytics is the gathering of hitherto unimaginable quantities of data: fine-grained information about individual employees. There are three broad sources of data in the modern workplace: digital information, sensors, and a growing trend of employee self-tracking. As regards digital information, first, a large number of providers offer software solutions that allow employers to capture employees’ digital activities, from keystroke logs through to screenshots taken at regular (yet random) intervals. Information about phone calls, emails, and other communication channels can similarly be recorded. Even where the actual substance of such communications is not disclosed or analysed, so-called ‘metadata’ (for example, the duration and frequency of calls between specific individuals, or the size and timing of email attachments sent to external recipients) can easily be captured.
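The content/metadata distinction can be made concrete with a short sketch (a hypothetical message and field names, using only the Python standard library): even where the body of a message is never inspected, its headers alone reveal who communicates with whom, when, and at what volume.

```python
from email import message_from_string
from email.utils import parsedate_to_datetime

# A hypothetical captured message; the body is never inspected below.
raw = """\
From: alice@example.com
To: bob@external-client.com
Date: Mon, 06 Apr 2020 09:14:00 +0000
Subject: Quarterly figures

(body omitted from analysis)
"""

def extract_metadata(raw_message: str) -> dict:
    """Record only 'metadata': sender, recipient, timestamp, and size."""
    msg = message_from_string(raw_message)
    return {
        "from": msg["From"],
        "to": msg["To"],
        "sent_at": parsedate_to_datetime(msg["Date"]).isoformat(),
        "size_bytes": len(raw_message.encode()),  # total message size
    }

print(extract_metadata(raw))
```

Aggregated over months, even this deliberately restricted record suffices to map communication patterns across and beyond the firm.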
In addition to these digital crumbs, increasingly sophisticated sensors allow the capture of physical information: Uber famously pioneered the use of its drivers’ iPhones to measure how quickly individuals accelerate and brake, thus capturing smooth and abrupt driving patterns. Surveillance, crucially, is not limited to employer-imposed monitoring: whether through the use of fitness trackers or health apps on our telephones, there is an increasing trend of self-monitoring or self-tracking, the results of which can easily be combined with data gathered in the workplace.
In addition to the sheer quantity of information that can be captured, the reliance on these sources raises two further concerns: first, that the traditional boundary between the workplace and individuals’ private lives is rapidly breaking down. Information about an individual’s weekend activities can easily be combined with measures of Monday morning productivity, revealing patterns far beyond traditional employer concerns. Second, even where information is collected and stored in anonymised form, as information is increasingly organised in machine-readable formats, data sets from different sources can – at least in principle and subject to data processing consent and privacy laws in jurisdictions such as the European Union – easily be combined to build large employee databases, and – again, at least in principle – quickly identify individuals within a firm.
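A minimal sketch (entirely hypothetical data sets and field names) illustrates how such re-identification by linkage works: two sources which each avoid naming employees directly can nonetheless be joined on shared quasi-identifiers.

```python
# Hypothetical illustration: two 'anonymised' data sets, neither of which
# names employees directly, joined on quasi-identifiers (department + shift).

badge_logs = [  # from building sensors
    {"badge": "A17", "dept": "Logistics", "shift": "night", "avg_entry": "21:58"},
    {"badge": "B42", "dept": "Logistics", "shift": "day",   "avg_entry": "08:55"},
]

hr_records = [  # from the HR system, 'pseudonymised'
    {"emp_id": 9001, "dept": "Logistics", "shift": "night"},
    {"emp_id": 9002, "dept": "Logistics", "shift": "day"},
]

def link(records_a, records_b, keys):
    """Join two data sets on shared quasi-identifier fields."""
    matches = []
    for a in records_a:
        for b in records_b:
            if all(a[k] == b[k] for k in keys):
                matches.append({**a, **b})
    return matches

linked = link(badge_logs, hr_records, keys=("dept", "shift"))
# Each badge is now tied to an employee ID: re-identification by linkage.
print(linked)
```

The smaller the group sharing a given combination of attributes, the more easily an ostensibly anonymous record resolves to a single named individual.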
Recording and organising large amounts of data in and of itself is not enough, however: key to the rise of people analytics is the availability of increasingly powerful tools to process and analyse what has been captured. The rise of Artificial Intelligence in general, and machine learning in particular, has become the object of intense discussion in legal and policy debates beyond the scope of the present enquiry. It is important to note that (domain-specific) Artificial Intelligence is not a novel concept, nor even a new term. Historically, however, the technology was mostly restricted to so-called ‘expert systems,’ where a series of decisions were coded into a complex decision tree.
More recently, the advent of large data sets and precipitous drops in the cost of processing power have fuelled the rise of machine learning – probabilistic analyses of large datasets, relying on sophisticated statistical modelling to spot patterns or correlations in the data. This is a crucial step away from our traditional understanding of algorithms: machine learning is designed to rely on a constant evolution and redefinition of parameters – algorithmic control is no longer confined to experiences taught through training data sets and pre-programmed analytical routines. The results are ever-changing decision structures: as increasing amounts of data are collected about individual employees and every aspect of their working lives scrutinised on an on-going basis, the factors considered relevant for key metrics such as productivity or innovation will continue to change.
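This instability can be illustrated with a deliberately simplified sketch (hypothetical data and a toy ‘learning’ rule, not any vendor’s actual algorithm): as new observations arrive, retraining silently changes which factor the system treats as decisive for productivity.

```python
from statistics import mean

def best_predictor(rows, features, label):
    """Toy 'learning' step: pick the feature whose values best separate
    high- from low-productivity workers (difference of group means)."""
    def separation(f):
        hi = [r[f] for r in rows if r[label] == 1]
        lo = [r[f] for r in rows if r[label] == 0]
        return abs(mean(hi) - mean(lo))
    return max(features, key=separation)

features = ("emails_per_day", "badge_hours")

# Initial training data: long hours appear to drive productivity...
batch_1 = [
    {"emails_per_day": 20, "badge_hours": 10, "productive": 1},
    {"emails_per_day": 22, "badge_hours": 9,  "productive": 1},
    {"emails_per_day": 21, "badge_hours": 5,  "productive": 0},
    {"emails_per_day": 19, "badge_hours": 4,  "productive": 0},
]
print(best_predictor(batch_1, features, "productive"))  # badge_hours

# ...but as more data arrives, the decisive factor silently shifts.
batch_2 = batch_1 + [
    {"emails_per_day": 60, "badge_hours": 7, "productive": 1},
    {"emails_per_day": 58, "badge_hours": 7, "productive": 1},
    {"emails_per_day": 5,  "badge_hours": 7, "productive": 0},
    {"emails_per_day": 6,  "badge_hours": 7, "productive": 0},
]
print(best_predictor(batch_2, features, "productive"))  # emails_per_day
```

No one has reprogrammed anything between the two runs: the criterion applied to workers changed simply because the data changed, which is precisely what makes such decisions hard to record or explain after the fact.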
In a first wave of people analytics, the emphasis was on augmenting managerial decision-making power: machine learning algorithms would scour big data sets for important insights into the workplace, from the arrangement of physical spaces to productive and unproductive team behaviours, and then present these insights to management in order to inform their choices.
At least from a technical perspective, however, there is nothing inherent in the capabilities of such software that limits it to informing traditional managers: in principle, at least, their actual decisions can be fully automated. Amazon’s Baltimore warehouse, discussed above, is a case in point:
Amazon’s system tracks the rates of each individual associate’s productivity and automatically generates any warnings or terminations regarding quality or productivity without input from supervisors […] If an associate receives two final written warnings or a total of six written warnings within a rolling 12-month period, the system automatically generates a termination notice.
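The logic of such a rule can be sketched in a few lines (an illustrative reconstruction with assumed data structures and thresholds drawn from the passage above, not Amazon’s actual code): each warning carries a date, and only those falling within a rolling 12-month window count towards termination.

```python
from datetime import date, timedelta

def should_terminate(warnings, today, window_days=365,
                     max_final=2, max_total=6):
    """Sketch of a rolling-window sanction rule of the kind described
    above. 'warnings' is a list of (date, is_final) pairs; thresholds
    mirror the quoted description but are illustrative only."""
    cutoff = today - timedelta(days=window_days)
    recent = [(d, final) for d, final in warnings if d >= cutoff]
    finals = sum(1 for _, final in recent if final)
    return finals >= max_final or len(recent) >= max_total

warnings = [
    (date(2019, 3, 1), False),
    (date(2019, 8, 15), True),
    (date(2020, 1, 10), True),   # second final warning within 12 months
]
print(should_terminate(warnings, today=date(2020, 2, 1)))   # True

# The same warnings lapse once they fall out of the rolling window:
print(should_terminate(warnings[:2], today=date(2020, 9, 1)))  # False
```

The point of the sketch is how little is needed: once warnings are generated automatically from productivity metrics, dismissal becomes a threshold check that no human need ever review.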
The use of algorithmic management to sanction workers was pioneered in the gig economy, with platforms keen to detect and prevent any ‘gaming’ of their systems by individuals: ‘[D]rivers are penalized for rejecting lower paid work in favo[u]r of higher paid work, which is illustrative of another constraint on their “freedom” as independent entrepreneurs.’ For some time, Uber also instigated brief deactivation periods of up to 10 minutes as an immediate sanction for a driver’s repeated refusal to accept unprofitable rides.
* * *
Taken together, the increasing availability of data, sophisticated machine learning processing, and algorithmic control, are key ingredients in a fundamental change which is not merely on the horizon as a distant future vision, but already becoming reality in workplaces across the socio-economic spectrum, as the warehouse example, above, demonstrates. The algorithmic boss can hover over each worker like a modern-day Panoptes, the all-seeing watchman of Greek mythology: from vetting potential entrants and assigning tasks, to controlling how work is done and remunerated, and sanctioning unsatisfactory performance—often without any transparency or accountability. As US District Judge Chen put it, citing Michel Foucault, ‘a state of conscious and permanent visibility . . . assures the automatic functioning of power’.
From a legal perspective, this dramatic increase in control might at first be thought to be welcome: most employment law systems place significant emphasis on control and/or subordination as a key factor in determining when a relationship should come within the scope of protective norms. At the same time as dramatically concentrating employer control, however, key elements of algorithmic management can also be relied upon to diffuse responsibility: questions as to who should be liable – the employing enterprise? the designers of the software? the providers of contaminated training data? – can no longer necessarily be tackled with the traditional tools of employment law. This is the fundamental technical challenge in the rise of people analytics.
The scope of employment law has been a vexed question for decades: in most legal systems, control and subordination are central criteria in the definition of the employee (who enjoys legal rights and protection), her employer (who is subject to the corresponding duties), and the contract of employment between them. Drawing on Coase’s Theory of the Firm, Deakin and Wilkinson have demonstrated how this legal model plays a similarly crucial role in the economics of labour market regulation: employment law is the trade-off site between the benefits of control imposed on employees, and the cost of protective obligations imposed on employers in return. Individual instances of managerial control are attributed to the employer’s (legal) personality in order to ensure accountability and facilitate enforcement.
A vast literature on ‘atypical work’ has explored the problematic implications of this approach in work arrangements which deviate from the received paradigm of stable, open-ended employment for a single employer. Examples include the ‘fissuring workplace,’ where employer control is exercised by multiple parties through outsourcing agreements, the use of temporary agency work, or complex corporate groups; and false self-employment, where employer control is contractually denied through the fiction of independent contractor status. Once the reality of control is thus camouflaged, so-called ‘atypical’ or ‘non-standard’ workers may no longer enjoy access even to basic protective norms such as minimum wages or discrimination law.
Crucially, however, the mechanisms which hide the reality of employer control in ‘non-standard work’ are fundamentally legal ones: from the use of corporate personality (e.g. in the incorporation of subsidiary agency companies) to contract law (e.g. in inserting independent contractor or self-employment clauses in traditional employment contracts), the problem is one of ‘“armies of lawyers” contriving documents … which simply misrepresent the true rights and obligations on both sides,’ as Employment Tribunals have repeatedly highlighted.
In principle, at least, this makes it relatively straightforward to respond to evasion: existing legal mechanisms create the difficulty in ascribing responsibility to the controlling employer, and existing legal mechanisms can be relied on to restore it. Doctrines such as sham contracting or the primacy of facts allow courts to look through self-employment clauses and focus on the reality of employer control; and the corporate veil may be pierced to combat fraudulent abuse by controlling parent entities.
The challenge arising from the advent of people analytics, on the other hand, is radically different: algorithmic management does not rely on legal mechanisms to obfuscate control in order to evade responsibility – rather, diffuse and potentially inexplicable control mechanisms are inherent in the use of increasingly sophisticated rating systems and algorithms.
The rise of algorithmic management is slowly but definitely becoming a focal point of academic analysis and broader policy debates surrounding the future of work. The patterns of discourse are reminiscent of those surrounding the early days of what was then frequently referred to as the ‘sharing economy’. Once more, we are faced with starkly conflicting messages, juxtaposing the promise of the future of work with dire predictions of (algorithmically perfected) exploitation. In reality, of course, there are elements of truth in both accounts: we should be very wary of easy regulatory solutions proposed by proponents on either side, whether it is complete deregulation on the one hand, or a Luddite fantasy of smashing technology on the other. The onset of the Covid-19 pandemic, and the ensuing acceleration in the roll-out of algorithmic management tools, will cast these issues into yet starker relief.
The real challenge lies in harnessing the unequivocal potential in the trends which will shape tomorrow’s work, whilst ensuring that no one is left behind in enjoying decent and sustainable working conditions. More fundamentally, this requires that we avoid falling into the trap of (technological) determinism: none of the trends identified in this paper come as the result of some inexorable logic. Historical evidence strongly suggests that technological progress makes work easier, safer, and more productive. At the same time, however, it opens up the possibilities of abuse, from privacy-invading surveillance to precarious, highly pressured work.
What is essential, then, is a real sense of agency, of the power and the path-dependence of regulatory choices. Where our efforts are focused depends on legal and economic incentives, which ultimately determine whether technology is deployed in support of decent work – or whether it presents a real threat to it. As the Black Box Boss comes home, responsible regulation will be more important – and difficult – than ever.
* Professor of Law, University of Oxford. I acknowledge funding from the Economic and Social Research Council, Grant No ES/S010424/1, and prior work forthcoming in the Comparative Labor Law and Policy Journal, on which the present contribution builds. The usual disclaimers apply.
 D Autor, ‘Wiring the Labor Market’ (2001) 15 Journal of Economic Perspectives 25.
 Ben Waber, People Analytics (FT Press, Pearson 2013) 178.
 Waber, People Analytics, supra note 2, 191.
 Tom Slee, What’s Yours is Mine: Against the Sharing Economy (O/R Books 2015).
 Ibid. This is confirmed by internal Uber documents, which suggest that, in 2014, fewer than 3 per cent of drivers were ‘at risk of being deactivated’ as a result of a rating below 4.6 stars (out of 5): James Cook, ‘Uber’s internal charts show how its driver-rating system actually works’, Business Insider UK (11 February 2015), http://uk.businessinsider.com/leaked-charts-show-how-ubers-driver-rating-system-works-2015–2, archived at https://perma.cc/5UPM-SWFN. It might be argued that this is a result of the pressure of the rating system keeping the worker pool at a high standard, with lower performing bands excluded from the market. As Slee explains, however, this is not the case: ‘J-curve rating distributions [where nearly all data points are at the high end of the scale], like those of the Sharing Economy reputation systems, show up whenever people rate each other’ (Slee, What’s Yours is Mine (n 5) 101).
 In previous work, I have defined a ‘function’ of being an employer as one of the various actions employers are entitled or obliged to take as part of the bundle of rights and duties falling within the scope of the open-ended contract of service: J Prassl, The Concept of the Employer (OUP 2015) 24–25. In trawling the established tests of employment status such as control, economic dependence, or mutuality of obligation for these employer functions, there are endless possible mutations of different fact scenarios, rendering categorisation purely on the basis of past decisions of limited assistance. The result of this analysis of concepts underlying different fact patterns, rather than the actual results on a case-by-case basis, is the following set of functions, with the presence or absence of individual factors becoming less relevant than the specific role they play in any given context – the ‘equipollency principle’ (Äquivalenzprinzip): L Nogler, ‘Die Typologisch-Funktionale Methode am Beispiel des Arbeitnehmerbegriffs’ (2009) 10 ZESAR 459, 463. Whilst this analysis was developed primarily on the basis of Common Law jurisdictions, subsequent work suggests that the approach is capable of being similarly developed in Civilian jurisdictions: see e.g. J Prassl and M Risak, ‘Uber, TaskRabbit, and Co.: Platforms as Employers? Rethinking the Legal Analysis of Crowdwork’ (2016) 37 Comparative Labor Law and Policy Journal 619-651.
 See e.g. E Ales, Y Curzi, T Fabbri, O Rymkevich, I Senatori and G Solinas (eds), Working in Digital and Smart Organizations: Legal, Economic and Organizational Perspectives on the Digitalization of Labour Relations (Palgrave Macmillan, 2018).
 G Neff and D Nafus, Self-Tracking (MIT Press Essential Knowledge series, 2016).
 Some of the classic early citations include J McCarthy, M Minsky, N Rochester and C Shannon, A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence (1955); and A Turing, ‘Computing Machinery and Intelligence’ (1950) 49 Mind 433.
 For an illustration in the employment context, see e.g. the UK Government’s employment status tool, https://www.gov.uk/guidance/check-employment-status-for-tax
 N Polson and J Scott, AIQ: How Artificial Intelligence Works and How We Can Harness its Power for a Better World (Bantam Press, 2018).
 D Heaven (ed), Machines That Think (New Scientist, 2017).
 I Goodfellow, Y Bengio, A Courville, Deep Learning (The MIT Press, 2016).
 In jurisdictions covered by the European Union’s General Data Protection Regulation, such an approach would not be legal, given a right to have a ‘human in the loop,’ i.e. not to be subject to fully automated decisions: see GDPR, Art 22.
 The Verge (legal documents as linked in article) supra note 11.
 Alex Rosenblat and Luke Stark, ‘Algorithmic labor and information asymmetries: a case study of Uber’s drivers’ (2016) 10 International Journal of Communication 3758, 3761, 3762, 3766.
 Doug H, ‘Fired from Uber: why drivers get deactivated, and how to get reactivated,’ Ride Sharing Driver (21 April 2016), http://www.ridesharingdriver.com/fired-uber-drivers-get-deactivated-and-reactivated/, archived at https://perma.cc/3MQL-4TWD; Kari Paul, ‘The new system Uber is implementing at airports has some drivers worried,’ Motherboard (13 April 2015), http://motherboard.vice.com/read/the-new-system-uber-is-implementing-at-airports-has-some-drivers-worried, archived at https://perma.cc/CV8P-EM7U; ‘10 minute timeout,’ Uber People (1 March 2016), http://uberpeople.net/threads/10-minute-timeout.64032/, archived at https://perma.cc/AS3C-94EP. As part of a recent settlement in the United States, drivers there now enjoy marginally more clarity, even though temporary deactivation for low acceptance rates is still explicitly mentioned: Uber, ‘Uber community guidelines,’ http://www.uber.com/legal/deactivation-policy/us/, archived at https://perma.cc/8MR4-GFDL. In other cities, temporary deactivation has been replaced by a simple logout.
 Citing Michel Foucault, Discipline and Punish: The Birth of the Prison (ed. Alan Sheridan, Vintage Books 1979), 201.
 B Waas and G Heerma van Voss (eds), Restatement of Labour Law in Europe: Volume I (Hart 2017).
 S Deakin and F Wilkinson, The Law of the Labour Market: Industrialization, Employment, and Legal Evolution (OUP 2005) 15, 86-7.
 P Davies and M Freedland, ‘The Complexities of the Employing Enterprise’ in G Davidov and B Langile (eds), Boundaries and Frontiers of Labour Law (Hart 2006).
 For an overview, see eg E Albin and J Prassl, ‘Fragmenting Work, Fragmented Regulation: The Contract of Employment as a Driver of Social Exclusion’ in M Freedland et al (eds), The Contract of Employment (OUP 2016) 209.
 D Weil, The Fissured Workplace: Why Work Became so Bad for so Many and What Can Be Done to Improve it (Harvard University Press 2014).
 See e.g. Autoclenz Limited v Belcher and ors [2011] UKSC 41; A Bogg, ‘Sham self-employment in the Supreme Court’ (2012) 41 Industrial Law Journal 328.
 International Labour Organisation (ILO), Non-Standard Employment around the World: Understanding Challenges, Shaping Prospect (Geneva 2016).
 H Collins, ‘Independent Contractors and the Challenge of Vertical Disintegration to Employment Protection Laws’ (1990) 10 Oxford Journal of Legal Studies 353.
 ILO, Regulating the Employment Relationship in Europe: A Guide to Recommendation No 198 (Geneva 2013) 33.
 Aslam and Farrar v Uber, Case No. ET/2202550/2015, at para  (London Employment Tribunal, Judge Snelson).
 The reality of litigation and enforcement will of course be significantly more complex: J Prassl, The Concept of the Employer (OUP 2015) ch 5, ch 6.