Some people think their managers lack intelligence, but what if that ‘manager’ is artificially intelligent? What if the manager is an app? A recent study shows that AI may be ready to fill management roles in certain situations, but the jury is still out on how fair this could be for employees.
The study, conducted by Lindsey Cameron, a professor at the Wharton School of the University of Pennsylvania, looked at an existing example of workers supervised by an AI-powered manager: ride-hail drivers who work through apps like Uber or Lyft. Beyond scheduling and payments, algorithmic management can cover a wide range of managerial tasks, Cameron noted in a related interview published by Wharton – “anything that has to do with hiring, firing, evaluating or disciplining employees.”
While mechanized management may seem inhumane and lacking in empathy, it works well for some roles. Ride-hail drivers, for example, for the most part actually enjoy working with their AI-powered apps.
Unlike with human managers, communication between AI-powered apps and employees is constant. “During a normal shift, a ride-hail driver may only make a dozen trips, but he will have more than a hundred unique interactions with the algorithm,” Cameron said.
AI-powered managers also offer the flexibility and responsiveness seen in gig work. “Surprisingly, many employees report finding comfort and choice while working under algorithmic management,” Cameron wrote in her study. “If you talk to most people who do ride-sharing or other app-based work, most like it or at least think it’s better than their alternatives,” she added in the Wharton interview.
Cameron’s findings are based on a seven-year qualitative study of ride-hail drivers working under algorithmic management. She found that these employees use two types of tactics on the job: “With engagement tactics, individuals generally follow the algorithmic nudges and do not attempt to circumvent the system. Deviance tactics involve individuals manipulating their input into the algorithmic management system.”
Engagement and deviance tactics “both lead to acquiescence or active, enthusiastic participation by employees to align their efforts with management’s interests, and both contribute to employees seeing themselves as capable agents,” she noted in the research. “However, this choice-based consent can mask the more structurally problematic elements of the job,” she warned, calling this a “good-bad-job” scenario.
For example, in algorithmically managed warehouse work, “employees are often pushed to their limits” without the empathy of a human manager. “How do you reason with an algorithm?” Cameron asked.
“Think of Amazon’s warehouse workers or the person at the checkout at your grocery store,” she said. “There’s probably an algorithm that counts how quickly they scan items and evaluates their performance. Think about the emails and text messages you receive asking you to rate an employee you’ve interacted with. And let’s not forget that we are now asked to tip after every service transaction: you can be assured that information is recorded and used as a performance indicator.”
The rise of algorithmic managers also extends far beyond manual tasks. “Algorithms are becoming embedded in work across professions, sectors, skill levels and income levels,” Cameron said.
White-collar or professional workers are also increasingly subject to algorithmic management. “We see a wide range of new tools, technology and digitalization under the future of work,” Cameron said. Look no further than the surveillance of remote workers that took place during the COVID-19 period, “with the introduction of tools that could track your keystrokes and whether you were active on your computer or Bloomberg terminal. When you perform customer-facing tasks, an algorithm tracks your ratings and reviews. There are algorithms that scan your email to make sure you’re not committing corporate espionage or telling offensive jokes.”
While the advance of AI into management roles is inevitable, Cameron urged human oversight of all AI-driven actions. Importantly, as noted in her research, employee consent is required. “Choice-based consent highlights the importance of constant, even if limited, choice as a mechanism that keeps workers engaged, especially in jobs that are considered low-quality,” she wrote.
“You have to have someone informed. You can’t have hard and fast evaluation limits. In some companies, an algorithm can fire you if you don’t meet your quota. Not only should that not happen, but there should also be an appeals process when decisions are made.”
“Don’t let the algorithm be stupid in principle,” she urged.