BY AJ HESS
Many workers are afraid of artificial intelligence—and for good reason. Some experts say the technology has the power to reshape, if not eliminate, workers’ daily tasks, jobs, and privacy. When Fast Company polled LinkedIn users about what they think is the most important workplace issue of 2024, 36% said that the ethical integration of AI was their top concern.
Harvard Law School’s Center for Labor and a Just Economy, or CLJE, recently released a report on the challenges facing workers in the age of AI, prefacing it by noting that its findings do not “come from a fear of technology or its myriad uses.”
Instead, the report details the rise of “increasingly sophisticated tech-enabled management and production tools to track human activity” and provides actionable steps leaders can take to build a “proactive model of worker participation that future-proofs labor law, tech policy, and democracy.”
Here are the report’s recommendations for how to protect workers in the age of AI:
1. Mandate an AI Impact Monitor, elected in every workplace where AI is used to monitor, track, surveil, or assess workers. The goal is to ensure that workers in every workplace have access to a knowledgeable person who can provide accurate information about substantive AI safety issues and legal rights; help with worker reporting and whistleblowing on these issues; and serve as an information hub for workers, regulators, and the public.
2. Create sectoral commissions, consisting of representatives of labor and management, that would negotiate baseline AI safety standards for all firms in the sector. These would be minimum standards across the sector and would be enforced through the work of the impact monitors.
3. Mandate access to a human being when an algorithm makes a status-altering decision, such as firing.
4. Ban employers from using AI to advocate against collective bargaining rights, including a ban on embedding messages about workers’ exercise of their collective bargaining or concerted-activity rights in any AI-driven interface that workers are required to use to accomplish work tasks.
5. Require meaningful transparency and access to information about the technologies being used to monitor, manage, and surveil workers.
6. Require companies to provide a safe digital communications channel. Specifically, the report recommends ensuring that whatever technology the employer uses to communicate with workers be made available to workers for their own organizing activities, and that the law require employers to establish digital meeting spaces (i.e., private forums for online communication).
7. Appropriately classify workers so that gig workers can access their rights to redress grievances, organize as needed, and seek protection under current labor law and any AI-specific protections.
According to the report, the most common ways companies currently use AI include task monitoring, in which AI tracks employee activities and provides real-time updates; time tracking, in which AI monitors how much time workers spend on tasks and ensures they adhere to deadlines; and employee surveillance, in which AI-powered tools monitor how employees communicate with one another.
As an example of these common applications of AI, the CLJE team points to Humanyze, a tech company that has equipped employees with microphones (which listen to workers’ conversations), Bluetooth and infrared sensors (which track where workers are), and accelerometers (which detect workers’ movements through vibration).
The researchers also reference Amazon in the report, noting that many of the company’s warehouse workers have their movements, task speed, and productivity tracked.