How is generative AI reshaping South African workplaces, and what legal challenges do employers and employees face in this evolving landscape?
The rapid adoption of generative AI technologies like ChatGPT has transformed South African workplaces since 2022, creating both unprecedented opportunities and complex legal challenges. This technological shift has prompted important conversations across various institutional settings, with educational institutions and the judiciary leading discussions on how to mainstream generative AI while maintaining ethical standards.
However, the workplace context has received notably less attention in these discussions. This oversight becomes particularly concerning when considering the fundamental question of employee liability for using generative AI tools, especially in environments where explicit workplace policies remain absent and AI-specific legislation has yet to emerge. This gap leaves employers and employees navigating uncharted territory, where the absence of specific guidance may force reliance on existing frameworks, primarily the Protection of Personal Information Act (POPIA) and the Labour Relations Act (LRA), to interpret the legal implications of generative AI use in professional settings.
POPIA has found unexpected relevance in the age of generative AI. While lawmakers could hardly have anticipated ChatGPT and similar tools, POPIA's principles may apply directly when employees use AI systems to process personal or confidential information about clients, colleagues, or third parties.
Section 4's requirements that processing be lawful, reasonable, and respectful of data subject rights take on particular significance in the AI context. These conditions typically demand informed consent or another lawful basis, such as contractual necessity or legitimate interest. The problem arises when employees input personal data into AI tools without proper authorisation or due regard for processing limitations and the rights of data subjects, thereby engaging in what POPIA defines as unlawful processing.
The individual employee's obligations intersect with broader institutional duties under Section 19, which requires organisations to implement appropriate security measures to safeguard the integrity and confidentiality of personal information. This creates a web of shared responsibility: employees who circumvent established protocols, whether through careless data uploading or deliberate workarounds, expose both themselves and their employers to liability for data breaches and privacy violations.
The practical consequence proves straightforward yet sobering. Unauthorised or negligent AI use that results in personal data exposure constitutes unlawful processing regardless of whether explicit AI policies exist. Employees may face disciplinary action and legal consequences, while their employers could confront concurrent regulatory penalties and civil liability, a dual exposure that many organisations have yet to fully grasp.
From a labour law perspective, the absence of AI-specific legislation creates no legal safe harbour for employees. Fundamental obligations under employment contracts and workplace policies remain intact, and the LRA provides employers with established mechanisms for addressing AI-related “misconduct.”
Employment contracts commonly include confidentiality clauses that prohibit unauthorised disclosure of proprietary or sensitive information, provisions that seem adaptable to the AI era if applied with caution. If an employee uses generative AI tools in a manner that transfers confidential data to unauthorised platforms or persons, this behaviour may breach contractual duties. Employers may then invoke disciplinary procedures under the LRA to address such misconduct, potentially leading to warnings, suspension, or dismissal, depending on the severity.
The scope of accountability extends beyond confidentiality to encompass intellectual property rights and workplace conduct standards. Misappropriation of IP through AI-generated content or failure to adhere to ethical guidelines regarding AI use provides legitimate grounds for employer action, even without explicit AI references in existing policies. However, employers may face challenges enforcing liability if no AI-specific rules exist and if employees claim ignorance of expectations.
The regulatory fog that has surrounded AI in South Africa is gradually lifting. The National AI Policy Framework, introduced in 2024, represents the government's first serious attempt to grapple with AI's dual nature: its extraordinary promise alongside genuine risks that cannot be ignored. At its heart, the framework insists on human oversight, ensuring that AI systems augment rather than replace human decision-making. It also encompasses broader measures of accountability and transparency in AI system operations, including proactive efforts to identify and mitigate algorithmic bias, as well as design principles that ensure AI outputs are understandable and interpretable by users.
The framework's most pragmatic promise lies in connecting AI governance to existing legal structures, particularly POPIA, rather than creating parallel regulatory systems. The policy also acknowledges an uncomfortable truth about South Africa's AI future: success depends heavily on developing local talent and expertise. Without substantial investment in that talent, the country risks becoming merely a consumer rather than a participant in the global AI economy.
The framework also shifts from a purely technical approach to an insistence that AI applications embody fundamental South African constitutional values. Perhaps more pointedly, this declaratory position affirms that South Africa will not simply adopt AI technologies as they emerge, but will assume responsibility for shaping their implementation within the broader South African context.
South African organisations face a pressing dilemma: while lawmakers struggle to keep pace with AI's rapid evolution, businesses cannot simply wait for regulatory clarity. The starting point is defining acceptable AI use within the workplace. Rather than imposing blanket restrictions, effective governance distinguishes between applications that genuinely enhance productivity and those that introduce unacceptable legal or operational risks.
Data protection presents the most immediate concern. POPIA's existing requirements make the casual uploading of sensitive information to public AI platforms a significant compliance hazard. Organisations that require explicit authorisation before any confidential data reaches AI systems not only reduce their legal exposure but also demonstrate responsible stewardship of stakeholder information.
Employee education proves equally crucial in this environment. Staff who understand the intersection between AI capabilities and legal obligations make fundamentally better decisions. Training that addresses privacy requirements, intellectual property implications, and the ways algorithmic bias can infiltrate business processes creates a workforce equipped to navigate uncertainty.
Importantly, workplaces also benefit from establishing clear channels for reporting AI-related concerns, whether these involve misuse, security incidents, or unexpected system behaviour. This approach must be supported by comprehensive compliance management frameworks and risk mitigation strategies that enable early detection of issues before they escalate.
* Sikhosonke Mayekiso is an attorney currently employed as a state law advisor at the Department of Justice and Constitutional Development.
** The views expressed do not necessarily reflect the views of IOL or Independent Media.