By Ridwaan Boda, Alexander Powell and Aobakwe Motebe
Rethinking governance in the age of AI: A legal and operational imperative for organisations
Artificial Intelligence (“AI”) is no longer a theoretical buzzword; it has rapidly become an integral part of many organisations’ operations, production, and supply chains. While organisations have embraced AI tools for productivity and efficiency, the reality is that internal governance and legal frameworks have lagged behind. Many organisations do not have an internal governance function dedicated to governing and regulating the approval and use of AI systems within the organisation. A failure to govern internal staff use of AI systems can lead to a situation known as shadow AI, in which an organisation has no oversight, control, or management of (i) which AI systems are used; (ii) how these AI systems are used internally; and (iii) what data and information are shared with such AI systems. Failure to implement proper governance processes could expose the organisation to both reputational and legal risks.
Many existing organisational contracts and policies were drafted before AI became widespread, and in today’s landscape, this anachronism presents growing legal and operational risks for organisations that cannot be overlooked or left to chance.
Outdated internal policies: A blind spot in AI governance
Internal policies that were drafted and implemented prior to the AI ‘boom’ often fail to address the unique challenges posed by modern AI technologies, creating significant internal governance risks. Organisations must continuously review and amend their internal policies to keep pace with evolving AI standards, best practices, and ethical and regulatory frameworks. Corporate governance frameworks, including IT policies, acceptable use policies, data protection guidelines, and cybersecurity protocols, which were developed without AI in mind, often fail to address, inter alia:
- the approval, deployment and use of generative AI tools by employees;
- the risks posed by data and (personal) information inputted into AI systems, including the leakage of confidential, proprietary, and personal information to AI models;
- the use of AI-generated outputs in business-critical functions; and
- liability for decisions influenced by AI recommendations or outputs.
Why organisations need a bespoke AI Policy
A one-size-fits-all approach to AI governance is no longer sufficient. Each organisation operates within a unique context defined by its industry, data practices, risk tolerance, and ethical values. A bespoke AI Policy ensures that AI systems are aligned with specific business goals, regulatory requirements, and stakeholder expectations. It also empowers teams to manage AI responsibly, fostering innovation while minimising unintended consequences and risks. Tailored policies are not just a safeguard but serve as a strategic asset in the age of AI systems. AI governance policies should not replace existing governance documents but cross-reference and supplement them, integrating with acceptable use, cybersecurity, procurement, and other relevant policies. It is also important to ensure that existing policies are amended to reference the AI governance policy.
Reviewing legacy contracts: MSAs and AI risk allocation
Many Master Services Agreements (“MSAs”), procurement contracts, and technology service agreements in use today were negotiated before the rise of AI and do not contain contractual provisions addressing the key risks introduced by the use of AI systems. This creates a material gap in how legal risk is allocated when service providers integrate AI into their service offerings (often without disclosure to their customers).
Some of the key risks that legacy contracts should be updated to address include:
- Hallucination: AI-generated outputs can be incorrect, misleading, or biased. Customers must ensure that their contracts with service providers include contractual provisions requiring service providers to, inter alia, (i) notify them of their use of AI systems; and (ii) implement human-in-the-loop reviews to fact-check and vet any outputs generated by AI systems.
- Automated decision-making: Where services involve algorithmic decisions (for example, in HR, credit scoring, or fraud detection), organisations must comply with laws governing automated processing, such as the Protection of Personal Information Act, 2013.
- Undisclosed use of AI to cut costs: Some service providers may quietly adopt AI tools to automate service delivery, reducing internal costs significantly, but continue to invoice clients at legacy rates premised on manual work or human expertise, resulting in clients overpaying for AI-delivered services.
- Bias and discrimination risks: If a service provider’s AI systems are trained on flawed or biased data, outputs could perpetuate discrimination or inequity, especially in sectors such as recruitment, insurance, or lending. This could expose the client to reputational damage or legal liability.
Reviewing contractual templates: Are you contracting in the AI age?
Standard templates, including non-disclosure agreements, Master Services Agreements, software development agreements, Software-as-a-Service contracts, and even employment contracts, should be updated to reflect:
- Approval of use of specific AI tools;
- Ownership of AI-generated content and outputs;
- Data usage rights, especially where client data is inputted into service provider AI systems;
- Limitation of liability provisions that cap the customer’s liability;
- Service provider indemnities and warranties, addressing, inter alia, compliance with applicable laws, human-in-the-loop reviews, and bias identification and mitigation; and
- Service provider's obligation to disclose AI use in delivering deliverables or services.
Organisations should audit existing contractual templates to ensure alignment with business expectations for the use and deployment of AI systems, whilst mitigating the risks introduced by such AI systems.
Are customers aware of what they are paying for?
Many organisations may not realise that their service providers are already using AI and incorporating outputs generated by such AI systems into deliverables and services. AI-driven solutions can significantly reduce operational costs, but where these efficiencies are not passed on to the customer, the customer may be overcharged for services without sharing in the benefit of the AI-driven savings. Customers should therefore be aware of any service provider's use of AI systems in the rendering of services, and such use should be governed by appropriate contractual terms in the relevant service provider agreements. This is another critical reason why legacy contracts need to be updated.
AI will not wait for your policies and contracts to catch up
AI adoption is moving rapidly, far faster than internal governance structures can manage it. To stay legally compliant, competitive, and in control, organisations must ensure that they (i) review and amend their existing policies and implement an AI Governance Policy; and (ii) review and amend their service provider agreements to include AI-specific clauses. A proactive legal strategy in the AI space is no longer a nice-to-have; it is a competitive and regulatory necessity.
For more information or advice on AI governance, please contact our experts:
Ridwaan Boda
Executive | Technology, Media and Telecommunications
Alexander Powell
Associate | Technology, Media and Telecommunications
Aobakwe Motebe
Candidate Legal Practitioner | Technology, Media and Telecommunications