Navigating AI service contracts
Organisations increasingly rely on the commercial advantages of artificial intelligence to model risk, generate synthetic data, create analytical reports and even write computer code.
However, the use of AI services is not without risk. Most AI solutions are provided on a “software as a service” basis, and the risk of using them often rises in direct proportion to the complexity of the solution offered.
Since a well-drafted contract remains the dominant risk management tool for procuring AI services, the complexity of AI service contracts will also increase with the complexity of the AI services being purchased.
The following are a few examples of AI service contracting issues to keep in mind to effectively manage those risks and to avoid commercial and legal pitfalls.
Many AI solutions are developed to provide a competitive commercial advantage over organisations that lack such capabilities. However, if open source software is used to develop the AI solution, the open source licences under which it was created can carry significant downsides.
First, the source code for the software may have to be “open” and viewable by anyone, which would negate any desired competitive commercial advantage.
Second, the licence may not include any indemnities for third-party infringement, thus possibly exposing the user to liability in the face of any such claims.
Third, the developer might have to offer any modifications or derivative works of the developed software to others under the same “viral” open source licence terms, and without compensation. An early, automated audit of the solution’s software dependencies, sketched below, can help surface those licence obligations early.
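Purely as an illustration, the following Python sketch shows one way a development team might flag dependencies distributed under common copyleft licences. The watch-list of licence markers is a hypothetical example, and package metadata is frequently incomplete, so dedicated audit tooling and legal review remain essential.

    # Illustrative sketch only: audit installed Python dependencies and flag
    # licences commonly regarded as "copyleft" or viral. Package metadata is
    # often incomplete, so this is a first pass, not legal diligence.
    from importlib.metadata import distributions

    # Hypothetical watch-list; which licences matter depends on the use case.
    COPYLEFT_MARKERS = ("GPL", "AGPL", "LGPL", "MPL", "EUPL")

    def flag_copyleft_dependencies():
        flagged = []
        for dist in distributions():
            name = dist.metadata.get("Name", "unknown")
            licence = dist.metadata.get("License") or ""
            classifiers = " ".join(
                value for key, value in dist.metadata.items()
                if key == "Classifier"
            )
            if any(m in licence or m in classifiers for m in COPYLEFT_MARKERS):
                flagged.append((name, licence or "see classifiers"))
        return flagged

    for name, licence in flag_copyleft_dependencies():
        print(f"Review licence terms for {name}: {licence}")

However the audit is performed, the contractual point is the same: the agreement should require the vendor to disclose every open source component in the solution and the licence terms that attach to it.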
AI service customers should also ensure that the quality, reliability and expected outputs of the service are clearly defined in the AI service agreement.
Failure to do so is one of the leading causes of litigation concerning all technology service transactions, and the need to contractually identify the service level specifications for AI solutions is no different.
A failure to stipulate those outcomes contractually may impede your right to claim that the AI service you are paying for is not the quality of solution you were promised.
The risks associated with failing to operationally define the AI service are compounded by the fact that the quality of AI service can be difficult to verify and assess.
For that reason, AI service agreements routinely include acceptance testing provisions that allow the user to verify the quality of the AI’s functionality before the contract’s commercial and financial obligations take effect. That way, the agreed specifications about the AI’s deliverables can first be tested against the AI’s actual performance.
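By way of illustration only, the sketch below (in Python) shows how those agreed specifications might be reduced to measurable thresholds and compared with the results of a test run. Every metric name, threshold and result shown is a hypothetical placeholder; the real measures should mirror the specifications scheduled to the contract.

    # Illustrative sketch only: agreed service specifications expressed as
    # measurable thresholds, then compared with results from a test run.
    # All metric names and numbers are hypothetical placeholders.
    AGREED_SPECIFICATIONS = {
        "classification_accuracy": 0.95,  # minimum share of correct outputs
        "median_response_seconds": 2.0,   # maximum acceptable median latency
        "uptime_during_test": 0.999,      # minimum availability in the window
    }

    # Metrics where a lower measured value is better.
    LOWER_IS_BETTER = {"median_response_seconds"}

    def run_acceptance_test(measured: dict) -> list:
        """Return a list of failures; an empty list means acceptance."""
        failures = []
        for metric, threshold in AGREED_SPECIFICATIONS.items():
            value = measured.get(metric)
            if value is None:
                failures.append(f"{metric}: not measured")
            elif metric in LOWER_IS_BETTER and value > threshold:
                failures.append(f"{metric}: {value} exceeds {threshold}")
            elif metric not in LOWER_IS_BETTER and value < threshold:
                failures.append(f"{metric}: {value} is below {threshold}")
        return failures

    # Hypothetical results gathered from testing the vendor's service.
    results = {"classification_accuracy": 0.93,
               "median_response_seconds": 1.4,
               "uptime_during_test": 0.9995}
    for failure in run_acceptance_test(results):
        print("Acceptance failure:", failure)

The benefit of this approach is that acceptance becomes a mechanical comparison rather than an argument: either the measured results meet the scheduled thresholds or they do not.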
AI solutions may involve the wide collection of industry data, personal information and existing content (including intellectual property) from third parties. Therefore, AI service agreements should include robust indemnities to compensate the AI customer for any liability that may arise if the AI infringes any third-party right, whether contractual, intellectual property or privacy related.
Whether the data that the AI service relies upon is supplied by the vendor or the customer, the parties should be certain that there is an uninterrupted chain of legal right to use that data for the AI services.
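Again as an illustration only, a simple register along the following lines, sketched here in Python with hypothetical field names and entries, can help a customer or vendor document that chain of rights for each data source the AI service depends on.

    # Illustrative sketch only: a minimal record of the chain of legal right
    # for each dataset an AI service relies on. All field names and example
    # entries are hypothetical; a real register should be shaped by legal
    # advice.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DataRightsRecord:
        dataset: str            # what the data is
        source: str             # who supplied it (vendor, customer, third party)
        legal_basis: str        # licence, consent or contract clause relied on
        permits_ai_use: bool    # does the grant cover this AI service?
        expires: Optional[str]  # when the right lapses, if ever

    register = [
        DataRightsRecord(
            dataset="historical claims data",
            source="customer",
            legal_basis="data licence, clause 4.2 of the service agreement",
            permits_ai_use=True,
            expires=None,
        ),
        DataRightsRecord(
            dataset="scraped public web content",
            source="third party",
            legal_basis="unclear",
            permits_ai_use=False,
            expires=None,
        ),
    ]

    # Flag any dataset whose right of use for the AI services is unproven.
    for record in register:
        if not record.permits_ai_use:
            print(f"Gap in the chain of rights: {record.dataset} ({record.source})")

A register of this kind is no substitute for the underlying licences and consents, but it makes gaps in the chain visible before they become indemnity claims.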
Finally, the use of AI is now clearly on the radar of financial services regulators internationally. Consequently, the list of regulatory compliance obligations for IT systems, cybersecurity and related AI solutions is increasing.
On October 30, 2023, US President Joe Biden signed the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”, and the European Union has introduced the Artificial Intelligence Act to govern AI solutions on the basis of relative risk, data quality and accountability.
Similarly, Canada’s proposed Artificial Intelligence and Data Act (part of Bill C-27) may even require formal risk and impact assessments to be undertaken before AI can be used.
In Bermuda, AI services may already be subject to regulatory scrutiny and proportional risk assessment, including with regard to cyber and IT risk governance, privacy law compliance, outsourcing transactions, the use of cloud services and the need to undertake diligent risk management assessments.
Therefore, all AI service contracts should include provisions that make AI service providers responsible for those compliance requirements, including requirements to report cyber events, not to interfere with regulator investigations or audits, and to flow down the required security standards.
President Biden’s AI executive order states: “Harnessing AI for good and realising its myriad benefits requires mitigating its substantial risks.”
Most enterprises believe that the exceptional benefits of AI solutions far outweigh the additional diligence and effort that is necessary to manage the commercial, legal and regulatory risks of those solutions.
• Duncan Card is a partner at Appleby who specialises in IT and outsourcing contracts, privacy law and cybersecurity compliance in Bermuda. A copy of this column can be obtained at www.applebyglobal.com. This column should not be used as a substitute for professional legal advice. Before proceeding with any matters discussed here, persons are advised to consult a lawyer.