
AI contracting – what do you need to know?

In the first of our ‘key takeaways’ posts from our recent Digital Forum conference, we summarise the AI contracting panel session.

During the conference, our experts offered practical tips on what to look out for when commercialising AI, particularly in terms of contractual risk in development agreements and licences. Taylor Wessing partners Chris Jeffery (London) and Otto Sleeking (Amsterdam) were joined by Genevieve Perez of Sheppard Mullin, who provided a US perspective.

The panel highlighted key factors to address in development contracts, including:

  • IP – address ownership of AI models and output and usage rights.
  • Development and testing – describe the development methodology, the frameworks and technologies to be used, and the testing and acceptance procedures and criteria.
  • Data management – specify training data sources, ongoing management of the AI model and retraining with new data.
  • Algorithm transparency – include a requirement to provide explanations for outputs.
  • Model performance, accuracy and scalability – define performance benchmarks and validation procedures, and address how the AI system will scale and adapt.
  • Regulatory compliance – consider the emerging regulatory framework including the EU's AI Act and embed compliance in the contract where required.

Key factors to address in service and licence agreements included:

  • Service description – clearly describe the system's functionalities, technologies involved and how personal and company information will be used.
  • Scope – define use rights and limitations.
  • IP – address ownership of input data and outputs.
  • Data – specify types and sources of data, any restrictions on use, and ownership. Ensure compliance with personal data and cyber security regulation.
  • Bias – address issues of bias and fairness, including through due diligence and by providing audit rights.
  • Regulatory compliance – the contract may need to address a range of regulatory areas (IP and data as well as AI-specific regulation), so ensure each is covered off.

Attendees were reminded that when contracting to use externally developed AI, even where customised, there is unlikely to be much room for negotiation, so the procurement process will focus on internal risk assessment. Questions businesses need to answer for a generative AI product will include:

  • Are we getting the rights we need?
  • Are we giving too wide a licence back, eg to our data or outputs? Can we restrict what the supplier does with our data?
  • Are we protected against third party infringement claims? Many developers/suppliers now provide indemnities covering how models were trained and the ownership of outputs. For customers (and pending the outcome of ongoing legal cases), the output is likely to be the most important issue.

Finally, the panel discussed how to cut through the bewildering array of legal issues in practical terms, highlighting the following:

  • Ensure teams are multi-disciplinary – not just lawyers, but people from the business units where the AI tool will be used. Understand the practical impact if the tool doesn't work well.
  • Build vendor management and supply chain risk issues into your processes.
  • Change management procedures – don't fall into the trap of letting your business think you've approved the whole tool rather than particular use cases. Put in place ongoing audit and usage monitoring to ensure the AI is not used for new purposes that change the risk profile.
  • Upskill your teams – you need to be able to explain both the technical and legal considerations in layman's terms, and you need someone on each of the legal, compliance and engineering teams who understands AI.

Tags

technology media & communications, information technology, artificial intelligence & machine learning, ai