Following my recent blog about AI and the GC, a further question came up about how to work with counterparties that use AI tools. The recent development of large language models (LLMs) — the artificial intelligence (AI) agents or bots now supporting analytical work and contract negotiations — is a challenging phenomenon. It’s not because they are always wrong – they aren’t – but because they are often just right enough that budget-conscious consumers of legal services can grow dependent upon them. Leaving aside the implications for those users for further discussion, I’m interested in exploring another challenge – how should business professionals, or their lawyers, respond to their counterparties’ use of the tools?
Sorry to say it, but the answer cannot be “don’t use the tools.” Pandora’s box is open. A more thoughtful approach is required. So here are some tips:
- Be transparent – If someone presents you with something that looks like AI guidance, ask them if that’s the case. An honest conversation can come only if the parties have a common baseline of facts.
- Provide context – Context was always useful, whether the exchange was attorney-to-attorney or business person to business person; but AI dependency is real, and when you provide feedback to someone using AI tools, remember AI’s limitation: the output is only as good as the prompts it receives and the information it was trained on. You are unlikely to negotiate effectively with someone using AI by just stating an unreasoned position; but you can provide the context for your position. If properly prompted, the counterparty’s AI will incorporate the context you provide.
- Manage expectations – When you know your counterparty is using AI tools to evaluate what you send them, consider what their likely responses might be. That might mean running it through AI yourself; or it might just mean thinking it through or discussing it with your attorney.
- Remember the goal line – The goal is not to out-think your counterparty or “beat” their AI tools. The goal should be objective-oriented. In the end, regardless of the tools used, both you and your counterparty need to come to a common understanding for a deal to be reached.
- Recognize that AI is likely to slow down the process – AI may be just as likely to slow down the process as to speed it up, due partly to LLMs’ inherent tendency to generate a large volume of text quickly. Has any LLM ever given a one-sentence response to anything, like “this looks good enough”? (Prompt dependent, of course.) Perhaps merely recognizing that fact will allow you and the counterparty to come to a common understanding by (a) setting up calls to talk out issues, (b) establishing parameters on how much back and forth is acceptable, or (c) finding other ways to minimize the AI speedbump.
- Note where AI may function better – Further to the last point, while two business professionals negotiating a contract on their own would have to digest the full volume of content AI can produce, two skilled attorneys using AI can assist those professionals by distilling it into a summary more useful than the AI’s own “TL;DR” version.
I’ve asked this question from the perspective of the person receiving AI-generated content from their counterparty in contract negotiations, but of course you may be the person generating that content. My tip there is “use a lawyer.” Maybe not exclusively, but be mindful of the attorney-client privilege if you run legal advice through AI tools. A lawyer will serve your interests, cut to the chase hopefully faster than the AI tools, and help you stay objective-oriented in your negotiations. It is likely that your lawyers are using AI tools now, too, which can be helpful in some contexts; but the skills for supporting clients in contract negotiations are most often enhanced by the intelligence, empathy, and experience that AI still cannot replicate.