Navigating the Ethics of Agentic Artificial Intelligence


Introduction

Legal AI is crossing a threshold from tool to actor. The next generation of systems will not simply answer a prompt, draft a memo, or summarize a record. It will move through a workflow: reading documents, making selections, updating files, preparing communications, and teeing up decisions before a lawyer ever sees the final product. That shift creates enormous leverage for legal teams, but it also changes the ethical risk. When AI can act, supervision cannot be limited to checking what it wrote. Lawyers must also control what it is allowed to do.

The Agentic Shift

Three years ago, the dominant risk in legal AI was hallucination. Remember Mata v. Avianca, in which an AI tool generated six entirely fake judicial opinions, complete with fabricated quotes and internal citations? That failure is like an architect who designs a beautiful home with photorealistic blueprints but never checks the building codes or calculates whether the structure will actually stand.

By 2026, the technology and the duty of supervision that goes with it have changed materially. We have moved past generative AI (systems that produced text on request) to agentic AI: in effect, a digital associate that can take action. Agentic AI acts as the foreman, not the architect. It takes instructions and finds a way to complete them. But that autonomy carries its own risks. Told to pour the foundation by Friday, the AI foreman discovers the concrete truck cannot arrive until Monday and, without checking back, hires a new crew at a fifty-percent premium to hold the deadline. The blueprints and instructions were sound, but the autonomy was not bounded. That is the supervision gap.

Legacy AI policies, designed to catch hallucinations, are no longer sufficient to govern a system that can take a dozen consequential actions (sending email, populating filings, modifying records) between status calls. To survive this shift, lawyers must move from being in-the-loop (exhausted by reactive checking) to being at-the-helm. This is the new ethical mandate.

The Non-Delegable Duty

Under ABA Model Rules 5.1 and 5.3, a lawyer’s duty to supervise is non-delegable. Whether the assistant is a first-year associate or an AI agent, accountability runs to the licensed lawyer who deployed it.

Comment 8 to Rule 1.1 has required lawyers to keep abreast of “the benefits and risks associated with relevant technology” since 2012. With agentic tools, that obligation no longer ends at knowing how to log in. Competence now requires the lawyer to understand the reasoning behind the output well enough to identify when it is wrong. If an agentic system decides to exclude a key document from a production set because it misapplied a privilege filter, “the algorithm made a mistake” is not a valid defense before a judge.

For an agentic workflow to be ethical, it must be grounded in truth. Defensible agentic AI draws on the firm’s own matter files, work product, and verified case law, not the open web; that access to past filings and client-specific data is what makes it both effective and accurate. An agent is only as ethical as the data it can access, and the quality of that data.

Moving to the Helm: Decision Tracing and Reasoning Logs

Being at-the-helm means shifting from a defensive posture to a proactive one. In-the-loop puts the lawyer downstream of the agent, reviewing output after the work has been done. But at-the-helm means setting the navigational parameters before the agent even starts.

Setting those parameters, and proving the agent stayed inside them, depends on decision tracing. ABA Formal Opinion 512 (July 2024) made the point directly: a lawyer who cannot explain how a generative AI tool reached its output cannot meaningfully supervise it. With agentic systems, that conclusion only sharpens: a black-box tool is one a lawyer cannot competently use, because the lawyer cannot competently supervise it. Professional-grade agentic systems now provide reasoning logs: a step-by-step paper trail of every autonomous action taken.
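To make that concrete, here is a minimal sketch of what a single reasoning-log entry might look like. It is illustrative only; the Python structure and every field name are assumptions, not any vendor’s actual format.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ReasoningLogEntry:
        """One autonomous step taken by an agent, recorded for later review."""
        timestamp: datetime
        matter_id: str        # client matter the action belongs to
        action: str           # what the agent did
        rationale: str        # the agent's stated reason for the step
        sources: list[str] = field(default_factory=list)  # documents relied on

    # The privilege-filter mistake from the previous section, as an entry a
    # supervising lawyer could catch on review (all IDs are hypothetical):
    entry = ReasoningLogEntry(
        timestamp=datetime.now(timezone.utc),
        matter_id="2026-0142",
        action="excluded DOC-8841 from the production set",
        rationale="matched privilege filter: attorney-client email thread",
        sources=["DOC-8841"],
    )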

This is a strategic advantage. When a client or court asks how a strategy was reached, the lawyer can point to a logged chain of reasoning that was set, reviewed, and verified. This transparency fulfills ethical obligations and builds a human fortress around a firm’s reputation.

The Traffic-Light Framework: A Practical Roadmap

Navigating the ethics of agentic AI requires a tiered approach to risk. Not every task warrants the same degree of autonomy. 

Green Light: Administrative Velocity

These are the tasks where agents can run. Intake routing, document organization, and basic scheduling are low-stakes administrative chores.

  • The Workflow: An agent sees an incoming email, identifies it as a new lead, creates a folder in the DMS, and flags it for a conflicts check (sketched in code after this list).
  • The Outcome: We reduce administrative drag without risking a malpractice suit. The ethical stake is low because no legal judgment is being exercised.
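As a rough illustration of how little authority this tier needs, here is a minimal Python sketch of that intake flow. The keyword heuristic and the DMS represented as a plain dictionary are stand-ins, not a real product API.

    NEW_LEAD_KEYWORDS = ("consultation", "represent me", "potential claim")

    def looks_like_new_lead(subject: str) -> bool:
        """Crude placeholder heuristic; a real agent would use a classifier."""
        return any(k in subject.lower() for k in NEW_LEAD_KEYWORDS)

    def handle_incoming_email(sender: str, subject: str, dms: dict) -> str | None:
        """Green-light intake: organize and flag, but exercise no legal judgment."""
        if not looks_like_new_lead(subject):
            return None
        folder = f"leads/{sender}"
        dms.setdefault(folder, [])                            # create the DMS folder
        dms.setdefault("conflicts_queue", []).append(folder)  # flag for human check
        return folder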

Yellow Light: Collaborative Intelligence

This is the heart of the modern firm. These are tasks like drafting research memos, medical chronologies, or demand letters. The general rule here is AI proposes and human disposes.

  • The Workflow: The agent performs a cross-document analysis of 10,000 pages of medical records and drafts a chronology. However, before that chronology is finalized, the attorney uses reasoning logs to verify the fact extraction.
  • The Safeguard: Every claim made by the agent must be hyperlinked back to the source document in the firm’s system of record. If there is no click to verify, don’t use the output. A minimal version of that check follows below.
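Here is a sketch of that check, assuming each extracted claim carries a source_id and the system of record exposes the set of known document IDs (both names are hypothetical):

    def verify_chronology(claims: list[dict], record_ids: set[str]) -> list[dict]:
        """Enforce 'no link, no use': reject any claim without a traceable source."""
        unsourced = [c for c in claims if c.get("source_id") not in record_ids]
        if unsourced:
            raise ValueError(f"{len(unsourced)} claim(s) lack a verifiable source")
        return claims

    chronology = [
        {"fact": "MRI performed 2024-03-02", "source_id": "MED-0117"},
        {"fact": "Surgery recommended",      "source_id": "MED-0242"},
    ]
    verified = verify_chronology(chronology, record_ids={"MED-0117", "MED-0242"})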

Red Light: The Human Fortress

These are the no-go zones where the agent is strictly prohibited from taking autonomous action.

  • The No-Go Zones: Final settlement negotiations, “bet the company” matters, appearing in court (even via AI-generated scripts), or making the final call on a case-ending strategy.
  • The Rationale: These decisions carry the moral and legal weight of a person’s liberty or a company’s future; they cannot be reduced to a probability set. Preserving public trust in the legal profession requires that the final decision always rest with a human being who can be held accountable, and that will remain true for the foreseeable future. A guard of this kind is sketched below.
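One way to make that prohibition mechanical rather than aspirational is a hard deny list checked before any action executes. A minimal sketch, with illustrative task names:

    RED_LIGHT_TASKS = {
        "settlement_negotiation",
        "court_appearance",
        "case_ending_strategy",
    }

    class HumanRequiredError(Exception):
        """Raised when an agent attempts a task reserved for a human lawyer."""

    def execute(task: str, action) -> None:
        """Run an agent action only if the task is outside the no-go zones."""
        if task in RED_LIGHT_TASKS:
            raise HumanRequiredError(f"'{task}' requires a human decision-maker")
        action()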


Implementation: Building for the Future

An AI policy that addresses only generative tools is already incomplete. Governance must anticipate agentic action: which tasks an agent may initiate, which require attorney sign-off before execution, and what records must be preserved when it acts. New associates should be trained to audit agent reasoning with the same rigor they apply to a junior colleague’s memo.

Firms that operate this way will carry larger caseloads with fewer errors and produce work that survives scrutiny from clients, opposing counsel, and the bench. That, more than any productivity claim, is the competitive edge.

Conclusion: Supervision Is the Practice

Used well, agentic AI lets lawyers spend less time on procedural load and more on strategy and advocacy. But the technology does not change the basic deal: a human lawyer holds the license, and a human lawyer answers for the work. The firms that build their workflows around that fact, with defined scope, decision traces, and a human sign-off on anything consequential, are the firms that clients will trust to use these tools without giving up the judgment for which they hired a lawyer.

Create an Agentic AI AUP: 5 Simple Steps

To build an operating manual for a firm’s AI agents, here is a checklist for a 2026-ready Acceptable Use Policy (AUP):

1. Define Agentic Scope

Clearly list which software tools are agentic (able to take multi-step actions) versus generative (static output). Assign a risk level (Green, Yellow, Red) to every approved task within those tools.
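One way to encode that scope is as data the firm’s systems can enforce rather than prose they can ignore. A minimal sketch, with hypothetical tool and task names:

    TOOL_REGISTRY = {
        "intake_agent": {
            "kind": "agentic",
            "tasks": {"route_email": "green", "schedule_call": "green"},
        },
        "research_agent": {
            "kind": "agentic",
            "tasks": {"draft_memo": "yellow", "negotiate_settlement": "red"},
        },
        "brief_drafter": {
            "kind": "generative",
            "tasks": {"summarize_record": "yellow"},
        },
    }

    def risk_level(tool: str, task: str) -> str:
        """Look up the approved tier; anything unlisted defaults to red."""
        return TOOL_REGISTRY.get(tool, {}).get("tasks", {}).get(task, "red")

Defaulting unknown tasks to red means a new capability is prohibited until someone affirmatively approves it, which is the safer failure mode.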

2. Mandate the Review & Release Gate

Establish a hard rule: No AI agent may send an email to a client, opposing counsel, or the court without a human release click. This protects the firm from being bound by representations the agent was never authorized to make.
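Here is a sketch of what that gate can look like in code, assuming outbound messages wait in an outbox until a named lawyer releases them (the message shape and function names are hypothetical):

    OUTBOX: list[dict] = []

    def queue_outbound(recipient: str, body: str) -> dict:
        """Agents may draft and queue, but never send."""
        msg = {"to": recipient, "body": body, "released_by": None}
        OUTBOX.append(msg)
        return msg

    def release(msg: dict, lawyer_id: str) -> dict:
        """Only a human release click authorizes sending, and the click is logged."""
        msg["released_by"] = lawyer_id
        return msg  # a real system would hand the message to the mail server here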

3. Establish Data Grounding Rules

Explicitly forbid the use of public or open AI for any task involving PII (Personally Identifiable Information) or other regulated or sensitive information. Require all agentic work to be done within a private VPC (Virtual Private Cloud) or an enterprise-grade environment that does not train on firm data.
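A sketch of how that rule can be enforced at the routing layer, with a hypothetical internal endpoint standing in for the firm’s approved environment:

    APPROVED_ENDPOINTS = {"https://agents.internal.firm.example"}  # private VPC

    def route_task(endpoint: str, contains_pii: bool) -> str:
        """Refuse to send regulated data anywhere outside the approved environment."""
        if contains_pii and endpoint not in APPROVED_ENDPOINTS:
            raise PermissionError("PII may only be processed in the approved VPC")
        return endpoint  # safe to dispatch the task to this endpoint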

4. Create an Audit Trail Requirement

Require associates to save the reasoning log or thought trace of high-stakes AI tasks into the client’s permanent file. If the work is ever questioned, the firm must be able to prove how the AI reached its conclusion.
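A minimal sketch of that requirement, assuming the reasoning log arrives as a list of structured entries and matter files live on a shared path (both are assumptions, not a prescribed layout):

    import json
    from pathlib import Path

    def archive_reasoning_log(matter_dir: str, task: str, log: list[dict]) -> Path:
        """Save the agent's step-by-step trace into the client's permanent file."""
        path = Path(matter_dir) / "ai_audit" / f"{task}.json"
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(json.dumps(log, indent=2))
        return path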

5. Standardize Billing Descriptions

Provide clear templates for how to bill for AI-assisted work.


