AI, Privilege, and Personal Injury Litigation: What United States v. Heppner Could Mean for Canadian Cases
Artificial intelligence is rapidly becoming part of everyday legal practice and, increasingly, part of litigants’ everyday lives. A recent U.S. decision, United States v. Heppner, is an early signal that courts are beginning to grapple seriously with how generative AI tools affect longstanding legal doctrines such as solicitor-client privilege and litigation confidentiality.
While Heppner is a U.S. criminal decision, the issues it raises are directly relevant to civil litigation in Canada, particularly plaintiff-side personal injury litigation, where sensitive medical, employment, and financial information is routinely exchanged between parties.
Below, we explain what the decision says, why it matters, and what risks may arise in Canadian personal injury litigation as AI becomes more widely used by self-represented litigants, insurers, and even experts.
What Happened in United States v. Heppner?
In United States v. Heppner, a defendant used a publicly available generative AI tool to help develop legal arguments and defence strategies. He later claimed the AI-generated materials were protected by attorney-client privilege and litigation work product protections. The court disagreed. Judge Rakoff of the Southern District of New York concluded that communications with a consumer AI platform did not meet the requirements of privilege because the communications were not confidential and were effectively disclosed to a third party.
The reasoning was straightforward: attorney-client privilege requires confidential communications between lawyer and client for the purpose of obtaining legal advice. Where information is shared with an external platform that may store, process, or use the data, courts may find that confidentiality has been lost. Some commentary has noted that publicly accessible AI tools often reserve the right to collect and use user inputs, meaning users may not have a reasonable expectation of privacy when sharing sensitive information with those platforms.
The takeaway is not that AI can never be used in litigation. Rather, courts are likely to apply traditional confidentiality principles to new technologies: if privileged information is disclosed to a third party, privilege may be waived, even if that third party is an AI system.
Why This Matters for Canadian Personal Injury Litigation
Although Canadian courts have not yet addressed this precise issue, the underlying legal principles are familiar. In Canada, solicitor-client privilege likewise depends on confidential communications between lawyer and client made for the purpose of obtaining legal advice.
The broader lesson from Heppner is that courts are unlikely to create a special category of “AI privilege.” Instead, they will likely apply existing rules governing waiver, confidentiality, and disclosure. This creates several potential risks in a personal injury case.
Personal injury claims frequently involve highly sensitive documents, including:
medical records
employment files
income information
surveillance evidence
expert reports
discovery transcripts
If a party uploads these materials into a consumer AI platform, privilege or confidentiality protections may be compromised.
The Risk to the Deemed Undertaking Rule
One particularly important consideration in Ontario civil litigation is the deemed undertaking rule. Under Rule 30.1 of the Rules of Civil Procedure, parties who receive documents in discovery are generally prohibited from using them for any purpose outside the litigation.
As one of our lawyers recently noted:
“AI-assisted self-represented defendants uploading our clients’ documents into AI platforms could potentially create a breach of the deemed undertaking rule.”
This concern is not theoretical. Self-represented litigants are increasingly turning to AI tools to help them understand legal documents or generate submissions. If accident benefits files, clinical records, or expert reports are uploaded into AI systems that retain or process that data, there is a real question whether the deemed undertaking rule has been breached. Canadian courts have historically taken a strict approach to the misuse of discovery documents, particularly where privacy interests are at stake.
Strategic Implications for Plaintiff Injury Lawyers
From a plaintiff’s perspective, Heppner highlights the importance of anticipating how AI tools may affect litigation strategy.
Some emerging considerations include:
1. Protecting Client Confidentiality
Clients may not appreciate that uploading documents into AI tools could expose sensitive information. Lawyers should consider proactively educating clients about appropriate and inappropriate uses of AI.
2. Monitoring Discovery Use
Where opposing parties are self-represented, courts may increasingly need to address whether uploading discovery documents into AI platforms violates the deemed undertaking rule.
3. Updating Litigation Protocols
Law firms may wish to develop internal policies governing AI use, including guidance on what information can safely be entered into AI systems.
4. Expert Evidence Risks
Experts may use AI tools for research or drafting. Counsel may need to consider whether this affects privilege or disclosure obligations relating to draft reports.
5. Privacy and Regulatory Considerations
Canadian privacy law may also become relevant where personal health information is uploaded to platforms hosted outside Canada.
AI Is Not Going Away, But the Law Is Catching Up
The most important lesson from United States v. Heppner is that new technology rarely creates entirely new legal principles. Instead, courts adapt existing doctrines to new factual contexts. Privilege depends on confidentiality; confidentiality depends on control over information. When information is shared with AI tools that function as third parties, courts may conclude that privilege has been waived or confidentiality compromised.
As AI becomes more common in litigation, we can expect Canadian courts to confront similar issues, particularly in areas like personal injury law, where privacy concerns are especially significant. At Bergeron Clifford Injury Lawyers, we continue to monitor developments in AI and the law to ensure our clients’ information remains protected and our litigation strategies remain effective in an evolving technological landscape.
