Artificial Intelligence and the Practice of Law: 
Real‑World Implications for the Profession and the Justice System

Benjamin Reingold, Makda Yohannes and Stephen Burns
March 25, 2026

Artificial intelligence tools are increasingly being used by members of the public to understand their rights and to bring and defend lawsuits. Courts and regulators are thus confronting a difficult question: at what point does the public's use of AI result in the unauthorized practice of law, and what are the real‑world consequences when it does? Recent litigation in the United States highlights these risks and offers a potential path forward, both of which are relevant to the Canadian legal landscape.

A Cautionary Tale: AI's Alleged Unauthorized Legal Practice

The risks of unregulated legal "assistance" were brought into sharp focus by a recent lawsuit filed in the US District Court for the Northern District of Illinois alleging that OpenAI engaged in the unauthorized practice of law. In that case, a US insurer alleges that a self‑represented litigant relied extensively on AI‑generated legal advice and drafting assistance to challenge a settled claim and inundate the court with meritless filings. According to the complaint, the litigant used AI‑generated arguments and documents to file dozens of motions that "serve no legitimate legal or procedural purpose", significantly increasing costs and burdening the judicial process.

While the lawsuit remains unresolved and arises under US law, its factual allegations illustrate how AI tools can amplify litigation misconduct.

The Risks of Practicing Law Using AI

There has yet to be a test case in Canada regarding allegations of unauthorized legal practice against an AI company. However, the use of generative AI to provide legal advice without appropriate professional oversight raises several interrelated risks.

First, AI misuse increases the strain on an already overburdened justice system. Courts face limited resources and growing caseloads. AI‑assisted filings that are poorly grounded, procedurally defective, or legally incoherent can dramatically increase judicial workload. As illustrated by the Illinois case described above, unrepresented litigants who use AI may be able to generate high volumes of motions and pleadings with minimal effort, shifting the cost of separating meritorious claims from meritless ones onto courts and opposing parties.

Second, generative AI systems are known to produce confident‑sounding but incorrect outputs, known colloquially as "hallucinations", including fabricated case law and misstatements of legal principles. Hallucinated case law has already surfaced in many Canadian jurisdictions, including in filings by experienced lawyers. AI‑driven "legal advice" of this kind threatens the integrity of the legal process.

Third, the unlicensed practice of law can result in significant penalties, including fines and findings of contempt, which can carry imprisonment.

The Practical Upsides

While the Illinois case illustrates the potential costs of unregulated AI-driven legal assistance, AI that is deployed responsibly and with proper oversight can improve efficiency in routine tasks. For example, AI tools can assist with document organization, drafting, summarizing correspondence and evidence, and preparing chronologies.

Used appropriately, these capabilities can shorten timelines and reduce cost, which may in turn support greater access to justice in a court system currently strained by resource constraints. To realize these benefits, however, AI must be integrated with appropriate oversight, such as clear user guidance, confidentiality and data-security controls, and human review for accuracy and professional judgment. The value of AI in legal work is highest when it complements (rather than replaces) the role of counsel and the court.

A US Model for Managing AI Risks in Legal Practice

At least one US state is beginning to respond to the unauthorized practice of law (UPL) concerns raised by AI-enabled legal tools by creating clearer pathways for innovation. Notably, the Colorado Office of Attorney Regulation Counsel has adopted a first-of-its-kind "non-prosecution policy" that, as a matter of enforcement discretion, generally deprioritizes UPL actions against developers of certain legal assistance technologies. The policy is intended to encourage responsible development of AI tools that may expand access to justice, particularly for self-represented litigants, while maintaining consumer safeguards. Among other guardrails, the policy contemplates lawyer oversight and requires disclosures that the developer and tool are not providing legal advice and are not a substitute for a licensed lawyer. Colorado has framed the approach as a time-limited pilot, with evaluation built in to assess its impacts and inform future regulatory choices. Colorado's policy approach provides a model of how legal regulators can address UPL risks while accommodating and encouraging AI innovation.

Conclusion: Proceeding with Care

AI has the potential to enhance efficiency and access to justice when used responsibly within a regulated framework. However, recent developments underscore that AI is not a substitute for licensure, professional judgment or ethical accountability.

For legal professionals and clients alike, the lesson is clear: AI may be a powerful tool, but when it crosses the line into unregulated legal advice, the consequences can extend beyond the individual user—affecting courts, opposing parties and public confidence in the justice system.

Republishing Requests

For permission to republish this or any other publication, contact Bryan Canning at canningb@bennettjones.com.

For informational purposes only

This publication provides an overview of legal trends and updates for informational purposes only. For personalized legal advice, please contact the authors.
