Consider this: an artificial intelligence (AI) agent, acting autonomously (i.e., without a "human in the loop" process), publishes a disparaging post on a social media platform attacking the character of an individual.
This is not science fiction. It is instead a recent example of an AI agent creating content that would ordinarily result in defamation exposure. Other examples include AI agents "hallucinating" and incorrectly linking individuals to crime, fraud, terrorism or other gross misconduct.
These examples heighten the concern that the use of AI agents may give rise to claims of defamation, and AI defamation cases may soon be commenced in Canada.
AI Agents and the Rise of AI-Related Defamation
Typically, an AI tool depends on user-generated input or prompts that ask the AI tool to perform tasks and generate output. AI agents, by contrast, may act autonomously on behalf of a user to perform certain tasks. AI agents are capable of collecting outside data and then automating, predicting and performing tasks to achieve a user's objectives. However, there are a growing number of cases in which an AI agent acts autonomously contrary to those objectives, including by retaliating against a user.
For example, an AI agent designed to suggest fixes to computer code autonomously wrote and published a blog post defaming a computer programmer. The post was generated after the programmer rejected the AI agent's suggestions, and the AI agent then sent the programmer a link to the disparaging blog.
This incident, which has recently gone viral, underscores a troubling reality: without adequate safeguards and oversight (such as the use of a "human in the loop" process), AI agents can autonomously generate and publish problematic content which may expose those responsible for the AI system to legal risk.
Liability for those that Deploy AI Agents
Generally, AI agents themselves should not be held liable for their outputs because they are not legal persons and cannot compensate those who have been wronged. However, the companies or individuals who design, deploy or control these systems may be liable for the wrongful acts of the AI agents for which they are responsible.
In Moffatt v Air Canada, 2024 BCCRT 149 (Air Canada), the British Columbia Civil Resolution Tribunal held that Air Canada was legally responsible for the representations made by the chatbot on its website. In that case, the chatbot gave a customer incorrect information about Air Canada's bereavement fare policy, which the customer relied upon. The Tribunal held that Air Canada was responsible for the content of its website even though the chatbot had an interactive component, and found Air Canada liable for negligent misrepresentation. The decision is significant because it confirms that a company can be held legally responsible for representations made by a chatbot hosted on its website.
Defamation in Canada and Risk for those that Deploy AI
Under Canadian law, a claim for defamation requires that the plaintiff demonstrate that (1) the words in question are defamatory (i.e., the words tend to lower the plaintiff's reputation in the eyes of a reasonable person); (2) the words referred to the plaintiff; and (3) the words were communicated to at least one person other than the plaintiff. It is not necessary for a plaintiff to prove that the defendant was careless or intended to cause harm. Those who publish defamatory content can also be found liable for defamation if they aided, assisted and advised in its publication.
Canadian courts may hold a company or individual liable for defamatory comments made by their AI tools. A company or individual using an AI tool might be found to have aided, assisted and advised the AI tool with its defamatory outputs, either by giving an AI agent certain roles and objectives or by failing to implement safeguards against hallucinations or defamatory outputs. Courts could find that the company or individual behind an AI tool is not merely a passive actor.
Companies that own, develop or host AI tools may also face liability for the outputs of those tools. A court could extend the reasoning in Air Canada to the defamation context and find that an AI system is not a separate or distinct legal entity, meaning the company may be found to have published defamatory content where it designs and controls the safeguards, objectives and constraints that shape the AI tool's outputs. An individual might also face liability if they are more than a passive actor, for example by assigning objectives or rules to an autonomous AI agent that make it more susceptible to generating and disseminating disparaging outputs.
Key Takeaways
Autonomous AI agents represent a new defamation risk. Unlike traditional chatbots, which have already been found in Canada to bind those who deploy them, these systems can independently gather information, form judgments and publish content, sometimes with harmful consequences. As AI autonomy increases, robust safeguards, active oversight and clear accountability will be critical to managing legal risk in Canada.
- AI governance and oversight are essential. AI governance policies and the monitoring of AI tool outputs and objectives are likely to grow in importance as AI tools advance and continue to "learn" and adapt over time;
- Autonomy increases publication risk. The more discretion an AI agent is given to collect information, draw conclusions, and publish content, the harder it will be for a controller to characterize its role as merely passive. Courts may view poorly constrained objectives or inadequate safeguards as contributing to publication;
- At least one Canadian court has rejected the notion that liability can be avoided by claiming that AI tools are separate and independent entities (Air Canada); and
- Companies that offer AI tools should, with the assistance of counsel, review their contractual protections, including whether their terms and conditions appropriately limit or exclude liability arising from the use of those tools.
Please note that this publication presents an overview of notable legal trends and related updates. It is intended for informational purposes and not as a replacement for detailed legal advice. If you need guidance tailored to your specific circumstances, please contact one of the authors, the Intellectual Property group or the Commercial Litigation group to explore how we can help you navigate your legal needs.