Blog

OSB Issues New Guidance: What Licensed Insolvency Trustees Must Know About AI Tools

Benjamin Reingold, Jesse Mighton, Nyrie Israelian, Shawn Kirkman and Stephen Burns
October 28, 2025

In recognition of the widespread adoption and integration of generative artificial intelligence (AI) within organizations, Canadian regulators, public bodies and courts have released, and continue to release, guidance to mitigate the potential risks of using AI tools. On October 14, 2025, the Office of the Superintendent of Bankruptcy (OSB) published guidance regarding the use of AI tools by Licensed Insolvency Trustees (LITs) and their staff, in an effort to protect the integrity of the Canadian insolvency system (the Guidance).

This blog provides a timely update to our recent blogs, AI in Canada: The Latest from Regulators, Courts and Public Bodies and Requirements and Guidelines From Canadian Regulators, Public Bodies and Courts for the Use of Artificial Intelligence regarding Canadian regulatory guidance on AI usage by individuals and organizations.

Summary of OSB guidance on the use of AI tools

The OSB is responsible for the administration of the Bankruptcy and Insolvency Act and has certain duties under the Companies' Creditors Arrangement Act. The OSB also licenses and regulates the insolvency profession. As part of these obligations, the Guidance serves as a pointed reminder to LITs and their staff to apply diligence and the highest standards of professionalism when using AI in all aspects of their work.

The OSB offers five key points for LITs to consider, emphasizing the importance of ensuring that proper guidance, training and protocols are in place to protect against some of the known pitfalls of AI tools. These five points are:

  1. Maintaining human oversight;
  2. Vigilance;
  3. Accuracy;
  4. Cybersecurity and privacy; and
  5. Transparency.

LITs and their staff are encouraged to consult any additional guidance specific to their jurisdiction and to pursue ongoing professional development in the ethical and secure application of AI tools as technology continues to evolve.

Takeaways for LITs

LITs should be mindful that their professional obligations extend to the use of AI by their staff, advisors and consultants. As trusted advisors and often court officers, LITs are central to Canada's insolvency system and assist both commercial and consumer debtors and their stakeholders with their most sensitive, complex and consequential matters, including, among other things:

  • drafting court documents, including reports to the Court;
  • preparing and filing other insolvency-related documents with the OSB;
  • verifying the claims of a debtor's creditors and interfacing with a wide range of stakeholders; and
  • facilitating the sale of a debtor's assets and administering distributions to creditors.

Although AI may create increased efficiency in these pursuits, this technology does not replace an LIT's responsibility to properly carry out its mandate in accordance with its professional obligations. Principles of integrity, transparency and professional responsibility still prevail when generative AI tools are used.

The Guidance serves to supplement the professional obligations and responsibilities of LITs both as officers of the Court and as regulated service providers. Such obligations and responsibilities include being accountable for the completeness, accuracy and protection of privacy in all materials they produce. These obligations extend to, and may be increasingly important in, the production of documents using AI.

As LITs explore and increase their use of AI, they should be mindful of how the evolving regulatory regime applies to their professional obligations, and consult the Guidance as well as relevant regulations and guidance published by provincial courts, law societies and registrars.

Takeaways

Across Canada, instances of counsel and self-represented litigants being sanctioned for failing to disclose AI use and for AI-generated "hallucinations" of authorities continue to arise, despite the issuance of guidance and warnings from various authorities. In light of the OSB's newly published guidance, which specifically notes that "AI is not a replacement for human judgment and oversight", LITs too must exercise heightened diligence in their use of AI tools, as failure to adhere to these standards may result in strict consequences.

Organizations should remain vigilant and mitigate the risks of relying on AI by: (i) ensuring all output is reviewed for accuracy; (ii) when appropriate, labelling work product that was produced with the input of generative AI; and (iii) implementing robust governance and risk management relating to the use of AI, including with respect to protecting private, confidential and/or privileged information.

If you have any questions about how your organization may use and implement AI, we invite you to contact one of the authors of this article.

Please note that this publication presents an overview of notable legal trends and related updates. It is intended for informational purposes and not as a replacement for detailed legal advice. If you need guidance tailored to your specific circumstances, please contact one of the authors to explore how we can help you navigate your legal needs.

Republishing Requests

For permission to republish this or any other publication, contact Amrita Kochhar at kochhara@bennettjones.com.
