Artificial Intelligence—A Companion Document Offers a New Roadmap for Future AI Regulation in Canada

March 30, 2023

Written By Stephen Burns, Sebastien Gittens, Matthew Flynn, Ahmed Elmallah

Artificial intelligence (AI) regulation in Canada may be around the corner and could affect all types of organizations involved with AI systems in commercial contexts. A new companion document for the Artificial Intelligence and Data Act (AIDA) was recently released by Innovation, Science and Economic Development Canada (the Companion Document). This document is an important development, as it outlines a proposed roadmap for future AI regulation.

In our previous blog, Privacy Reforms Now Back Along with New AI Regulation, we provided a comprehensive summary of Canada's pending AI legislation: the Artificial Intelligence and Data Act. AIDA was introduced as part of Bill C-27, which is now at its second reading in the House of Commons.

AIDA is touted as "the first step towards a new regulatory system designed to guide AI innovation in a positive direction, and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses." Broadly, AIDA addresses the regulation of two types of adverse impacts associated with high-impact AI systems: (1) harm to individuals, including physical or psychological harm, damage to property, and economic loss; and (2) biased output that adversely differentiates, without justification, on one or more of the prohibited grounds of discrimination.

Roadmap for Proposed AI Regulation

If Bill C-27 receives Royal Assent, a consultation process for supplemental AI regulations will begin. The Companion Document is therefore intended to provide a framework for that future consultation.

As noted in the Companion Document, the government intends to take an agile approach to AI regulation by developing and evaluating regulations and guidelines in close collaboration with stakeholders on a regular cycle, and by adapting enforcement to the needs of the changing environment. Implementation of the initial set of AIDA regulations is expected to take the following path:

  1. consultation on the regulations (approximately six months following Royal Assent);
  2. development and publication of draft regulations (approximately 12 months);
  3. consultation on the draft regulations (approximately three months); and
  4. coming into force of the initial set of regulations (no earlier than 24 months following Royal Assent).

Accordingly, it is envisioned that there would be a period of at least two years after Bill C-27 receives Royal Assent before the new law comes into force. This means that the provisions of AIDA would come into force no sooner than 2025.

What Are "High-Impact AI Systems"?

AIDA will apply to "high-impact AI systems". However, AIDA itself does not clearly define what a "high-impact AI system" is; the term will instead need to be defined through future AI regulations.

The Companion Document proposes key factors that can be used to evaluate whether a system is "high-impact" and, therefore, regulated by AIDA. These include examining, for a given AI system, the:

  1. risk of harm to health and safety, or a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences;
  2. severity of potential harms;
  3. scale of use;
  4. nature of harms or adverse impacts that have already taken place;
  5. extent to which, for practical or legal reasons, it is not reasonably possible to opt out of that system;
  6. imbalances of economic or social circumstances, or age of impacted persons; and
  7. the degree to which the risks are adequately regulated under another law.

Systems that raise concerns across these factors may therefore be regulated by AIDA as "high-impact systems".

Example High-Impact AI Systems

Additionally, the Companion Document provides the following example types of high-impact AI systems that are of interest for regulation under AIDA (e.g., in terms of potential harmful and/or biased impact):

  1. screening systems impacting access to services (e.g., access to credit or employment);
  2. biometric systems used for identification and inference;
  3. systems that can influence human behaviour at scale; and
  4. systems critical to health and safety.

The Companion Document outlines the potential for each of these systems to produce harmful and/or discriminatory outputs.

Obligations on Organizations Involved with High-Impact AI Systems

The Companion Document contemplates that, under the proposed AI regulations, organizations involved with high-impact AI systems will likely be guided by the following example principles and obligations. Such organizations will be expected to institute appropriate accountability mechanisms to ensure compliance with their obligations.

The proposed regulatory requirements, and example obligations on organizations, are:

  • Human Oversight & Monitoring: identify and address risks with regard to harm and bias, document appropriate use and limitations, and adjust measures as needed.
  • Transparency: provide the public with appropriate information about how high-impact AI systems are being used.
  • Fairness and Equity: build high-impact AI systems with an awareness of the potential for discriminatory outcomes.
  • Safety: proactively assess high-impact AI systems to identify harms that could result from their use, including through reasonably foreseeable misuse.
  • Accountability: put in place the governance mechanisms needed to ensure compliance with all legal obligations applicable to high-impact AI systems in the context in which they will be used.
  • Validity & Robustness: ensure that the high-impact AI system performs consistently with its intended objectives.

Different Obligations for Different Activities

The Companion Document envisions that an organization's obligations under future AI regulations will likely be proportionate to how it is involved with a high-impact AI system. To offer further clarity, the Companion Document provides the following examples of proposed obligations for organizations involved in different activities associated with high-impact AI systems. Organizations may be involved in one or more of the listed regulated activities.

Designing the system (e.g., determining AI system objectives and data needs, and determining methodologies or models based on those objectives)

Example obligations: identify and address risks with regard to harm and bias, document appropriate use and limitations, and adjust measures as needed.

Examples of measures to assess and mitigate risk:
  • Performing an initial assessment of potential risks associated with the use of an AI system in the context and deciding whether the use of AI is appropriate.
  • Assessing and addressing potential biases introduced by dataset selection.
  • Assessing the level of interpretability needed and making design decisions accordingly.

Developing the system (e.g., processing datasets, training the system using those datasets, modifying parameters of the system, developing or modifying the methodologies or models used in the system, or testing the system)

Example obligations: identify and address risks with regard to harm and bias, document appropriate use and limitations, and adjust measures as needed.

Examples of measures to assess and mitigate risk:
  • Documenting the datasets and models used.
  • Performing evaluation and validation, including retraining as needed.
  • Building in mechanisms for human oversight and monitoring.
  • Documenting appropriate use(s) and limitations.

Making the system available for use (e.g., deploying a fully functional system, whether by the person who developed it, through a commercial transaction, through an application programming interface (API), or by making the working system publicly available)

Example obligations: consider potential uses when the system is deployed, and take measures to ensure users are aware of any restrictions on how the system is meant to be used and understand its limitations.

Examples of measures to assess and mitigate risk:
  • Keeping documentation regarding how the requirements for design and development have been met.
  • Providing appropriate documentation to users regarding the datasets used, limitations, and appropriate uses.
  • Performing a risk assessment regarding the way the system has been made available.

Managing the operations of the AI system (e.g., supervising the system while it is in use, including beginning or ceasing its operation, monitoring and controlling access to its output while it is in operation, and altering parameters pertaining to its operation in context)

Example obligations: use the AI system as indicated, assess and mitigate risk, and ensure ongoing monitoring of the system (an illustrative sketch of such logging and oversight measures follows this list).

Examples of measures to assess and mitigate risk:
  • Logging and monitoring the output of the system as appropriate in the context.
  • Ensuring adequate monitoring and human oversight.
  • Intervening as needed based on operational parameters.
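To make the operational measures above more concrete, the following is a minimal, purely illustrative sketch of output logging with a human-oversight hook. The Companion Document does not prescribe any particular implementation; every name and threshold below (e.g., HUMAN_REVIEW_THRESHOLD, handle_output) is a hypothetical assumption, and what is "appropriate in the context" would vary by system.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_system_operations")

# Hypothetical confidence threshold below which a human must review
# the output before it is acted upon; an appropriate value would
# depend entirely on the system and its context of use.
HUMAN_REVIEW_THRESHOLD = 0.80

def handle_output(prediction: str, confidence: float) -> str:
    """Log each system output and route low-confidence results to human review."""
    timestamp = datetime.now(timezone.utc).isoformat()
    logger.info("output=%r confidence=%.2f time=%s", prediction, confidence, timestamp)

    if confidence < HUMAN_REVIEW_THRESHOLD:
        # Intervene as needed: hold the output for a human operator
        # rather than letting the system act autonomously.
        logger.warning("Low confidence; output flagged for human review.")
        return "pending_human_review"
    return prediction
```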

Oversight and Enforcement

Finally, the Companion Document suggests that, in the initial years after AIDA comes into force, the focus will be on education, establishing guidelines, and helping organizations come into compliance through voluntary means.

Thereafter, the focus is expected to shift to enforcement mechanisms to address non-compliance. These are envisioned to include two types of penalties for regulatory offences (administrative monetary penalties and fines on prosecution), as well as various types of true criminal offences.

The substance of a number of these enforcement mechanisms will need to be further clarified by the supplemental AI regulations. As currently drafted, however, a contravention of AIDA may result in significant consequences. Depending on the circumstances, an organization may be liable to a fine of not more than the greater of $25,000,000 and 5 percent of its gross global revenues.
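For a sense of how that "greater of" cap scales, here is a minimal sketch; the revenue figures and the maximum_fine helper are hypothetical, and the actual fine in any case would depend on the circumstances and the final regulations.

```python
# Illustrative only: AIDA's maximum fine for an organization, as currently
# drafted, is the greater of $25,000,000 and 5% of gross global revenues.

def maximum_fine(gross_global_revenues: float) -> float:
    """Return the statutory cap on the fine, in dollars (hypothetical helper)."""
    return max(25_000_000, 0.05 * gross_global_revenues)

# At $200M in gross global revenues, 5% is $10M, so the $25M floor governs.
print(maximum_fine(200_000_000))    # 25000000.0

# At $2B in gross global revenues, 5% is $100M, so the percentage governs.
print(maximum_fine(2_000_000_000))  # 100000000.0
```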

Next Steps

The Companion Document notes that, following Royal Assent of Bill C-27, the government intends to conduct a broad and inclusive consultation of industry, academia, civil society, and Canadian communities to inform the implementation of AIDA and its regulations.

If AIDA is enacted as currently drafted, we anticipate that it will have a substantial impact on the extent of regulatory scrutiny of organizations with respect to their use of artificial intelligence. As a result, organizations should undertake a comprehensive review of how they conduct business and manage AI systems.

The Bennett Jones Privacy & Data Protection group is available to discuss how the changes may affect an organization's privacy obligations.
