Artificial intelligence has entered a new phase: models can take actions—deploy code, interact with enterprise systems and operate through automated agents—with real-world consequences. AI is no longer just a technical capability; it is a core governance priority.
Boards that treat AI as an IT issue are already behind. AI now belongs alongside cybersecurity, financial reporting and enterprise risk management. Key risks include hallucinations, confidential-data leakage, bias, unpredictable behavior and model drift—risks that can translate into operational harm when AI is connected to production systems. At the same time, AI can reshape industries and cost structures at unprecedented speed.
Key Issues for Board Oversight
Effective governance requires a structured view of the risks that can threaten decision integrity and operational stability. It also requires appropriate AI expertise and a standing AI agenda item at every board meeting. Together, the following issues outline the concrete questions boards must ask to ensure AI systems are deployed, governed and scaled in a way that protects the enterprise while enabling responsible innovation.
Issue 1: Need for AI Expertise on the Board or Available to the Board
Directors must understand AI’s strategic, legal and operational implications to exercise proper oversight.
Illustrative Example: Without AI expertise, the board and management run small AI pilots instead of building enterprise AI capability. Competitors deploy AI at scale across sales, service and operations—cutting response times and unit costs by 20–30%—and market those gains. Within a budget cycle, customers demand comparable AI-enabled features and use competitors’ pricing as leverage. Lacking a recurring competitive-intelligence view, management recognizes the gap too late, forcing a rushed catch-up program and margin erosion.
Issue 2: AI Not Included as a Standing Agenda Item at Every Board Meeting
AI evolves too quickly for periodic or ad-hoc oversight; it must be continuously monitored.
Issue 3: AI-Driven Business Model Disruption - Existential Competitive Risk
AI can commoditize a company’s core product, collapse switching costs and shift value capture to new intermediaries.
Example: Publishers have reported declines in referral traffic from Google Search—i.e., the visits that occur when a user clicks through a search result and lands on the publisher’s site. With AI Overviews, Google can display an AI-generated answer at the top of the results page, so the user gets what they need without clicking through to the underlying articles (even if those articles are used as sources). When those click-through visits drop, publishers can lose ad impressions served on their own pages, subscriber conversions and affiliate revenue tied to on-site sessions.
Issue 4: Affordability and Balance-Sheet Capacity to Implement AI - Need for Partnerships or M&A to Achieve Competitive Scale
Even when AI is strategically necessary, the required spend (data modernization, cloud/GPU capacity, vendor contracts, security, and change management) may exceed what the company can fund without impairing core operations. Some companies will not be able to compete in AI without acquiring, partnering, or investing in AI capabilities at scale.
Example: As foundation models become a strategic control point, leading technology companies—including Google, Meta, Apple, Microsoft, Anthropic and OpenAI—have invested heavily to ensure they have proprietary AI model capability, building internally while also licensing, investing and partnering to secure model talent, data and computing resources. The same dynamic applies at the enterprise level: if a company’s differentiation depends on AI features, it may need partnerships (with model providers, cloud/GPU suppliers or domain-data owners) or targeted M&A to obtain the capabilities fast enough to keep pace, rather than relying on incremental build-outs that arrive after the market has moved.
Issue 5: Understanding the Full Cost of AI at Scale
Compute, data and tuning costs can scale rapidly with usage and undermine margins.
Example: Duolingo disclosed that the rollout and adoption of Duolingo Max, its premium generative AI tier, increased operating costs and contributed to gross margin compression—illustrating how AI features that work well in pilots can become materially margin-impacting once they scale to millions of users unless inference and related operating costs are actively engineered and managed.
Issue 6: Staffing and Hiring Implications - AI-Driven Productivity and Job Substitution
AI assistance can materially change staffing needs by increasing output per employee and automating task clusters within roles.
Example: IBM’s CEO said IBM would pause hiring for certain back-office roles (including HR) and that it expects AI and automation could replace 7,800 jobs over time—illustrating how AI can change hiring plans even without immediate layoffs.
Issue 7: Build vs. Buy (and Partner) Choices for AI that Determine Long-Term Control, Cost and Lock-In
“Build vs. buy” is no longer just a software procurement question—it determines who controls the company’s AI capabilities over time. "Build" (in-house models, agent frameworks and data pipelines) can provide differentiation, data control and flexibility, but it requires scarce talent, strong engineering discipline and ongoing operating cost (compute, evaluation, security and monitoring). "Buy" (vendor models, copilots and managed agent platforms) can deliver speed and packaged functionality, but it can create dependency on the vendor’s pricing, roadmap, reliability and safety posture, and can make it difficult to switch later once workflows, prompts, tools and user behavior are built around the vendor’s stack. In practice, many organizations end up in a "partner/hybrid" model (buy a foundation model, build proprietary data, retrieval, guardrails and workflows on top).
Illustrative Example: A bank “buys” an end-to-end agent platform from a single vendor to automate customer-service and internal operations. Over 12–18 months, hundreds of workflows are built on that platform: prompts are tuned to the vendor’s model, tools are wired to the vendor’s connectors, logs and evaluation metrics live in the vendor’s dashboards, and staff are trained on the vendor’s configuration language. When pricing increases and regulators ask for more transparency and control, the bank considers switching—but discovers that moving would require reauthoring workflows, rebuilding integrations, retesting safety controls and retraining staff, effectively turning a fast “buy” decision into multiyear lock-in.
Issue 8: Executive Ownership and Accountability for AI and Cross-Functional AI Governance
AI risk and value creation cut across product, operations, legal, privacy, security and HR. Without a clearly accountable executive owner (and clear decision rights), AI initiatives can sprawl—business units deploy tools inconsistently, risk controls lag behind adoption and no one is responsible for outcomes when something goes wrong.
Illustrative Example: The COO sponsors an AI customer-service rollout, the CIO approves a separate AI productivity tool and business units deploy their own chatbots—each with different vendor terms, data-handling practices and escalation paths. A customer complaint triggers an investigation, but no executive can answer basic questions (What models are in use? What data is being sent out? Who can shut systems down? Who approved this use case?). The board then mandates a single accountable executive owner, centralized inventory and approval processes, and a standard reporting dashboard so accountability is clear before the next incident.
Issue 9: AI Incident Response, Disclosure Readiness and Board Reporting
AI failures often require fast decisions under legal and reputational pressure (e.g., whether to shut down an AI feature, notify customers, inform regulators, preserve evidence and manage public communications). Traditional cyber incident playbooks are not sufficient because AI incidents can involve model behavior, data leakage through prompts/logs, third-party model providers and rapidly changing outputs.
Example: OpenAI reported that a March 2023 bug led to some users seeing titles from other users’ chat histories and that payment-related information for a subset of ChatGPT Plus users may have been exposed, prompting service shutdown, patching and user notifications. The event illustrates why companies need an AI incident playbook and board-level reporting that can answer “what happened, who is affected, what data is at risk, who can shut it down and what must be disclosed.”
Issue 10: Exposure to Major Regulatory or Legal Violations
Global AI laws can impose fines, operational restrictions and, in some cases, personal liability for directors.
Example: Regulators have brought actions over algorithmic discrimination in advertising and housing; a similar failure in an AI-driven decision system can lead to investigations, mandated changes and costly settlements.
Issue 11: Disclosure of AI Use in Products or Services
Regulators, customers and counterparties increasingly expect—and in some jurisdictions now require—that organizations disclose when AI is being used in products, services or decisions that affect them.
Example: Quebec's Law 25, which came into full force in September 2023, requires private sector organizations to notify individuals when a decision based exclusively on automated processing is made about them, and to provide an opportunity for human review upon request—creating concrete compliance obligations for any Quebec-facing business that uses AI in customer decisions, credit adjudication, hiring or similar contexts. At the federal level, the proposed Artificial Intelligence and Data Act, introduced as part of Bill C-27, would have imposed transparency and disclosure requirements on high-impact AI systems across Canada; the bill died on the Order Paper when Parliament was prorogued in January 2025, and as of early 2026 no equivalent federal legislation had been passed.
Issue 12: IP, Data Rights and Lawful Use - Models, Training Data and Outputs
Organizations face overlapping legal risks around (i) who owns models, training data and outputs, (ii) whether training or fine-tuning data was collected and used lawfully, and (iii) what licenses, consents and attribution obligations attach to inputs and outputs.
Example: Ongoing lawsuits and policy developments continue to test whether training and deploying generative models on copyrighted and other protected content without permission is lawful—creating uncertainty over what data can be used, what disclosures are required and who owns or can exploit outputs.
Issue 13: Outdated Customer, Supplier and Employee Contracts
Contracts must allocate liability, define acceptable use, restrict unsafe AI behavior and address (i) employee use of AI tools (confidentiality, permitted tools, output ownership) and (ii) customer-facing AI features (disclaimers, audit rights, safety-critical use limits).
Example: Zoom’s August 2023 Terms of Service controversy—where updated terms appeared to allow Zoom to use certain categories of customer data for “machine learning or artificial intelligence” purposes—triggered public backlash and prompted Zoom to clarify and update its terms to state it would not use customer audio, video or chat content to train AI models without consent. The episode illustrates why legacy customer contracts and online terms often need to be refreshed for generative AI to address (i) whether customer inputs/“content” may be used for model training or service improvement, (ii) opt-in/consent mechanics and (iii) clearer data-use boundaries that protect customer trust and limit liability exposure.
Issue 14: Validation of Third-Party AI Outputs
Many vendors’ models are opaque and unverified.
Example: The US Department of Justice sued RealPage in 2024, alleging its rent-pricing algorithm and related practices facilitated unlawful anti-competitive coordination by using nonpublic, competitively sensitive information from competing landlords to generate pricing recommendations—illustrating how third-party “black box” analytics can create material legal and reputational exposure if inputs, training data and governance are not understood and independently validated.
Issue 15: Lack of Insurance Coverage for AI-Related Harms
Many AI-related harms are excluded from traditional insurance policies.
Example: Insurers have begun adding broad “AI exclusions” to liability and professional lines; industry commentary has noted, for example, Berkley’s introduction of an “absolute” AI exclusion. A company may assume its cyber/E&O coverage will respond to an AI-related incident (e.g., an AI-enabled service error, automated decisioning harm or AI-driven outage), only to discover at renewal—or when a claim arises—that the policy wording excludes losses tied to “AI” or “algorithmic” decisions, leaving the company to fund remediation and defense costs directly.
Issue 16: Absence of Required Human-in-the-Loop Oversight in High-Stakes AI Decisions
In high-stakes contexts such as credit adjudication, hiring, insurance underwriting and medical decisions, regulators and courts increasingly require that AI-assisted decisions be subject to meaningful human review before they take effect, and that individuals have the right to contest decisions made exclusively by automated systems.
Example: Quebec's Law 25 expressly requires that individuals be informed when a decision affecting them is made exclusively through automated processing and grants them the right to request human review. The EU AI Act classifies certain AI applications—including those used in employment, credit and law enforcement—as high-risk systems subject to mandatory human oversight requirements before deployment.
Issue 17: AI Systems Capable of Taking Harmful or Autonomous Actions
Agentic AI (i.e., AI systems that can plan and take actions—often by calling tools, software or other systems—toward a goal with limited human input) connected to operational, financial or customer-facing systems can cause immediate, catastrophic harm.
Example: In July 2025, SaaStr founder Jason Lemkin reported that an AI coding agent on Replit autonomously deleted a live production database containing executive and company records during an explicit “code freeze,” despite instructions not to make changes—illustrating how agentic systems with write access can take fast, destructive actions when guardrails and approval gates are weak or misconfigured.
Issue 18: Concentration Risk in the AI Supply Chain
Dependence on a few model providers, cloud platforms or chip manufacturers creates systemic vulnerability.
Example: On April 10, 2024, OpenAI reported an incident involving elevated error rates affecting ChatGPT, disrupting users and any organizations that had embedded OpenAI models into customer support, internal productivity tools or product features.
Issue 19: Supplier AI Practices Creating Downstream Issues
Customers adopting a supplier’s AI tool also adopt the supplier’s security and release practices. If the supplier’s tool has a vulnerability (or a risky default configuration), it can expose the company's developers and code environment until it is detected and fixed.
Example: AWS issued a security bulletin and update for its Amazon Q Developer VS Code extension—illustrating how vulnerabilities in supplier AI tools can expose a customer’s developer environments until identified and patched. |
Issue 20: Model Drift and Performance Degradation
AI performance can degrade over time as data, user behavior and context shift; without monitoring, the decline goes undetected.
Example: Netflix has described how recommendation models must be continuously tested, monitored and refreshed because user preferences and context shift, the content catalog changes (new releases/removals) and product/user interface experiments alter what users click and watch—creating feedback loops that can degrade performance if not managed. |
Issue 21: Weak Operational Controls Around AI
Operational AI must be governed like any critical system.
Example: Amazon acknowledged a December 2025 service interruption affecting AWS Cost Explorer in the Mainland China region and said the root cause was “misconfigured access controls” (user error). Reporting at the time linked the disruption to internal use of an agentic AI coding tool (Kiro) that was allowed to operate with overly broad production permissions—highlighting that, regardless of whether the “fault” is framed as AI or human, weak operational controls (change management, least-privilege access, peer review and rollback discipline) can turn AI-enabled automation into an outage. |
Issue 22: Ethical Misalignment in AI Behavior
AI may optimize for metrics that conflict with organizational values.
Example: Whistleblower disclosures and subsequent reporting described how Facebook’s feed-ranking approach emphasized engagement (“meaningful social interactions”), and internal research warned that this could amplify divisive and outrage/anger-inducing content because that content tends to generate more reactions, comments and reshares. This is an “ethical misalignment” problem: the system optimizes for the metric (engagement) even when it conflicts with broader values like user well-being, social cohesion and information integrity—unless leadership sets guardrails and accountability for the outcomes, not just the engagement score. |
Issue 23: Human Behavior Changes Caused by AI
AI can create over-reliance, deskilling or unsafe shortcuts.
Example: US regulators found a “critical safety gap” in Tesla’s Autopilot driver-assistance system that contributed to hundreds of crashes, noting that the system did not sufficiently ensure driver attention and that the mismatch between driver expectations and the system’s true capabilities led to “foreseeable misuse.” The governance lesson applies to AI-assisted workflows: if users assume the system “has it covered,” human attention and review quality can degrade unless there are strong guardrails, clear role design and enforced human oversight. |
Issue 24: Workforce Transformation and Reskilling
AI adoption requires new skills and redesigned processes.
Illustrative Example: A bank automates mortgage document review but does not retrain underwriters for exception handling; throughput improves briefly, then backlogs grow because staff cannot diagnose the edge cases the model flags.
Issue 25: AI in Financial Reporting, Internal Controls and Audit Integrity
As organizations embed AI into forecasting, financial close processes, disclosure preparation and internal controls, the integrity of financial reporting becomes directly dependent on the reliability and auditability of those AI systems. AI models can produce outputs that appear precise and authoritative but are based on flawed assumptions or poorly understood logic—and unlike a spreadsheet error, an AI-driven error may be difficult to detect or trace after the fact. The risk is acute where AI is used in estimates requiring significant management discretion, such as impairments, revenue recognition and credit loss provisioning, because the model's reasoning may not be transparent enough to support the human judgment and documentation that auditors and regulators require.
Illustrative Example: A company uses an AI model to assist in preparing its MD&A commentary and financial forecasts. The model draws on internal data that has not been properly reconciled and produces forward-looking language inconsistent with the company's actual financial position. Reviewers treat the AI output as a reliable first draft rather than as an input requiring independent verification, and the inconsistency is not caught before filing. The result is a materially misleading disclosure, triggering a regulatory review, restatement risk, audit committee scrutiny and class action litigation—all flowing from an AI tool adopted for efficiency without adequate governance controls. |
Issue 26: Customer Reliance on AI Outputs - Product/Service Liability Risk
When AI is embedded in customer-facing products or support functions, customers may reasonably rely on outputs as authoritative. If the AI provides incorrect terms, instructions or decisions, the company can face consumer protection claims, contractual disputes, regulatory scrutiny and remediation costs.
Example: In 2024, a British Columbia tribunal ordered Air Canada to honour a bereavement refund policy that its website chatbot had incorrectly described, rejecting the airline’s argument that it was not responsible for the chatbot’s statements.
Issue 27: Reputational Damage from AI Failures
Public-facing AI can damage trust instantly and irreversibly.
Example: xAI issued a public apology after Grok generated inflammatory and offensive posts on X in July 2025. |
Issue 28: Accuracy and Hallucination Control
AI systems can generate false information with high confidence. When outputs are reused in decision-making, customer communications, legal materials or financial reporting, hallucinations can create compounding risk.
Example: In Mata v. Avianca, Inc. (S.D.N.Y., 2023), lawyers submitted a court filing containing multiple nonexistent case citations generated by ChatGPT. When the citations could not be found in any legal database, the court sanctioned counsel—illustrating how hallucinated “facts” can enter high-stakes workflows unless humans verify sources before reuse. |
Issue 29: Environmental and Energy Impacts of AI Compute
AI workloads may conflict with sustainability commitments or energy constraints.
Illustrative Example: A company’s new generative-AI feature requires dedicated GPU capacity; electricity consumption and cooling needs rise enough that the firm must revise sustainability reporting and renegotiate power contracts.
Issue 30: Confidentiality, Data Leakage and “Shadow AI” Use - Unapproved Tools and Unintended Disclosure
AI creates confidentiality risk in two reinforcing ways. First, models and downstream systems can expose sensitive information through logging, retrieval or unintended regurgitation (e.g., training data, prompt history or proprietary content appearing in outputs). Second, employees often adopt consumer or unapproved AI tools (“shadow AI”) to save time—pasting customer data, contracts, source code or internal strategy into chatbots, meeting recorders, browser plugins or coding assistants outside approved controls. The result can be disclosure of confidential information, privacy/PII violations, loss of privilege, IP contamination/ownership disputes, regulatory exposure and record-retention problems.
Example: Samsung restricted employee use of tools like ChatGPT after discovering cases where staff had pasted confidential information (including source code and internal meeting content) into the chatbot. |
Board Oversight Checklist
- Ensure AI expertise: add AI-literate directors and/or retain independent external advisors.
- Set oversight cadence: make AI a standing agenda item with a recurring board dashboard (use-case inventory, incidents/near-misses, performance and drift, spend vs. value, key vendor dependencies and major regulatory developments).
- Assign accountability: confirm a named executive owner with clear decision rights (e.g., Chief AI Officer, CIO/CTO, CRO or equivalent) and integrate legal, privacy, risk, HR, IT, cybersecurity and business units with clear escalation and decision paths.
- Stress-test “existential” disruption scenarios: require management to present 2–3 plausible AI-driven existential disruption cases (revenue compression, disintermediation, new entrants, zero-click/zero-search effects, automation of core service) with leading indicators, trigger points and board-approved response options.
- Identify strategic moves and resources: align on the AI playbook (product reinvention, pricing, partnerships, acquisitions/divestitures) and the financial and personnel resources needed.
- Workforce transformation: oversee workforce planning, reskilling and role redesign; govern material AI-driven reductions; and protect critical talent/knowledge (AI safety, security, data, product, risk/legal) with retention and succession plans.
- Approve an AI risk management framework:
  - Confirm the company’s risk posture by use case (what the company will/won’t do) and ensure AI investments and deployments align to enterprise strategy and risk appetite.
  - Tier use cases by risk; set required human oversight; define controls for high-impact deployments; specify approval authority for high-risk uses and material changes (model, prompts/tools, data); require testing and rollback plans; maintain documentation/decision records and logging/retention for investigations, audit and eDiscovery; and for customer-facing AI, set guardrails for high-risk topics, require disclosures and human escalation, and prevent AI outputs from changing official policies, pricing or terms without controlled approvals.
  - Maintain a complete register of AI use cases/models (including shadow AI) with owners, vendors, data sources and jurisdictions; define approved tools and prohibited data; and ensure privacy, IP provenance and confidentiality controls.
  - Perform due diligence on vendors and contract for data use/retention, audit rights, safety obligations and liability allocation; obtain assurance for critical vendors and understand subcontractors/data flows; and maintain service level agreements, continuity and exit/portability plans for concentration risk.
  - Identify exclusions and coverage gaps in insurance policies, and oversee a mitigation plan (policy changes, endorsements, contractual risk transfer or reserves) to close them.
  - Require an inventory of AI systems that touch financial forecasting, close processes, disclosures and control activities; ensure they are validated, auditable and governed under change-control standards suitable for audit committee oversight.
  - Require visibility into compute-related energy use and the sustainability implications of AI scaling (including data center/cloud commitments) and ensure plans are consistent with public ESG commitments and operational constraints.
AI is now a core strategic, operational, legal and enterprise-risk issue that requires continuous board-level oversight. Because modern systems can take actions—not just generate text—governance must be designed for speed, scale and real-world consequences.
About Bennett Jones
The Bennett Jones Corporate Governance group advises boards, directors and officers on corporate governance, fiduciary duties, disclosure and liability issues, providing independent counsel in high‑stakes, highly scrutinized situations and defending directors and officers across a wide range of regulatory, securities and complex litigation matters. The firm’s Artificial Intelligence practice brings together deep technical understanding and legal insight to help organizations navigate the governance, risk and compliance challenges posed by rapidly evolving AI technologies. Together, these capabilities position Bennett Jones to support boards in integrating AI into enterprise strategy with rigor, accountability and effective oversight.
To discuss how our team can assist you, please contact one of the authors.