
Hiding in the Shadows: The Perils of Shadow AI on Your Organization

Sébastien Gittens, Stephen Burns and Ahmed Elmallah
January 10, 2026

Artificial Intelligence (AI) continues to transform workplaces at a rapid pace; however, alongside the often-sanctioned deployment of tools like Microsoft Copilot or Google Gemini, a quieter trend has emerged: shadow AI. For organizations, this is not just an IT issue; it’s a compliance, governance and liability challenge that they must acknowledge and take meaningful action to address.

What Is Shadow AI?

Shadow AI occurs when employees adopt AI tools to accelerate or automate tasks, such as drafting documents, summarizing meetings or analyzing data, without their organization's knowledge or approval.

Notably, the prevalence of shadow AI is reportedly significant: a recent report from MIT’s Project NANDA states that only 40% of surveyed companies purchased an official LLM subscription, yet employees at over 90% of the companies surveyed reported regularly using personal AI tools.

Why Do Employees Turn to Shadow AI?

Employees’ reliance on shadow AI is likely driven by a combination of practical and organizational factors. For example, employees may turn to shadow AI to cope with increasing productivity pressures and to meet demanding timelines more efficiently. Additionally, consumer AI platforms (like ChatGPT) are readily accessible, making them an easy and immediate option for employees seeking quick support or enhanced efficiency. In environments where approved or enterprise-grade AI solutions are limited, slow to deploy or non-existent, employees may default to shadow AI to fill the gap. This behaviour may be further reinforced by: (i) unclear internal policies; (ii) official tools that do not adequately address employee needs; (iii) limited training on official tools; and (iv) a general lack of awareness about the risks of using unauthorized AI systems. In some cases, employees are also motivated by a desire to innovate, experiment or keep pace with peers who are informally adopting unauthorized tools, all of which accelerates reliance on shadow AI.

Key Risks for Organizations

While employees may turn to unapproved AI tools with no ill intent, the consequences for the organization can be significant.

For example, recent research suggests that more than 80% of shadow AI (LLM) use within an organization may involve employees using an unauthorized tool to obtain practical guidance, seek information or improve their communications.

As a result, where employees rely on shadow AI tools, the organization may be exposed to the risks that:

  • the employee is relying on a shadow AI tool for decision making;
  • the employee is relying on inaccurate or misleading information or outputs (e.g. some generative AI tools may "hallucinate" or "confabulate"); and
  • the employee is failing to develop the skills and experience required to succeed in their role and advance within the organization.

To illustrate, an employee who uses shadow AI to summarize materials they do not themselves understand, in support of a decision they must make, is unlikely to identify errors or omissions in the summary, or the risks thereby introduced into the decision-making process.

These risks have wide-ranging implications for the organization in respect of governance, controls, audit and employee development.

Other risks to an organization include:

  • employees rarely perform the necessary due diligence on an unapproved AI tool before using it, leaving open questions about the vendor, the tool and its performance and effectiveness;
  • unapproved tools often lack adequate safeguards, creating serious security vulnerabilities;
  • when an organization's confidential information (e.g. meeting transcripts, contracts, client details, or trade secrets) is uploaded into shadow AI, organizations lose control over how any such data may be used, stored, protected or disclosed;
  • the use of unapproved AI tools may create regulatory non-compliance. From a privacy perspective, for example, non-compliance can arise from: (i) the unauthorized disclosure of personal information to an unapproved, third-party AI provider; (ii) the unauthorized use of such information by that provider; and (iii) the unauthorized transfer of such information by that provider to other jurisdictions without the necessary consents and/or privacy impact assessments;
  • if a provider of an unapproved tool suffers a breach, the organization may not even be aware that personal information under its control has been compromised;
  • as we discussed in a prior blog, vendor agreements frequently provide minimal assurances regarding the performance or reliability of the application, and include sweeping limitations of liability through which vendors seek to exclude virtually all responsibility;
  • shadow AI bypasses an organization's controls, creating security blind spots that the organization may not be able to effectively detect or monitor;
  • shadow AI can lead to technical inefficiencies (e.g. through the creation of fragmented data flows, redundant integrations, and unsupported endpoints that disrupt the coherence and scalability of the organization’s architecture);
  • shadow AI can also lead to financial inefficiencies (e.g. an organization may be paying for approved solutions that employees are ignoring in favour of unauthorized alternatives); and
  • finally, a single incident involving, for example, an organization's confidential information, can: (i) create reputational harm; and (ii) invite costly litigation.

Strategies to Mitigate the Risks of Shadow AI

Organizations should adopt a proactive and structured approach to managing the risks associated with unauthorized AI usage. This begins with implementing clear governance policies that define which AI tools are approved, the circumstances under which they may be used, and strict rules prohibiting the upload of sensitive or confidential data to unapproved platforms. These policies should be integrated into the organization’s broader compliance and governance frameworks to ensure consistency and enforceability.

Strengthening training and awareness programs is equally important. Mandatory AI literacy initiatives should educate employees on their responsibilities when interacting with AI tools and highlight the legal, regulatory and organizational implications of shadow AI. Building awareness fosters responsible behaviour and reduces inadvertent risk.

Visibility matters: without monitoring, policies may have limited utility. Accordingly, organizations should implement tools that track AI usage across the enterprise and conduct regular audits to detect the unauthorized use of shadow AI. Early detection enables timely remediation and reduces the likelihood of systemic risk.

Finally, reducing reliance on unapproved platforms requires offering enterprise-grade alternatives. Providing sanctioned AI tools with robust security and data governance controls, while ensuring they are user-friendly, accessible and efficient, encourages adoption and minimizes the temptation for employees to turn to shadow AI.

Conclusion

Shadow AI is a structural risk that can expose organizations to, among other things, loss of confidential information, regulatory non-compliance, reputational harm and costly data breaches. The solution is not to ban AI outright: such a measure would (i) be unlikely to reduce the incidence of shadow AI; and (ii) hinder the organization's potential to leverage the benefits of this technology. Instead, the most effective approach is controlled enablement, transparency and strong governance.

By taking proactive steps, organizations can empower their employees to harness AI responsibly. Such measures should strike the right balance between innovation and compliance, ensuring the organization remains competitive and secure in an AI-driven world.

If you would like guidance on designing governance frameworks or reviewing your current AI policies, we invite you to contact one of the authors of this blog post.

