Workers Are Secretly Using Generative AI Tools, and It Needs to Stop
Shadow AI Use Is Growing Rapidly
A recent Deloitte report reveals a concerning trend: one in three UK workers is using generative AI tools without their employer’s explicit approval. This unauthorized usage, known as “shadow AI,” raises major concerns about cybersecurity and compliance.
Even as companies rapidly adopt generative AI solutions, many employees still bypass official channels. Nearly 30% are paying out of pocket for tools their companies haven’t sanctioned, and only 20% of users rely on in-house or company-commissioned AI systems.
Why Employees Go Rogue with AI
According to the survey, 40% of workers say they use AI tools without permission because they don’t perceive any risk, while 31% believe their company wouldn’t be able to detect their usage anyway. Together, these attitudes leave a growing share of workplace AI use unmonitored and potentially risky.
Lorraine Barnes, Deloitte’s UK lead for generative AI, commented: “UK workers are taking matters into their own hands when it comes to keeping up to speed with the latest generative AI advances.”
She emphasized that the trend signals a need for companies to invest in official, secure GenAI tools that align with organizational goals. “If companies don’t start building GenAI strategies now, they risk falling behind their employees,” she warned.
Employees Are Enthusiastic About AI
More than 70% of employees who use generative AI at work report feeling excited about the opportunities it brings. They also believe it will enhance their job satisfaction and make daily tasks easier.
Furthermore, a similar proportion is eager to gain new AI-related skills. These workers see generative AI as essential for staying competitive in their careers.
Stacey Winters, Deloitte’s GenAI market lead for Europe, echoed this sentiment. She said, “Businesses should encourage usage, but in a safe and secure environment. GenAI deployments must include clear guardrails and comprehensive training programs.”
The Risks of Shadow AI
Despite its appeal, shadow AI carries substantial risk. A Veritas study found that two in five UK office workers have entered sensitive data, including customer, financial, or sales information, into public AI tools. Shockingly, 60% of these individuals were unaware that doing so could violate data privacy laws.
A separate survey by WalkMe uncovered that nearly 40% of UK councils allow employees to use AI without a responsible use policy. This lack of regulation not only endangers privacy but also exposes organizations to reputational and legal damage.
The Path Forward: Safe, Ethical AI Adoption
To mitigate these risks, Deloitte advises companies to:
- Develop a clear generative AI strategy
- Conduct regular audits
- Prioritize ethical guidelines
- Engage employees in training and responsible use policies (a brief illustrative sketch follows this list)
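To make the last two points more concrete, the sketch below shows one minimal form a technical guardrail could take: a pre-submission check that flags obviously sensitive content before a prompt ever reaches a public GenAI tool, and logs the attempt for later audit. This is not part of Deloitte’s guidance; the patterns, function names, and logging setup are illustrative assumptions, and a real deployment would rely on a proper data-loss-prevention service rather than a handful of regular expressions.

```python
import re
import logging

# A few obvious kinds of sensitive data. Illustrative only -- a real guardrail
# would use a dedicated data-loss-prevention service, not hand-rolled regexes.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "UK National Insurance number": re.compile(
        r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b", re.I
    ),
}

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
audit_log = logging.getLogger("genai_guardrail")


def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


def submit_prompt(prompt: str) -> bool:
    """Block prompts that appear to contain sensitive data; log every attempt for audit."""
    findings = check_prompt(prompt)
    if findings:
        audit_log.warning("Prompt blocked: possible %s detected", ", ".join(findings))
        return False
    audit_log.info("Prompt allowed")
    # Here a company-sanctioned GenAI service would be called with the vetted prompt.
    return True


if __name__ == "__main__":
    submit_prompt("Summarize this quarter's sales strategy in three bullet points.")
    submit_prompt("Draft a reply to jane.doe@example.com about her overdue invoice.")
```

The broader point is simply that redaction and audit logging can be automated and built into sanctioned tools, rather than left entirely to individual employees’ judgment.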
Companies that act now will not only avoid the pitfalls of shadow AI but also unlock the full potential of generative AI in a safe, compliant, and productive manner.