Employees risk data security using AI tools – TELUS Digital survey

VANCOUVER, CANADA — A recent survey by TELUS Digital reveals that despite company policies, most enterprise employees are entering sensitive information into public artificial intelligence (AI) assistants, creating security risks.
The survey found that 57% of employees admit to inputting high-risk information into tools like ChatGPT, Microsoft Copilot, and Google Gemini.
TELUS Digital’s AI at Work survey was conducted in January and gathered responses from 1,000 U.S. adults aged 18 and older.
Shadow AI creates security blind spots
Nearly 68% of enterprise employees access these AI tools through personal accounts, fueling the rise of “shadow AI” that hides their use, and the risks that come with it, from IT and security teams.
The types of sensitive information being shared include personal data (31%), product or project details (29%), customer information (21%), and even confidential company financial information (11%). This occurs despite 29% of employees acknowledging their companies have policies prohibiting such practices.
The survey also revealed that only 24% of companies require mandatory AI assistant training, while 44% of employees report their company either lacks AI guidelines or they’re unaware of existing policies.
Productivity benefits drive continued use
Despite the risks, employees continue using AI tools because of productivity benefits. The survey found that 60% of employees say AI helps them work faster, while 57% report it makes their job easier. As a result, 84% want to continue using AI assistants at work.
“Generative AI is proving to be a productivity superpower for hundreds of business tasks,” said Bret Kinsella, General Manager, Fuel iX at TELUS Digital.
“If their company doesn’t provide AI tools, they’ll bring their own, which is problematic. Organizations are blind to the risks of shadow AI, even while they are secretly benefitting from productivity gains.”