Published August 24, 2025

Businesses put at risk when employees use unauthorized AI tools at work

By Ritika Dubey
More workers are engaging with AI tools to help them complete work tasks and increase their productivity. ChatGPT's landing page is seen on a computer screen, Monday, Aug. 4, 2025, in Chicago. (AP Photo/Kiichiro Sato)

An artificial intelligence chatbot could help quickly clean up your presentation moments before an important board meeting. But those quick AI fixes can become a liability for the higher-ups you're trying to impress.

More employees are using AI tools to help them complete work tasks and increase their productivity, but most of the time those tools aren't approved by their companies. The use of unauthorized AI platforms and tools, known as shadow AI, creates the risk that workers could accidentally disclose sensitive internal data on these platforms, leaving the company susceptible to cyberattacks or intellectual property theft.

Companies are often slow to adopt the latest technology, which may push employees to seek third-party solutions such as AI assistants, said Kareem Sadek, a partner in the consulting practice at KPMG in Canada specializing in tech risk.

This so-called shadow AI often seeps in when users are looking for convenience, speed and intuitiveness, Sadek said.

But these unauthorized tools are becoming a headache for Canadian businesses, big and small.

"Companies are struggling to make sure that their intellectual property is maintained and they are not leaking sensitive information about their business practices, about their customers, their user bases," said Robert Falzon, head of engineering at cybersecurity firm Check Point Software Technologies Ltd.

What many AI users don't understand is that whenever they interact with chatbots, their conversations and data are being stored and used to further train those tools, Falzon said.

For example, an employee could share confidential financial statements or proprietary research on unsanctioned chatbots to generate infographics — unaware that the sales numbers are now available to people outside the company. Meanwhile, an outsider could land on that data when researching the same subject on the chatbot, unaware that it wasn't supposed to be publicly accessible.

"There's a chance that the AI might dig back into its resources and training and find that piece of information about your company that talks about the results ... and just nonchalantly provide that to that person," Falzon said.

And hackers are using the same tools as everyone else, Falzon warned.

A July report from IBM and U.S.-based cybersecurity research centre Ponemon Institute found that 20 per cent of the companies it surveyed had suffered a data breach due to security incidents involving shadow AI, seven percentage points more than those reporting incidents involving sanctioned AI tools.

The average cost of a Canadian breach between March 2024 and February 2025 soared 10.4 per cent to $6.98 million from $6.32 million the year before, the report said. 

There's a need to establish governance around AI use at work, KPMG's Sadek said. 

"It's not necessarily the tech that fails you; it is the lack of governance," he said.

That could mean establishing an AI committee with people from across departments, such as legal and marketing, to vet tools and encourage adoption with the right guardrails, Sadek said.

Guardrails should be grounded in an AI framework that aligns with the company's ethics and helps answer tough questions about security, data integrity and bias, among others, he said.

One example could be adopting a zero-trust mindset, Falzon said. That means not trusting any devices or apps that aren't explicitly allowed by the company.

The zero-trust approach reduces risk by limiting what a device will allow an employee to submit to a chatbot, he explained. For example, Falzon said employees at Check Point aren't allowed to input research and development data; if they try, the system blocks the submission and warns them of the risks.

"That's going to help make sure that customers are both educated and understand what risks they take, but also at the back end of it, make sure that those risks are mitigated by technology protection," Falzon said.

Creating awareness is key to smoothing the friction between employers and workers about AI tools, experts say.

Sadek said holding hands-on training sessions and educating employees about the risks of using unsanctioned AI tools can help.

"It significantly reduces the use or holds the users or employees accountable," he said. "They feel accountable, especially if they're educated and have awareness sessions of the risks." 

To keep data contained within internal systems, some companies have started deploying their own chatbots.

Sadek said it's a smart way to tackle unauthorized AI tools.

"That will help (companies) ensure more security and privacy of their company data, and ensure that they're built within the guardrails that they already have within their organization," he said.

Still, internal tools can't completely eliminate cybersecurity risks. 

Researcher Ali Dehghantanha said it took him just 47 minutes to break into a Fortune 500 company's internal chatbot and access sensitive client information during a cybersecurity audit. The company had hired him to evaluate the chatbot's safety and check whether the system could be manipulated into revealing sensitive data.

"Because of its nature, it had access to quite a number of company internal documents, as well as access to communication that different partners were conducting," said Dehghantanha, a professor and Canada Research Chair in cybersecurity and threat intelligence at the University of Guelph. 

He said big banks, law firms and supply chain companies rely heavily on internal chatbots for advice, email responses and internal communications, but many lack proper security and testing.

Companies have to set aside a budget when adopting AI technology or deploying their own internal tools, he added.

"Not only for AI, for any technology, always consider the total cost of ownership," Dehghantanha said. "One part of that cost of ownership is how to secure and protect it.

"For the AI at the moment, that cost is significant," he said.

Companies can't stop staff from using AI anymore, Falzon said, so employers need to provide the right tools for their employees.

At the same time, he said, "they want to be sure that things like data leakage don't occur and that they're not creating a greater risk than the benefit that they offer."

This report by The Canadian Press was first published Aug. 24, 2025.
