Microsoft warning of rogue AI office use

Issuing a warning: Taheera Lovell, founder and CEO of The TLC Group, said using rogue AI apps in the workplace can expose companies to data breaches (Photograph supplied)

An increasing number of employees are sneaking their own AI subscriptions through the office back door, and that poses a significant risk to their companies.

According to Microsoft’s 2024 Work Trend Index Annual Report, 78 per cent of AI users now bring their own AI tools to work, a practice known as “bring your own AI”, “shadow AI” or “rogue AI”.

Rogue AI may include anything from transcription apps and generative AI used to create pictures to large language models that can answer questions with 95 per cent accuracy.

“Shadow AI is where people find ways to circumvent the system,” said Bermudian Taheera Lovell, chief executive and founder of The TLC Group, a tech education company based in Britain. “Humans are going to be human.”

In a typical scenario, a time-pressed employee uses an unsanctioned AI programme to read and analyse corporate spreadsheets.

“That information can be used, possibly by the developers of the tool, and you may not know where that information goes after that,” Ms Lovell said.

By using rogue AI programmes, employees may expose their company to security breaches and violations of privacy regulations. A data leak could create legal, customer and reputational risk for the organisation.

Last July, a Disney employee caused a massive data leak when he downloaded an AI tool from GitHub.

“He thought it would help with productivity,” Ms Lovell said. “Instead, it led to malware that compromised login credentials and exposed over 44 million internal messages, including passport numbers and sensitive customer data.”

She said this was a clear example of how innocent use can spiral into a serious security incident.

Rise in interest: The TLC Group co-founder and chief disruption officer Cha’Von Clarke-Joell said difficult conversations need to be had around artificial intelligence (Photograph supplied)

Cha’Von Clarke-Joell, The TLC Group co-founder and chief disruption officer, said most employees are not bringing in their own AI for nefarious reasons, quite the opposite.

The former Bermuda assistant privacy commissioner said: “They are doing it thinking that they are able to produce better quality work and outputs so that they can have better opportunities.”

According to the Microsoft report, some employees are doing it because they have not been told otherwise: their corporate technology policy lags behind.

In other cases, shadow AI users are driven by a feeling that the pace of work has accelerated too much.

Ms Lovell said The TLC Group has now embedded discussion of shadow AI use in its workshops.

She described herself as an “operational evangelist”.

“It is important to not just throw new policies and procedures at an organisation and walk away,” she said. “You need to give employees the tools and time to build a culture around them. If you don’t give people the tools, time and structure they need, you create additional risk.”

In August, further provisions of the European Union’s AI Act, the first comprehensive legal framework addressing AI risk, take effect.

The legislation sets out clear, risk-based rules for AI developers and users regarding specific uses of AI, and assigns applications to risk categories such as unacceptable, high-risk and low-risk.

Ms Clarke-Joell thought it likely that Bermuda would also attempt to regulate AI in the next decade.

“We are not sure of what is happening in terms of the Government or the steps that they are taking at the moment,” she said. “However, we do see the rise of interest in AI.”

She said there were certain job roles that AI was supporting more than others, and that had some people worried.

“Difficult conversations need to be had to ensure that we are protecting humanity in the long run,” she said.

Correction: this article has been corrected to clarify that a Disney employee downloaded an AI tool from GitHub

Published March 27, 2025 at 7:58 am (Updated March 27, 2025 at 9:22 am)
