
Employees are using AI in harmful ways — and companies may be in the dark

Almost half of professionals surveyed have used AI on the job inappropriately, and 63% say they’ve seen other staffers using it inappropriately


Photo: China News Service via Getty Images

Artificial intelligence has hit the workforce like an earthquake, and increasingly, companies are dealing with the aftershocks.

One burgeoning issue is the growth of so-called “shadow AI,” where workers are using AI in ways they shouldn’t, whether intentionally or not.

According to a recent study by the University of Melbourne and KPMG, 47% of career professionals surveyed have used AI on the job inappropriately, and 63% say they’ve seen other staffers using AI inappropriately. Those cases vary, from using AI to cheat on internal company performance assessment tests to feeding sensitive company data into third-party AI tools.

Doing so brings massive risk to companies, the study noted.

“This invisible or shadow AI use doesn’t just exacerbate risks – it also severely hampers an organization’s ability to detect, manage, and mitigate risks,” the report noted.

Companies face an AI-driven illusion of competence

Workplace experts say the real shift with AI is not that employees suddenly became dishonest. The shift is that AI makes taking shortcuts fast, easy, and invisible.

“Before AI, hiding poor work was harder,” said Zahra Timsah, an AI governance leader and CEO of i-GENTIC AI, an agentic AI compliance platform. “Now an employee can generate a polished report in minutes, and managers assume competence. This creates the illusion of productivity.”

For example, Timsah cites an employee who uses AI to generate analysis and presents it confidently, but cannot defend it when questioned. “The company makes decisions based on work nobody truly understands,” Timsah added. “The biggest threat is not cheating on tests. It’s companies quietly losing internal intelligence while believing their teams are thinking independently.”

Other high-profile corporate executives say the data indicates companies are just now seeing the tip of the iceberg on shadow AI usage. Consider the fuller picture from the Melbourne study:

--- 44% of U.S. workers are using AI tools without proper authorization.

--- 46% have uploaded sensitive company information and intellectual property to public AI platforms.

--- 64% admit to putting less effort into their work because they can lean on AI.

--- More than half, 57%, are making mistakes in their work because of unchecked AI use.

--- 53% are concealing their AI use entirely, presenting AI-generated content as their own.

“It isn’t just that people are passing off AI as their own work; they're also poisoning the corporate well by relying on AI slop,” said Nick Misner, COO of Cybrary, an Atlanta-based cybersecurity professional development platform. “While AI accelerates the speed at which we can code, it’s introducing more debt and security vulnerabilities into the organization.”

Misner notes this isn’t an isolated trend; instead, it’s a systemic failure of organizational readiness.

“We’re seeing AI adoption massively outpace governance,” he said. “Gallup’s State of the Global Workplace report tells us that 79% of the global workforce is somewhere between 'doing the minimum' and 'actively disengaged.'”

Consequently, when you hand disengaged workers a powerful tool with no meaningful guidance, they’re not going to use it to become more productive. “They’re going to use it to do the same work with less effort or, worse, to cut corners in ways that create real organizational risk,” he noted.

The threat isn’t just about cheating on tests, although it can be: in a KPMG Australia case reported this week, 28 employees were caught using AI to cheat on internal exams, including a partner who was fined $10,000 for cheating on an AI ethics exam.

“That example perfectly illustrates the irony,” Misner said. “The bigger threat is that organizations are flying blind. If nearly half your workforce is using AI inappropriately and you don’t even know it, your risk exposure is massive, from data leakage to compliance violations to erosion of the skills your people actually need.”

Taking nefarious AI usage out of the shadows

The C-suite must step up with proposals, policies, and penalties to ensure AI is being used ethically at their companies. These strategies should be at the top of their priority list.

Learn from the past

There’s a good case to be made that the Melbourne/KPMG numbers aren’t unique to AI.

“We saw similar patterns when the internet and search engines first entered the workplace,” said Joe Schaeppi, co-founder of Solsten, an AI-based user engagement company in Minneapolis, Minn. “Whenever a powerful new tool appears, misuse is inevitable; that’s human nature.”

As AI adoption grows, Schaeppi said management will likely see more experimentation and gray-area behavior, but as with all technologies, governance and guardrails evolve. “Companies such as Anthropic are already taking a more enterprise-focused approach, building in rules and constraints to reduce risk as the technology matures,” he noted. “If you’re a company and see inappropriate behavior on any tool, the concern should be placed on culture and how you enforce policies and procedures.”

Lean into human supervision

To keep workplace AI engagements in check, management must task an AI analysis team with reviewing the company’s data access and permissions for any data types that are critical to the business’s future.

“Next, synthetic datasets are nothing new and a great way to still model outcomes while leveraging your data,” he said. “Additionally, I’d always involve a human in the loop before anything goes live. Multiple companies have still found AI reporting numbers wrong. Whether it’s ensuring the messaging is still on brand or appropriate, it’s important to keep a human in the loop.”

Be crystal clear on AI usage rules

Companies should also provide approved internal AI tools and set one clear rule: never put confidential or regulated information into public AI systems.

“They should also monitor where sensitive data flows, especially copy-paste into AI tools, which is now a major blind spot most companies completely miss,” Timsah said. “Most importantly, companies must change how they evaluate employees.”

Timsah also encourages company leaders to avoid rewarding polished output alone. “Require employees to explain their reasoning and demonstrate understanding,” she said. “AI can generate answers, but it cannot replace ownership or accountability.”

The first policy Timsah’s team implemented at i-GENTIC was simple and clear: employees could use approved AI tools, but they could not input confidential, client, financial, or proprietary information into public AI systems.

“We focused on clarity, not restriction,” she noted. “This created trust because the employee knew AI use was allowed, but with clear boundaries. It also built accountability, because everyone understood what was safe and what was not.”

Internal company training should focus on practical examples, not vague policies that nobody reads. Employees need to understand clearly what is safe and what is not.

“Using AI to rewrite a generic email is fine,” Timsah said. “Uploading client contracts, financial data, or proprietary information into a public AI tool is not. Using AI to brainstorm ideas is fine. Presenting AI-generated analysis you do not understand as your own work is not.”

When to get outside enforcement help

Employee AI misuse becomes a legal issue when there is intent and harm.

“This includes leaking confidential data, stealing intellectual property, manipulating financial information, or committing fraud with AI assistance,” Timsah noted. “At that point, companies may involve investigators, regulators, or law enforcement.”

Company decision makers should also know that most misuse starts as convenience, not malicious intent, but once it causes real harm, financial exposure, or deception, it crosses into legal territory. “The key distinction is whether the misuse resulted in exposure, loss, or intentional concealment,” Timsah added.

Lastly, focus on training

Experts say organizations should treat AI use like any other high-risk behavior and educate employees on how to use it safely.

Additionally, when educating employees on AI usage, management must make them aware that using AI doesn’t let them off the hook.

“Employees are still responsible for ensuring that the information they upload on AI platforms is accurate and not in violation of any laws,” said Kelsey Szamet, partner at Kingsley Szamet Employment Lawyers. “Employees should also be aware that uploading confidential and proprietary information on AI platforms may result in such information being permanently exposed on the platform.”

From an employment perspective, consistency is key. If one employee is let go for AI misuse and another is not, that creates liability issues of discrimination and retaliation. “The stronger the policy and training process in place, the lower the risk of litigation,” Szamet said.

The larger concern is not that employees will use AI. They will. “The concern is that companies won’t address this ahead of it becoming a problem,” Szamet added.
