5 things to avoid asking AI, according to security experts
Security experts are raising red flags about the questions people are asking their AI chatbots and agents

Many people are settling into intriguing yet ill-defined relationships with artificial intelligence agents and chatbots.
In doing so, many enthusiasts may not realize the security risks they’re taking by engaging with AI, and this could cost them a lot.
“Absolutely, it’s a risk,” said Steve Cobb, CISO of SecurityScorecard, a cybersecurity ratings firm. “Both data and personal security risks become likely if inappropriate information is shared with an AI bot or agent.”
For example, suppose a consumer gives an AI bot a copy of their financial information and asks it to help prepare a tax return.
“This would potentially expose your personal financial information to the Internet,” Cobb said. “It provides opportunities for threat actors to steal that data for credit fraud, perform sophisticated phishing attacks, and gather other personal information that might give them intelligence that they could use to further exploit you and your family.”
The data shines a light on AI bot engagement risks
C-suite executives are more wary than most consumers about AI agent threats. According to AvePoint’s annual survey of business leaders on the state of AI, more than 75% flagged AI-related security breaches as a concern.
The same survey found that most organizations (85.7%) are hitting the brakes on rolling out generative-AI tools due to two main problems: poor data quality and data security concerns.
- Inaccurate AI output due to outdated, irrelevant, or hallucinated data is the biggest risk they flagged (at 68.7%). “This reinforces the fundamental argument AvePoint made in their 2024 research, which shows that without strong governance and information management disciplines, AI would be hampered from the start,” the study noted.
- Executives' second-greatest concern (at 68.5%) is the unauthorized exposure of sensitive data due to AI, AvePoint reported.
Those figures become even more alarming given how U.S. consumers regularly use AI bots.
A new study from Fractl shows 29% say AI chatbots have changed how often they talk to real people about emotional issues, with 1 in 5 using them as therapists. Meanwhile, 1 in 3 users have shared personal secrets with their chatbot, and 24% say AI chatbots understand them better than friends or family.
As users grow more comfortable with AI bots, expect those figures to climb, and expect more fraud, theft, and other harms to follow as AI agent engagements multiply.
5 questions security experts warn not to ask AI
Anyone considering a back-and-forth conversation with an AI chatbot or agent should go into the experience with their eyes wide open and with their guard up, security experts say.
“Anything uploaded to AI should be viewed as being placed in the public domain,” said Tony Anscombe, chief security evangelist at ESET, a cybersecurity software firm. “While this isn’t strictly true, it does leave your control and thus could become public at some time.”
With that warning in mind, AI defense experts put the following queries in the ‘no-go’ zone when talking to an AI bot.
The query: “Given my diagnosis and this full medical report, what treatment should I ask for?”
The problem: Sharing private health data and history with an AI agent is a risky proposition, said Bobby Ford, chief strategy and experience officer at Doppel, an AI-native social engineering defense platform. “This question can drive discrimination, predatory targeting, and invasive profiling if tied back to you,” Ford said.
The query: “Here’s our unreleased product roadmap and security architecture; what threats do you see?” or “Here’s a CSV of customer names, emails, and purchases; segment this.”
The problem: “This query effectively exfiltrates trade secrets and regulated data into a third-party system,” Ford noted. That can expose your company’s confidential information and put your own job and career at risk.
The query: “I live at 123 Oak Lane and leave daily at 7:30 am; what’s the safest route?” or “My kids go to Lincoln Elementary at 8:15 am; plan our schedule.”
The problem: Releasing home address information, along with routine and family details, could fall into the hands of bad actors and lead to fraud, theft, or even bodily harm. “This question increases stalking and physical safety risk if that data is ever correlated with your identity,” Ford warned.
The query: “Here’s my info. Can you recommend a great credit card/bank/stockbroker?”
The problem: Normally, you wouldn’t share personal information with a stranger, but an AI chatbot may not feel as threatening, even to consumers who view themselves as tech-savvy.
“Let’s say a user asks an AI to compare credit card interest rates and includes account numbers or login details ‘for accuracy,’” said Craig Miller, a partner at DEI Equity Partners and a former CIO who’s advised major brands such as Pepsi, Planet Fitness, and Bank of America $BAC. “AI platforms are not designed to securely handle credentials. This can expose users to fraud, account takeover, or identity theft. No legitimate AI use case requires authentication data.”
The query: “I suspect I’ve been targeted by an online fraudster. Here’s the email/text — can you give me an assessment?”
The problem: People, especially professionals, should never upload suspicious files or potential malware to an AI bot for analysis. “You may be handing over a zero-day exploit or sensitive logs to a third party,” said Carl Froggett, CIO at Deep Instinct, an AI-focused security firm. “That work belongs in a local, sandboxed environment.”
Once that data leaves your control, Froggett said, it may be stored, reviewed, or exposed in ways you didn’t intend. “The safest assumption is to treat public AI like a conference room with the door propped open,” he noted.
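Part of keeping that triage local is generating your own indicators. As one hedged illustration (a generic local-analysis step, not Froggett’s specific workflow), the Python sketch below computes a SHA-256 hash of a suspicious file entirely on your own machine; the file path is a hypothetical example.

```python
import hashlib
from pathlib import Path

def local_file_hash(path: str) -> str:
    """Compute a SHA-256 hash of a suspicious file on your own machine,
    producing an indicator you can research without the file's contents
    ever leaving your control."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        # Read in chunks so large files don't have to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical path to a quarantined attachment.
print(local_file_hash("quarantine/suspicious_invoice.pdf"))
```

The resulting hash can then be looked up through your security team’s tooling without the file itself ever being uploaded to a public AI service.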
Take direct action against AI bot information threats
To further safeguard your data and yourself from harm, start thinking of any AI tool as a semi-public workspace.
“If you wouldn’t put the information in a shared document or email it to a broad audience, don’t give it to an AI application,” said Numa Dhamani, head of machine learning at iVerify and the co-author of ‘Introduction to Generative AI.’
If you have to engage with an AI bot, Dhamani advises using abstraction instead of real data. “For example, replace real names, numbers, and identifiers with placeholders like ‘Company X’ or ‘$X per month,’” she said. “AI tools can be powerful, but they are not private, infallible, or accountable like regulated professionals. The safest way to use them is to assume that anything you share could persist.”
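As a rough sketch of that abstraction step (the patterns and the abstract_prompt helper below are illustrative assumptions, not any product’s API), a few regular expressions can swap obvious identifiers for placeholders before a prompt ever leaves your machine:

```python
import re

# Illustrative patterns only; a real redaction pass would need far
# broader coverage (names, addresses, account numbers, and so on).
PLACEHOLDERS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # U.S. Social Security numbers
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD_NUMBER]"),  # likely payment card numbers
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),       # email addresses
    (re.compile(r"\$\d[\d,]*(?:\.\d{2})?"), "$X"),             # dollar amounts
]

def abstract_prompt(text: str) -> str:
    """Swap obvious identifiers for neutral placeholders before the
    text is sent anywhere."""
    for pattern, placeholder in PLACEHOLDERS:
        text = pattern.sub(placeholder, text)
    return text

print(abstract_prompt("I pay $450.00 a month on card 4111 1111 1111 1111."))
# -> "I pay $X a month on card [CARD_NUMBER]."
```

A pattern list like this is deliberately crude; it catches formats, not context, so treat it as a seatbelt rather than a guarantee.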
Adjust your day-to-day habits with AI agents, too.
“For ad hoc personal use, stay signed out or use an AI-only account with minimal identifying details and no linkage to your primary email or social logins,” Ford said. “At work, route usage through an enterprise-managed AI stack with data loss prevention, logging, and retention controls instead of freestyling with public tools.”
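To give a flavor of what those enterprise controls do (the gateway_check function and its deny-list below are hypothetical stand-ins, not any vendor’s API), a minimal pre-flight check in such a gateway might log every outbound prompt and block any that trips a data loss prevention rule:

```python
import logging
import re

# Illustrative deny-list only; a real DLP policy would be far broader
# and centrally managed, not hard-coded in a script.
BLOCKED_PATTERNS = {
    "possible credential": re.compile(r"(?i)\b(password|passcode|api[_ ]?key|secret)\s*[:=]"),
    "possible U.S. SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

def gateway_check(user: str, prompt: str) -> bool:
    """Log every outbound prompt and block any that trips a deny-list
    rule, mimicking the logging and data loss prevention controls an
    enterprise-managed AI stack would enforce."""
    for label, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(prompt):
            log.warning("blocked prompt from %s: %s detected", user, label)
            return False
    log.info("allowed prompt from %s (%d chars)", user, len(prompt))
    return True  # safe to forward to the sanctioned model endpoint

if gateway_check("jdoe", "Our admin password: hunter2"):
    pass  # forward the prompt to the approved AI provider here
```

In a real deployment the policy would live in a managed DLP engine rather than a script, but the shape of the control is the same: inspect, log, then forward or block.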