Beneath the surface of GenAI’s outputs lies a massive, mostly unregulated engine powered by data – your data. And whether it’s through innocent prompts or habitual oversharing, users are feeding these machines with information that, in the wrong hands, becomes a security time bomb.
A recent Harmonic report (https://apo-opa.co/3Sw1K4N) found that 8.5% of employee prompts to generative AI tools like ChatGPT and Copilot included sensitive data – most notably customer billing and authentication information – raising serious security, compliance, and privacy risks.
Since ChatGPT’s 2022 debut, generative AI has exploded in popularity and value – surpassing $25 billion in 2024 (https://apo-opa.co/3Z7wOf2) – but its rapid rise brings risks many users and organisations still overlook.
“One of the privacy risks when using AI platforms is unintentional data leakage,” warns Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa. “Many people don’t realise just how much sensitive information they’re inputting.”
Your data is the new prompt
It’s not just names or email addresses that get hoovered up. When an employee asks a GenAI assistant to “rewrite this proposal for client X” or “suggest improvements to our internal performance plan,” they may be sharing proprietary data, customer records, or even internal forecasts. If done via platforms with vague privacy policies or poor security controls, that data may be stored, processed, or – worst-case scenario – exposed.
And the risk doesn’t end there. “Because GenAI feels casual and friendly, people let their guard down,” says Collard. “They might reveal far more than they would in a traditional work setting – interests, frustrations, company tools, even team dynamics.”
In aggregate, these seemingly benign details can be stitched into detailed profiles by cybercriminals or data brokers – fuelling targeted phishing, identity theft, and sophisticated social engineering.
A surge of niche platforms, a wave of new risks
Adding fuel to the fire is the rapid proliferation of niche AI platforms. Tools for generating product mock-ups, social posts, songs, resumes, or legalese are sprouting up at speed – many of them developed by small teams using open-source foundation models. While these platforms may be brilliant at what they do, they may not offer the hardened security architecture of enterprise-grade tools. “Smaller apps are less likely to have been tested for edge-case privacy violations or undergone rigorous penetration tests and security audits,” says Collard. “And many have opaque or permissive data usage policies.”
Even if an app’s creators have no malicious intent, weak oversight can lead to major leaks. Collard warns that user data could end up in:
● Third-party data broker databases
● AI training sets without consent
● Cybercriminal marketplaces following a breach
In some cases, the apps themselves might be fronts for data-harvesting operations.
From individual oversights to corporate exposure
The consequences of oversharing aren’t limited to the person typing the prompt. “When employees feed confidential information into public GenAI tools, they can inadvertently expose their entire company,” (https://apo-opa.co/3Hked9o) explains Collard. “That includes client data, internal operations, product strategies – things that competitors, attackers, or regulators would care deeply about.”
Unauthorised shadow AI remains a major concern, but semi-shadow AI (paid tools adopted by business units without IT oversight) is an increasingly risky blind spot. According to the Harmonic report, free-tier generative AI apps such as ChatGPT account for 54% of sensitive data leaks, largely due to permissive licensing and a lack of controls.
So, what’s the solution?
Responsible adoption starts with understanding the risk – and reining in the hype. "Businesses must train their employees on which tools are OK to use, and on what is safe to input and what isn't," says Collard. "And they should implement real safeguards, not just policies on paper.
“Cyber hygiene now includes AI hygiene.”
"This should include restricting unsupervised access to generative AI tools, or allowing only those approved by the company."
“Organisations need to adopt a privacy-by-design approach (https://apo-opa.co/3Ze1hbj) when it comes to AI adoption,” she says. “This includes only using AI platforms with enterprise-level data controls and deploying browser extensions that detect and block sensitive data from being entered.”
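To make that last safeguard concrete, here is a minimal, hypothetical sketch in Python of the kind of pattern-based check a data-loss-prevention browser extension or gateway might run before a prompt leaves the organisation. The patterns, function names and example prompt are illustrative assumptions, not the workings of any specific product mentioned in this article.

```python
# Illustrative sketch only: a simplified, regex-based check of the kind a
# DLP browser extension or proxy might apply before a prompt is submitted
# to a public GenAI tool. Patterns and names here are hypothetical examples.
import re

SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),           # card-like digit runs
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),     # e-mail addresses
    "api_key": re.compile(r"\b(?:sk|pk|api)[_-][A-Za-z0-9]{16,}\b"), # common key prefixes
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def is_safe_to_send(prompt: str) -> bool:
    """Block the prompt if any pattern matches; a real tool would also log and alert."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
        return False
    return True

# Example: this prompt would be blocked before reaching a public GenAI service.
is_safe_to_send("Rewrite this invoice reminder for client X, card 4111 1111 1111 1111")
```

A production tool would go much further, covering more data types, redacting rather than simply blocking, and logging incidents for compliance teams, but the principle of inspecting prompts before they reach a public GenAI service is the same.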
As a further safeguard, she believes internal compliance programmes should align AI use with both data protection laws and ethical standards. “I would strongly recommend companies adopt ISO/IEC 42001 (https://apo-opa.co/3HmoD8l), an international standard that specifies requirements for establishing, implementing, maintaining and continually improving an Artificial Intelligence Management System (AIMS),” she urges.
Ultimately, by balancing productivity gains with data privacy and customer trust, companies can adopt AI responsibly.
As businesses race to adopt these tools to drive productivity, that balance – between ‘wow’ and ‘whoa’ – has never been more crucial.
Distributed by APO Group on behalf of KnowBe4.