Dive Brief:
- Unmonitored artificial intelligence tools are making data breaches costlier, according to a new report from IBM.
- One in five organizations surveyed said they’d experienced a cyberattack because of security issues with “shadow AI” — AI tools used by employees without IT approval or oversight — and those attacks cost an average of $670,000 more than breaches at firms with little or no shadow AI, IBM said in its annual Cost of a Data Breach report.
- According to the report, while only 13% of organizations reported breaches involving AI tools, 97% of those organizations “lacked proper AI access controls.”
Dive Insight:
As security leaders grapple with how to oversee their companies’ new AI platforms, IBM’s report illustrates the potential consequences of not taking AI security seriously enough.
One of the report’s most notable findings concerns how often weak authentication controls factored into hacks of businesses’ AI platforms. According to IBM, the most common origin point for these attacks was a supply-chain intrusion, with hackers accessing the AI tool through “compromised apps, APIs or plug-ins.” The finding underscores the importance of applying basic security protections to AI tools just as to other business platforms, including zero-trust principles such as network segmentation.
After penetrating a company’s AI platform, hackers frequently went on to compromise other data stores (60% of cases) and occasionally caused operational disruptions to critical infrastructure (31% of cases).
Despite clear evidence that strict attention to the security of AI tools can prevent costly breaches, businesses aren’t rushing to implement governance programs. Sixty-three percent of companies that experienced a breach said they didn’t have an AI governance policy, although some were developing one. Even companies with policies often had incomplete ones: IBM found that fewer than half of such organizations “have an approval process for AI deployments,” and 62% of them failed to implement strong access controls on their AI tools.
Only 34% of organizations with AI governance policies regularly check their networks for unsanctioned tools, according to the report, a finding that helps explain the prevalence of the shadow AI associated with increased breach costs.
Meanwhile, hackers continue to find generative AI valuable for launching attacks. “On average, 16% of data breaches involved attackers using AI, most often for AI-generated phishing (37%) and deepfake impersonation attacks (35%),” IBM said. The company previously reported that generative AI reduced the time needed to write a convincing phishing email from 16 hours to five minutes.
IBM said its report was based on 470 interviews with “individuals at 600 organizations that suffered a data breach between March 2024 and February 2025.”