NATIONAL HARBOR, Md. — Artificial intelligence is poised to transform the work of security operations centers, but experts say humans will always need to be involved in managing companies’ responses to cybersecurity incidents — as well as policing the autonomous systems that increasingly assist them.
AI agents can automate many repetitive and complex SOC tasks, but for the foreseeable future, they will have significant limitations, including an inability to replicate unique human knowledge or understand bespoke network configurations, according to experts who presented here at the Gartner Security and Risk Management Summit.
The promise of AI dominated this year’s Gartner conference, where experts shared how the technology could make cyber defenders’ jobs much easier, even if it has a long way to go before it can replace experienced humans in a SOC.
“As the speed, the sophistication, [and] the scale of the attacks [go] up, we can use agentic AI to help us tackle those challenges,” Hammad Rajjoub, director of technical product marketing at Microsoft, said during his presentation. “What’s better to defend at machine speed than AI itself?”
A silent partner
AI can already help SOC staffers with several important tasks, security experts said in presentations here. Pete Shoard, a vice president analyst at Gartner, said AI can help people locate information by automating complex search queries, write code without “having to learn the language” and summarize incident reports for non-technical executives.
But automating these activities carries risks if it’s mishandled, Shoard said. SOCs should review AI-written code with the same “robust testing processes” applied to human-written code, he said, and employees must review AI summaries so they don’t “end up sending nonsense up the chain” to “somebody who’s going to make a decision” based on it.
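To make Shoard’s “generate, then review” advice concrete, here is a minimal sketch of how a SOC might gate an AI-drafted search query before anyone runs it. Everything in it is illustrative: the query syntax is generic SIEM-style, and call_llm is a stand-in for whichever model a team has approved, not any vendor’s API.

```python
import re

def call_llm(prompt: str) -> str:
    """Placeholder for whichever model the SOC has approved -- not a real API."""
    # A real implementation would call an LLM here; the stub returns a canned
    # query so the sketch runs end to end.
    return 'index=auth action=failure user="svc-backup" | stats count by src_ip'

def draft_query(question: str) -> str:
    """Ask the model to turn an analyst's plain-English question into a query."""
    prompt = ("Translate this analyst question into one read-only search query. "
              f"Question: {question}")
    return call_llm(prompt).strip()

def review_query(query: str) -> bool:
    """Gate the draft the same way human-written queries are gated: read-only,
    nothing that alters data or sends results anywhere."""
    forbidden = re.compile(r"\b(delete|drop|outputlookup|sendemail)\b", re.IGNORECASE)
    return bool(query) and not forbidden.search(query)

if __name__ == "__main__":
    draft = draft_query("Which source IPs failed to log in as the backup account?")
    if review_query(draft):
        print("Approved for analyst review:", draft)
    else:
        print("Rejected by policy gate:", draft)
```

The review step is Shoard’s point in miniature: an AI draft passes through the same policy gate a human draft would before it touches production data.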
In the future, AI might even be able to automate the investigation and remediation of intrusions.
Most AI SOC startups currently focus on using AI to analyze alerts “and reduce the cognitive burden” on humans, said Anton Chuvakin, a senior staff security consultant in the Office of the CISO at Google Cloud. “This is very worthwhile,” he added, but “it’s also a very narrow take on the problem.” In the far future, he said, “I still want the machines to remediate, resolve certain issues.”
Some IT professionals might “freak out” about the prospect of letting AI loose on their painstakingly customized computer systems, Chuvakin said, but they should prepare for a future that looks something like that.
“Imagine a future where you have an agent that's working on your behalf, and it's able to protect and defend even before an attack becomes possible in your environment,” Microsoft’s Rajjoub said during his agentic AI presentation.
Rajjoub predicted that within six months, AI agents will be able to reason on their own and automatically deploy various tools on a network to achieve their human operators’ specified goals. Within a year and a half, he said, these agents will be able to improve and modify themselves in pursuit of those goals. And within two years, he predicted, agents will be able to modify the specific instructions they’ve been given in order to achieve the broader goals they’ve been assigned.
“It's not two, three, four, five, six years from now,” he said. “We're literally talking about weeks and months.”
Limitations and risks
But as AI agents take on more tasks, monitoring them will become more complicated.
“Do we really think our employees can keep up with the pace of how agents are being built?” said Dennis Xu, a research vice president at Gartner. “It’s likely that we are never going to be able to catch up.”
He proposed a bold solution: “We need to use agents to monitor agents. But that’s further out on the time horizon.”
Many analysts urged caution in deploying AI in the SOC. Chuvakin described several categories of tasks, some “plausible but risky” and others that he would “flat-out refuse” to believe AI could accomplish in the near- to medium-term future.
In the risky category, Chuvakin listed autonomous tasks like patching legacy systems, responding to intrusions and attesting to regulatory compliance. “I've seen people who use consumer-grade ChatGPT to fill [out] compliance questionnaires,” he said. “I wish them all the luck in the world.”
Tasks that Chuvakin said he can’t imagine AI accomplishing anytime soon include strategic risk analysis, crisis communications and threat hunting against top-tier nation-state adversaries. Fighting advanced hacker groups “is a human task,” he said, “because ultimately, as of today, humans still outsmart machines.”
Gartner’s Shoard noted that using AI to create tabletop exercises could make staffers overly dependent on AI to warn them about evolving threats, while using AI to create threat detection queries might diminish employees’ investigative skills. “You're going to end up with underdeveloped staff,” he said, “staff that over-depend on things like AI.”
Preserving ‘tribal knowledge’
AI will never replace humans in a SOC, multiple experts said, because human judgment is an essential part of analyzing and responding to security incidents.
“A lot of things we do in a real SOC … involve things that are tribal knowledge,” Chuvakin said, referring to practices that aren’t formally documented. AI will struggle to perform these activities — Chuvakin said he’s seen a lot of models recommend actions that make no sense for the specific networks in which they’re operating. In particular, he said, AI still can’t write threat-detection rules tailored to highly customized legacy IT environments “because of all the peculiarities” in how they’re set up.
Chuvakin urged companies on the receiving end of startups’ “AI-SOC-in-a-box” pitches to “ask them about how this magic would [address] things that are in human heads.”
Even so, AI can augment SOC analysts’ skills and capabilities. Shoard called it “a massive force multiplier” for a SOC workforce, but he warned companies not to rely too much on it.
“If you think you can sack your SOC staff just because you've suddenly bought an AI function, I think you're going to be soundly disappointed,” Shoard said. “AI won't replace your security staff, so use it to enhance them [and] make them better in their jobs.”
In AI we (need to) trust
In the SOC of the future, humans won’t just work alongside AI agents, experts said. They’ll also need to monitor those agents.
“We don't want complete autonomy,” said TIAA CISO Upendra Mardikar. “We have to have a human in the loop.”
Those humans will need to ensure that AI agents’ actions are auditable and controlled by company policies, experts said. Jose Veitia, director of information security at Red Ventures, said businesses should “make sure all the actions are validated.”
Designing an automated system requires feeding it the right data. “If we allow a machine simply to make the decisions for us,” Gartner’s Shoard said, “then we've got to trust that it has all of the relevant information to make that decision effectively.”
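Taken together, those comments sketch a control pattern: validate each agent action against policy, keep a human in the loop for high-impact steps, and record everything for audit. Below is a minimal, hypothetical illustration of that pattern; the action names, fields and policy tables are invented for the example, not drawn from any product.

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical policy tables -- real ones would come from company policy,
# not be hard-coded in the agent runtime.
ALLOWED = {"quarantine_host", "disable_account", "block_ip"}
NEEDS_HUMAN = {"disable_account"}  # actions that always wait for a person

@dataclass
class AgentAction:
    agent_id: str
    action: str
    target: str
    evidence: str  # what the agent saw -- Shoard's "relevant information"

def validate(act: AgentAction, human_approved: bool = False) -> bool:
    """Allow the action only if policy permits it; always write an audit record."""
    permitted = act.action in ALLOWED and (
        act.action not in NEEDS_HUMAN or human_approved)
    record = {"ts": time.time(),
              "decision": "allow" if permitted else "deny",
              "human_approved": human_approved,
              **asdict(act)}
    print(json.dumps(record))  # stand-in for an append-only audit log
    return permitted

if __name__ == "__main__":
    act = AgentAction("soc-agent-7", "disable_account", "svc-backup",
                      "20 failed logins, then a success from a new ASN")
    if not validate(act):                   # denied: this action needs a human
        validate(act, human_approved=True)  # allowed after an analyst signs off
```

The audit record carries the evidence the agent acted on, which is what lets a human later judge whether the system had “all of the relevant information” when it made its call.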
Trust and verification were common themes in AI discussions throughout the Gartner conference this week.
“Trust has to be the fabric on which these agents are built,” Rajjoub said. “The more prevalent and capable the agents become, the more critical their security becomes for all of us.”
But as AI agents become more capable, their value in the SOC could increase significantly.
“Unfortunately, AI isn't magic. I don't think it ever will be,” Shoard said. “But it is going to improve things for us in the SOC. You should consider it with great care, but consider it experimentally and use it.”