Jul 21, 2025
CISO's Top 2025 Concern: Securing AI Agents
Jason Rebholz

In just the past four weeks, I’ve anecdotally noticed an uptick in discussion about AI agents. This week topped it off with OpenAI releasing ChatGPT agent, your very own personal assistant that can take actions on your behalf. Pretty awesome.
A quick search of Google Trends for “Agentic AI” showed that interest started building in April 2025. So while my thesis that the topic has been spiking only in the last four weeks turned out to be my own reality (always good to know), I can at least confirm that interest is genuinely growing.

Google Trends Analysis - Agentic AI
Given that, it’s no surprise that CISOs are starting to worry about how to secure AI agents. It’s a theme that has been popping up more frequently in my conversations with security and engineering leaders. And after needing that gut check from Google Trends on agentic AI, it was serendipitous that Team8 dropped its CISO Village Survey 2025 report, which “draws lessons from over 100 of the most influential Chief Information Security Officers today.”
What was the #1 identified pain point for 2025? Securing AI agents. Deductive reader that you are, I’m sure you already figured that out. I award you five points.

2025 Top CISO Pain Points - Team8 Report
AI adoption is in a pickle race. The report found that 67% of the surveyed CISOs said their enterprises will deploy agents in 2025, with another 23% planning to follow suit in 2026. Only 9% of CISOs said their organization has no plans to introduce agents…whether that’s good or bad remains to be seen.
What does this have to do with pickles? Everything.
When I worked at McDonald’s as a kid, we used to have pickle races (productivity be damned). What was a pickle race, you ask? Each employee would take a dill pickle chip and throw it against a stainless fridge. The first challenge was getting the pickle to stick. That was up to the human. The second challenge was where things got exciting. That little pickle chip used gravity to race to the bottom, trying its best to stick to the clean surface. The fastest pickle won. And before you ask, yes, we cleaned up afterwards. We weren’t savages.
What does this have to do with AI agents? Companies testing AI agents today are the humans, and the AI agents are the pickles. We’re literally throwing crap pickles against the wall to see what sticks, hoping to find the things that give a return on investment.

With any new technology, CISOs should be concerned about securing it. One of my favorite quotes in the report states that, unlike chatbots, “the danger is not what the agent says - it is what it does.” Agents are useful when you give them autonomy. And that’s where the risks and impacts come in.
Think of agents in two flavors: third-party agents and custom-built in-house agents.
67% of the CISOs said their enterprises are building agents in-house due to the deep customization needed to achieve productivity gains. Of course, this isn’t an either-or choice: 59% of CISOs also reported adopting pre-packaged SaaS agents.

2025 Top CISO Pain Points - Team8 Report
Build fast, build intentionally. We’re still in the opening steps of the AI pickle race, but companies can take steps today to encourage rapid prototyping while balancing future security concerns. Here are just a few high-level things you can do (a quick sketch in code follows the list):
Operate agents in a sandboxed environment
Keep agent tasks narrow (also shown to improve agent performance)
Limit permissions to data and tooling
Build in human-in-the-loop (HITL) guardrails
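To make that concrete, here’s a minimal Python sketch of what those guardrails can look like inside a custom in-house agent. Everything here is illustrative: the tool names, the SENSITIVE_TOOLS set, and the require_approval() prompt are my own stand-ins, not any particular agent framework’s API.

```python
# Illustrative guardrails for a hypothetical in-house agent. All names
# (ALLOWED_TOOLS, SENSITIVE_TOOLS, dispatch, require_approval) are
# stand-ins, not a real framework's API.

ALLOWED_TOOLS = {"search_docs", "send_email"}   # narrow task scope
SENSITIVE_TOOLS = {"send_email"}                # side effects need a human

def search_docs(query: str) -> str:
    return f"results for {query!r}"             # stub: read-only, low risk

def send_email(to: str, body: str) -> str:
    return f"email sent to {to}"                # stub: real-world side effect

TOOL_REGISTRY = {"search_docs": search_docs, "send_email": send_email}

def require_approval(tool: str, kwargs: dict) -> bool:
    """Human-in-the-loop gate: pause until a person signs off."""
    answer = input(f"Agent wants to run {tool}({kwargs}). Approve? [y/N] ")
    return answer.strip().lower() == "y"

def dispatch(tool: str, **kwargs):
    if tool not in ALLOWED_TOOLS:               # least privilege on tooling
        raise PermissionError(f"{tool} is outside this agent's task scope")
    if tool in SENSITIVE_TOOLS and not require_approval(tool, kwargs):
        return "action denied by human reviewer"  # HITL guardrail
    return TOOL_REGISTRY[tool](**kwargs)

if __name__ == "__main__":
    print(dispatch("search_docs", query="Q3 revenue"))                 # runs freely
    print(dispatch("send_email", to="cfo@example.com", body="draft"))  # gated
```

The sandboxing piece lives outside the code: run the whole thing in an isolated environment (a container or a scoped cloud account) so that even an approved action can’t reach beyond its task.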
I mentioned third-party agents. Let’s talk about securing employee AI usage and the super scary “shadow AI” (gasp). It was the CISOs’ second-highest noted concern. In my discussions with CISOs, and confirmed in the Team8 report, there are two camps.
Block everything except approved AI tools, or create an AI policy and then pretend you enforce it.
Not the greatest options…
Team8 found that 48% of the CISOs’ enterprises took a restrictive approach, only allowing approved tools. Meanwhile, 30% allowed AI usage with little to no monitoring.

2025 Top CISO Pain Points - Team8 Report
My take on the current risk and impact of third-party AI tools? Fairly minimal. Sure, an employee may upload some of your data to the vendor, and that vendor might even use that data to train their AI models. Oh no! What are the practical impacts outside of contractual obligations? Not much.
But that doesn’t mean you don’t do anything.
The best balance I see today is to update your third-party risk management (TPRM) processes to evaluate third-party AI tools based on the data they will have access to and how employees will use them. The fewer companies collecting your data, the better, but if the tool genuinely helps the business, you can accept the security risk in most cases. That’s what the TPRM review is for.
A few best practices to narrow in on (a quick sketch follows the list):
Keep a detailed inventory of AI apps used in your environment
Limit access (both identity and data) to only what is necessary for the AI app
For critical applications and when the risk demands it, monitor for sensitive or regulated data going to the third-party AI app
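For illustration, here’s a hedged sketch of what that monitoring might look like at its simplest: a hypothetical egress check that consults an inventory of approved AI apps and flags regulated data heading to them. The domains and regex patterns below are toy examples; in practice this lives in your proxy, CASB, or DLP tooling, not in twenty lines of Python.

```python
import re

# Hypothetical egress check. The domains and patterns below are toy
# examples for illustration, not a real product's ruleset.

APPROVED_AI_APPS = {"chat.openai.com", "claude.ai"}   # the AI app inventory

SENSITIVE_PATTERNS = {                                # toy regulated-data rules
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_egress(domain: str, payload: str) -> list[str]:
    """Return findings for one outbound request to an AI app."""
    findings = []
    if domain not in APPROVED_AI_APPS:
        findings.append(f"unapproved AI app: {domain}")   # shadow AI signal
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(payload):
            findings.append(f"possible {name} sent to {domain}")
    return findings

print(check_egress("claude.ai", "my SSN is 123-45-6789"))
# -> ['possible ssn sent to claude.ai']
print(check_egress("randomtool.ai", "summarize this memo"))
# -> ['unapproved AI app: randomtool.ai']
```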
Yeah, not every AI risk is a non-starter. In fact, most today aren’t.
I have a special place saved for those non-starter security risks with AI agents…
