Sep 28, 2025

Your Third-Party AI Risks Are Your Risks

Jason Rebholz

Fun fact. Even if you don’t build your own AI apps or agentic systems, you still have the joy of managing AI-specific risks like prompt injection. In fact, I believe that most organizations are at risk of this today or will be in the near future.

Even if your company isn’t building AI tools, one of your existing third-party vendors almost certainly is, because in today’s market, if a B2B SaaS isn’t building AI into its platform, are they even a real company?

Here are just a few examples that are likely already present in your environment, whether officially or unofficially…I’m betting you have heard of some of these:

  • AI Chatbots: ChatGPT, Claude

  • AI Assistants: Microsoft Copilot, Google Gemini

  • Coding Copilots: Cursor, Windsurf, GitHub Copilot

  • Agents: Salesforce Agentforce

With SaaS apps, your data is now at a greater risk thanks to AI. Call a plumber because here are just a few examples that have emerged over the last few weeks, all focused on data leakage.

ShadowLeak: Radware found that it could use ChatGPT’s Deep Research agent to steal sensitive information from your connected Google email. It starts with an attacker sending you an email containing a prompt injection. Even if the user doesn’t read the email, when they ask ChatGPT Deep Research a question that involves pulling their email (assuming they have connected it), the prompt injection goes to work and, like a good little puppy, retrieves sensitive information from the user’s inbox and ships it off to the attacker.

Source

ForcedLeak: Noma discovered a chain of vulnerabilities in Salesforce Agentforce that could enable an attacker to steal data from the Salesforce CRM. This attack starts with an attacker submitting a specific type of form and including a prompt injection in one of the fields. Agentforce reads the submitted content and treats the prompt injection as an instruction to execute, which results in the data being sent to the attacker.

Source

Notion Leak: CodeIntegrity spared us from a “leak” name but graced us with a finding on Notion’s new agent. They found that you can provide Notion a PDF with, you guessed it, hidden prompt injection, and the Notion agent executes it like a good lil agent does. Those instructions included how to take client data stored in Notion and send it to an attacker-controlled server.

Source

So how do you manage this new risk? Ah, yes, so glad you asked. For organizations using SaaS tools (aka everyone), start here:

Step 1: Identify AI Usage: I’ll spare you the adage of “you can’t protect what you can’t see.” It’s overplayed…but it’s also really important. You need to monitor both the known knowns, i.e., the third-party SaaS solutions that have already undergone your third-party risk management review, and the unknown unknowns, i.e., your Shadow AI. You know your users are signing up for AI tools and connecting them to your company data. What you don’t know is what tools they are.

Step 2: Establish an AI Review Process: If you have a third-party risk management process, great, you’re already halfway there. But you need to update it to include questions around AI. Like, what types of models is the third-party provider using? How are they securing their AI implementations? What risk/security assessments have they done against their AI implementation? How are they monitoring for malicious activity?

Also, be sure to classify these SaaS apps based on what data/tools you feed it or that it has access to. Assume that something bad can come from the SaaS tool and think about what it has access to. You’ll get a pretty good sense of the risk from there.

Step 3: Set AI-Usage Policies: If you don’t have an acceptable use policy, now is the time to create it. Establish the rules of the road for what AI use is allowed and how it should be used. At a minimum, this should require employees to submit tools through the AI review process. You should also ensure that employees have a clear understanding of the type of data that can be used with these tools. It’s a business decision that comes down to what the AI tool will have access to (e.g., data, tools, etc.) and the level of risk you’re willing to tolerate.

Step 4: Monitor Usage: This is the blind spot for most organizations. After you complete the initial security review of a SaaS tool, you feel all warm and fuzzy that you’ve done the right things to validate the security. But guess what, security isn’t static. And like any person trying to find a new partner, that third party probably embellished their security controls. For any high-risk third-party tools, make sure to keep tabs on new AI features they’re adding and how they could impact your security.

Step 5: Educate and Enable: When you find wins for tools that enable teams to work more efficiently, share that with the company. This is an opportunity to share what’s working and ensure it’s also secure along the way.

The bottom line here is that third-party risk management was never good. These SaaS apps were always a risk point for data. But now, their attack surface has just expanded, and you are even more blind to it than you were before. With some intentional steps, you can gather the right level of awareness and manage the risk appropriately.