Jul 21, 2025
MCP Security: A Fundamental Architecture Issue
Jason Rebholz

What happens when AI is too helpful? Like exposing all of your private data when you don’t want it to? Super helpful, thanks, AI. Invariant Labs tested this notion against GitHub MCP exploitation and wrote about it here. They claim to have found a “fundamental architectural issue” with MCPs.
A MCP what what, you say? Let’s take a step back and see if they’re right.
MCP = Model Context Protocol. In short, MCP allows you to write a prompt and easily interact with external tools and data sources. So, if you want to search all of your Google Drive folders to find and rank your beloved cat memes, MCP makes that a breeze.
You run an MCP client, which connects to MCP servers that act as intermediaries between you and the service you want to connect to, such as Google Drive. Norah Sakal has a great image to visualize it.

The problem is that MCPs are too nice. They trust everything, and, shocker, security isn’t built-in. That’s where Invariant Lab’s research comes in on GitHub’s MCP server. For those unfamiliar, GitHub is a code repository that serves as the backbone of code development. So if something gains control over your code repo, your product is going to have a bad day.
What happened? Let’s start with a picture and then break down the steps.

To start, an “attacker” posts an issue on a public Github project (see image below). It includes very polite instructions to provide additional information about the author’s other repositories and post them to a publicly viewable file. This is called indirect prompt injection.

Indirect prompt injection via GitHub issue
The developer asks his coding copilot to address any new issues that were submitted. The copilot analyzes that instruction and, using the GitHub MCP server, gets to work on solving the issue of the developer not being well known. It solves this by collecting the user’s full name, bio, and information on private code repos. It then publishes that to a publicly readable file. Because why ask for permission when you’re just being helpful?

How bad is this? It sounds a lot worse than it is. It’s the same security💩. We’re just looking at it from a different view now. The real security issue is how GitHub was configured and how the user configured the MCP server. Both of which, surprise, surprise, were not done securely.
When the MCP server was configured, they likely just gave it the user’s access token, which would have had access to all of the user’s repos. This is bad. Bad. Bad. Bad. You’re just granting the agent access to everything.
Enter least privilege. Ah yes, our old friend that we all love but always forget about because they live just far enough away to make it inconvenient to visit them.
For any AI agent setup, only give it the minimum access it needs to do its job. In this case, don’t give the MCP server access to ALL the code repos. Separate them out per project or at a minimum, the public one. While it creates more agents, it allows for more granular control over the agents.
So the “fundamental architectural issue” here is poor security configurations…which is the same security issue with literally everything. With AI agents, it’s just one more avenue to find a misconfiguration.
It’s the equivalent of driving a convertible with the top down in a rainstorm and then rolling the driver’s side window down. You already made a poor decision by driving a convertible in the rain. You just made it even worse by also rolling your window down.