What is MCP?
In the rapidly evolving landscape of AI technology, a key challenge has been how to connect large language models with all the external data sources and tools that can add context to user prompts. Enter the Model Context Protocol (MCP) - an open standard that simplifies how AI applications connect with external data sources, tools, and systems. Developed by Anthropic, MCP acts as a universal connector, allowing AI models to interact with various tools and data without needing custom integrations.
Before MCP, agents had to make API calls or implement tools and functions themselves in order to gather the context data sent to the LLMs. With MCP, an agent instead queries an MCP server and retrieves a list of the connected tools and functions. This modular approach allows tools to be added to agents dynamically as needed.
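Concretely, tool discovery in MCP is a JSON-RPC exchange: the agent sends a `tools/list` request and the server responds with tool descriptors. The sketch below shows the shape of that exchange; the method name comes from the MCP specification, while the `delete_file` tool shown is purely illustrative, not from a real server.

```python
import json

# JSON-RPC request an agent (MCP client) sends to discover tools.
# "tools/list" is the MCP method name; the id is arbitrary.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# The general shape of the server's response: each tool advertises a
# name, a description, and a JSON Schema for its inputs. The
# "delete_file" tool here is illustrative only.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "delete_file",
                "description": "Delete a file at the given path",
                "inputSchema": {
                    "type": "object",
                    "properties": {"path": {"type": "string"}},
                    "required": ["path"],
                },
            }
        ]
    },
}

# The agent can now expose these tools to the LLM by name.
tool_names = [t["name"] for t in response["result"]["tools"]]
print(json.dumps(request), tool_names)
```

Because discovery happens at runtime, adding a tool to the server makes it available to every connected agent without redeploying the agents.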
For example, let’s say you want your Agent to identify and delete files that have a malicious reputation. Instead of implementing tools in the Agent itself for listing files, checking their reputation, and deleting them, these functionalities can live in one or more MCP servers and be accessed by the Agent via the standard protocol. An MCP server can serve multiple agents, and an agent can connect to multiple MCP servers, increasing the reusability and reach of the tools without writing more code.
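The workflow above can be sketched as a short agent loop. The tool names (`list_files`, `check_reputation`, `delete_file`) are hypothetical, and `call_tool` stands in for a real MCP `tools/call` round trip; a stub is provided so the sketch runs on its own.

```python
def quarantine_malicious(call_tool, directory):
    """Delete files whose reputation comes back 'malicious'.

    call_tool(name, args) stands in for an MCP tools/call round trip
    to whichever server hosts each (hypothetical) tool.
    """
    deleted = []
    for path in call_tool("list_files", {"dir": directory}):
        verdict = call_tool("check_reputation", {"path": path})
        if verdict == "malicious":
            call_tool("delete_file", {"path": path})
            deleted.append(path)
    return deleted

# Stub "server" for demonstration only.
def fake_call_tool(name, args):
    files = {"/tmp/a.exe": "malicious", "/tmp/b.txt": "clean"}
    if name == "list_files":
        return list(files)
    if name == "check_reputation":
        return files[args["path"]]
    if name == "delete_file":
        return None

print(quarantine_malicious(fake_call_tool, "/tmp"))  # ['/tmp/a.exe']
```

Because the three tools sit behind the protocol, they could live on one server or three, and any other agent could reuse them unchanged.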
With this explosion of access to tools and integrations, there is cause for concern about the information and resources being exposed. Let’s dive into how security can fit into this architecture and limit risks.
Securing MCP Servers With Pangea
Pangea has developed an open-source proxy that wraps existing MCP servers with AI Guard guardrails, with no code changes to your agent or MCP servers. Think of the proxy as a wrapper around the MCP server that inspects all agent and tool traffic.
In order to use Pangea’s open-source proxy, you must first configure the Pangea AI Guard Service on the Pangea Console. You can configure and enable different security guardrails without changing a line of code by using the Pangea Console to set your security policies and adjust them as needed.
After you’ve configured your security policies through the Pangea Console, the next step is to proxy the tool calls so that Pangea’s AI Guard service can inspect and secure tool inputs and outputs.
To set up the Proxy service:
1. Replace the target MCP command and args with the Pangea MCP Proxy command and args.
2. Move the target MCP command and args into the Pangea MCP Proxy command’s args, after the "--" argument.
Target MCP Command
```json
{
  "mcpServers": {
    "qrcode": {
      "command": "npx",
      "args": [
        "-y",
        "@jwalsh/mcp-server-qrcode"
      ]
    }
  }
}
```
Pangea MCP Proxy Command
```json
{
  "mcpServers": {
    "qrcode": {
      "command": "npx",
      "args": [
        "-y",
        "@pangeacyber/mcp-proxy",
        "--",
        "npx",
        "-y",
        "@jwalsh/mcp-server-qrcode"
      ],
      "env": {
        "PANGEA_VAULT_TOKEN": "pts_00000000000000000000000000000000",
        "PANGEA_VAULT_ITEM_ID": "pvi_00000000000000000000000000000000"
      }
    }
  }
}
```
With this configuration, the Pangea MCP Proxy proxies all tool calls to and from the target MCP server, providing guardrails that:
1. Pass only AI Guard-sanitized inputs to the underlying tool.
2. Return only AI Guard-sanitized outputs from the underlying tool to the caller.
These guardrails automatically protect against malicious tokens and PII leakage, and can moderate content based on your Pangea Guardrail settings.
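The two guardrail steps above boil down to a simple pattern: guard the input, invoke the tool, guard the output. The sketch below illustrates that flow in miniature; `guard` is a stand-in for an AI Guard call (here it just redacts email-like strings), not Pangea's actual API.

```python
import re

def guard(text):
    # Stand-in for an AI Guard call: redact anything email-shaped.
    # The real service detects many more categories (PII, prompt
    # injection, malicious tokens) per the configured policy.
    return re.sub(r"[\w.]+@[\w.]+", "<REDACTED>", text)

def proxied_tool_call(tool, arg):
    sanitized_in = guard(arg)        # 1. pass only sanitized input
    raw_out = tool(sanitized_in)     # 2. run the underlying tool
    return guard(raw_out)            # 3. return only sanitized output

# A trivial "tool" that echoes what it received.
echo = lambda s: f"tool saw: {s}"
print(proxied_tool_call(echo, "contact alice@example.com"))
# tool saw: contact <REDACTED>
```

Because the wrapping happens in the proxy, neither the agent nor the tool needs to know the guardrails exist.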
Pangea's MCP proxy offers a frictionless approach to implementing AI security guardrails, allowing organizations to protect their AI systems from prompt injection, malicious content, and confidential information leakage without requiring any changes in the MCP servers themselves.
Pangea’s MCP server
With Pangea’s new open-source MCP server, organizations can directly call Pangea AI security guardrail services that check for malicious prompts or prompt injection attempts, redact sensitive information, implement secure audit logging, check IP addresses and domains for malicious reputations, and perform WHOIS / geolocation lookups. Pangea services are configured as tools in the Pangea MCP server, so the integration and tool-definition work is already done for you.
All traffic through the MCP server—user prompts from the LLM and outputs from tools and data sources—is checked by the configured guardrails. This broadens coverage and improves the likelihood of catching multi-shot prompt attacks and other forms of prompt attacks.
In the video linked above, Pangea’s AI Guard service on the MCP server identifies a prompt injection attack in the Claude Desktop chat app. This demo attack attempted to reveal the system prompt, but more malicious attacks can be used to expose company-confidential information, PII, or even details about internal systems and tools. Watch the demo to see it in action and learn how to set it up for yourself.
Pangea Vault
Many MCP servers rely on API keys to authenticate with external services. The prevalent method for setting these up has been to hard-code the keys under the "env" object of the AI application’s MCP configuration. Hard-coding credentials like this makes them harder to rotate later on. Pangea’s Vault service offers a better approach: API keys are securely stored and retrieved at runtime, with support for automatic rotation policies. Pangea’s MCP server and MCP proxy both make use of this.
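The pattern is to keep only the Vault access token and item ID in the environment (as in the proxy configuration above) and fetch the third-party API key at runtime. The sketch below shows that shape with an injected `fetch_secret` callable standing in for a real Pangea Vault lookup; the actual Pangea SDK call and response format may differ, and the stub fetcher exists only so the example runs.

```python
import os

def get_api_key(fetch_secret):
    """Fetch a secret at runtime instead of hard-coding it.

    fetch_secret(vault_token, item_id) stands in for a Pangea Vault
    "get item" call; only the Vault token lives in the environment.
    """
    vault_token = os.environ["PANGEA_VAULT_TOKEN"]
    item_id = os.environ["PANGEA_VAULT_ITEM_ID"]
    return fetch_secret(vault_token, item_id)

# Stub for demonstration; a real fetcher would call the Vault API.
def fake_fetch(token, item_id):
    return {"pvi_00000000000000000000000000000000": "sk-demo"}.get(item_id)

os.environ["PANGEA_VAULT_TOKEN"] = "pts_00000000000000000000000000000000"
os.environ["PANGEA_VAULT_ITEM_ID"] = "pvi_00000000000000000000000000000000"
print(get_api_key(fake_fetch))  # sk-demo
```

Rotating the downstream API key then only requires updating the Vault item, not every MCP configuration file that references it.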
Conclusion
MCP is enabling quick access to tools and resources, thus making AI even more powerful. Security is paramount in making sure that bad actors aren’t getting access to that same information.
By wrapping your existing MCP servers with Pangea’s MCP Proxy, and with security policies that are easily configured in the Pangea Console, you can dynamically adjust your security posture without any additional code changes.
Pangea's MCP server exposes Pangea services as tools and is available on GitHub, with a sample featuring the Claude Desktop integration.
With Pangea, you gain a comprehensive, robust security layer that protects your AI applications from emerging threats and vulnerabilities. Implementing Pangea’s MCP proxy service ensures that your AI interactions remain secure, compliant, and trustworthy, ultimately fostering a safer AI ecosystem.