AI Product Security Newsletter - Vol. 2, Issue 1

Romana Vasyleha
Jan 22, 2025

Report: Gartner® 2024 Emerging Tech - Secure Generative Communication for LLMs and AI Agents

Building and deploying AI-driven applications comes with risks for engineering and security leaders.

“Deploying security services across multiple points in the AI data pipeline is essential to mitigate risks such as data leakage and poisoning.”

We think the Gartner® report, "Emerging Tech: Secure Generative Communication for LLMs and AI Agents," highlights solutions for leaders to:

  • Secure generative workflows at scale from attacks like prompt injection.

  • Identify and redact sensitive information with precision.

  • Meet compliance requirements while driving innovation.

Discover strategies to secure your systems, build trust, and deliver AI-powered solutions at scale.

Access


Workshop: Learn to Build Secure RAG Apps

Is your team ready to tackle the OWASP Top 10 AI vulnerabilities?

Securing AI apps is challenging—especially with risks like prompt injection and vulnerable RAG pipelines. Join us on 1/23 at 9 AM PT for a hands-on workshop led by Pranav Shikarpur, Developer Advocate at Pangea, where you’ll:

  • Build a secure RAG chat app that safely handles PII.

  • Protect RAG pipelines against OWASP Top 10 threats.

  • Implement robust identity and access controls for your apps.

  • Learn AI security best practices.
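One of the workshop's building blocks, safely handling PII in a RAG chat app, can be sketched with a minimal redaction pass over retrieved text before it reaches the LLM. This is an illustrative sketch only, not the workshop's actual implementation: the patterns and the `redact_context` function are invented for the example, and production systems would use a dedicated detection service rather than a few regexes.

```python
import re

# Hypothetical PII patterns for illustration -- real detection needs
# far broader coverage (names, addresses, credentials, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_context(text: str) -> str:
    """Replace matched PII spans with typed placeholders
    before the chunk is added to the LLM prompt."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}_REDACTED>", text)
    return text

chunk = "Contact Jane at jane.doe@example.com or 555-867-5309."
print(redact_context(chunk))
# -> Contact Jane at <EMAIL_REDACTED> or <PHONE_REDACTED>.
```

The key design point is where the redaction runs: on retrieved context at prompt-assembly time, so sensitive values never enter the model's input in the first place.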

Apply


Article: How Does AI Security Relate to Network Security?

AI security is mirroring the evolution of network security—from basic firewalls to advanced layered defenses. Just as networks rely on trust frameworks and continuous monitoring, AI systems need visibility, anomaly detection, and robust controls to address threats like model poisoning and unauthorized access.

The key lesson from network security? A layered approach. For AI, this means enforcing authentication, running ongoing anomaly detection, and applying zero trust principles. Proven strategies like these can safeguard AI systems against evolving risks.
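The layered approach can be sketched as a chain of independent checks that every request must pass before reaching the model. All names here are hypothetical and the individual checks are deliberately crude stand-ins; the point is the structure: authentication, input screening, and anomaly detection compose as separate layers, and a failure in any one of them blocks the request.

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    user: str
    token: str
    prompt: str
    flags: list = field(default_factory=list)

def check_auth(req: Request) -> bool:
    # Stand-in for real token verification against an identity provider.
    return req.token == "valid-token"

def check_input(req: Request) -> bool:
    # Stand-in for prompt-injection screening.
    return "ignore previous instructions" not in req.prompt.lower()

def check_anomaly(req: Request) -> bool:
    # Stand-in for anomaly detection (rate, size, behavioral signals).
    return len(req.prompt) < 4000

LAYERS = [check_auth, check_input, check_anomaly]

def admit(req: Request) -> bool:
    """Zero-trust style: every layer must independently approve."""
    return all(layer(req) for layer in LAYERS)

print(admit(Request("alice", "valid-token", "Summarize this doc")))  # True
```

Because the layers are independent, a new control (say, output filtering) is one more entry in the list rather than a rewrite of the pipeline.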

Explore


AI agents are reshaping the way we interact with technology, blending autonomy with advanced reasoning to tackle complex tasks. As highlighted in Google’s recent white paper, agents are defined as applications that “achieve a goal by observing the world and acting upon it using tools at their disposal.” Unlike standalone AI models, agents can plan, execute, and adapt autonomously—whether booking travel or streamlining enterprise workflows.

Read


Article: Enforcing Authorization for RAG Data

RAG workflows are the cornerstone of most AI applications, but they also present unique security risks. Traditional authorization frameworks like Role-Based Access Control (RBAC) often fall short in addressing the dynamic and complex needs of RAG workflows.

Our latest article (written by Jim Hoagland) outlines strategies for implementing scalable and context-aware access control specifically for AI apps.

This article provides concrete examples on:

  • How and where to enforce authorization checks in RAG pipelines.

  • Methods for enforcing access control to app-stored data.

  • Matching source-side authorization when retrieving app-stored data.
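The enforcement point the article focuses on can be sketched as a filter between retrieval and prompt assembly: each chunk carries the ACL of its source document, and any chunk the prompting user cannot read is dropped before it reaches the LLM. The ACL store and the retriever below are invented for illustration, not the article's actual code.

```python
# Hypothetical source-side ACLs: doc_id -> set of users the
# original system allows to read that document.
ACL = {
    "doc-hr-1": {"alice"},
    "doc-eng-7": {"alice", "bob"},
}

def retrieve(query: str) -> list[tuple[str, str]]:
    # Stand-in for a vector search; returns (doc_id, chunk) pairs.
    return [("doc-hr-1", "salary bands..."), ("doc-eng-7", "api design...")]

def authorized_context(user: str, query: str) -> list[str]:
    """Keep only chunks whose source-side ACL includes the user."""
    return [chunk for doc_id, chunk in retrieve(query)
            if user in ACL.get(doc_id, set())]

print(authorized_context("bob", "design docs"))  # ['api design...']
```

Filtering after retrieval (rather than only at ingestion) matters because a user's permissions can change after the index is built.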

Learn


Product Update: Pangea Multipass

The single biggest risk of Large Language Models is leaking data to unauthorized users. RAG-based apps may source documents from a multitude of file, object, and document sources, each with varying access and permission models. With Pangea Multipass, you can check whether the user issuing the prompt has access to any file, ticket, or page directly with its original source before you include it in an LLM response. Multipass is a Python-based open source library that already supports Google Workspace, Confluence, Slack, GitHub, and more. As always, pull requests are welcome to add your own data sources!

Details


Stay tuned for more updates in the next edition of the AI Product Security Newsletter!

Get updates in your inbox and subscribe to our newsletter


We were recognized by Gartner®!

Pangea is a Sample Vendor for Composable Security APIs in the 2024 App Sec Hype Cycle report