Do you remember when and where you first used an AI-powered product that caused you to question reality? I do. In the spring of 2022, I tested a synthetic text-to-voice app that was so uncannily realistic that I briefly questioned if it was truly AI behind the curtain. I used the app to record a quick product demo and asked everyone on my team to guess the individual behind the voice. Not a single person guessed AI.
My first thought: incredible technology! My second thought, having spent years in cybersecurity, was that this would substantially change the social engineering game. And it has. The advances in AI technology in the two years since have been incredible to witness, and today it seems every company is building AI-powered software such as chatbots and productivity tools, spanning every industry from banking to entertainment.
Security Perils in the AI Software Gold Rush
Since public LLM/LMM models are trained predominantly, if not entirely, on publicly reachable Internet data, companies embedding LLM/LMM technology into their own software must go a step further to deliver personalized interactions for their customers and employees: they fuse the AI models with internal, proprietary datasets, either through model training or through response-specific enrichment.
While a public LLM doesn't know your Social Security Number, a model connected carelessly to your HR system for an employee chatbot can cheerfully recite it to you, and to your nosy coworker. This, understandably, keeps CISOs up at night.
There are many architectural paths to fuse enterprise data with AI models, including agents, Retrieval-Augmented Generation (RAG), fine-tuning, small language models (SLMs), and domain-specific language models (DLMs). All paths, however, lead to serious security concerns that leave organizations exposed to significant risks and potential attacks if unaddressed. Let's briefly consider the RAG architecture:
The RAG architecture prefetches contextual data (e.g., HR data), transforming unstructured data into vectors stored in a local vector database, while retrieved structured data is stored directly in a separate local database.
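To make the flow concrete, here is a minimal sketch of the retrieve-then-generate loop, assuming an in-memory vector store and a toy embed() function standing in for a real embedding model; all names are illustrative, not any particular product's API:

```python
# Minimal RAG sketch: ingest, retrieve by similarity, splice into the prompt.
import math

def embed(text: str) -> list[float]:
    # Toy embedding: normalized character-frequency vector. A real system
    # would call an embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Ingest: unstructured context is embedded and stored as vectors.
documents = [
    "Employee handbook: PTO accrues at 1.5 days per month.",
    "Payroll runs on the 15th and last day of each month.",
]
vector_db = [(embed(doc), doc) for doc in documents]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Retrieval: rank stored chunks by similarity to the query embedding.
    q = embed(query)
    ranked = sorted(vector_db, key=lambda item: cosine(q, item[0]), reverse=True)
    return [doc for _, doc in ranked[:k]]

# Generation step: retrieved chunks are spliced into the prompt sent to the model.
context = "\n".join(retrieve("How much PTO do I earn?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How much PTO do I earn?"
print(prompt)
```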
Important security questions to ask of this RAG architecture include (a sketch following the list shows one way to start enforcing them):
Does the retrieved data contain sensitive PII or corporate IP?
Does the retrieved data come from poisoned sources and contain malicious URLs, domains, or files?
Do you have an audit trail of the retrieved data and the context-enriched responses to the prompt?
Does the user who made the prompt have authorization to view the context being fetched from the internal databases to enrich the response?
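As one illustration of enforcing the last two questions, the sketch below attaches access-control metadata to every stored chunk, filters retrieval by the requesting user's groups, and emits an audit record of what was released. The user/group model and the check_access() helper are assumptions for illustration, not a specific product's API:

```python
# Authorization-aware retrieval with an audit trail (similarity ranking
# omitted for brevity; see the previous sketch for that step).
import json
import time

# Each chunk carries the groups allowed to see it.
chunks = [
    {"text": "Jane Doe, SSN on file, salary band L5", "allowed_groups": {"hr_admins"}},
    {"text": "Office closed on July 4th", "allowed_groups": {"all_employees"}},
]

def check_access(user_groups: set[str], chunk: dict) -> bool:
    # Release a chunk only if the user shares a group with its ACL.
    return bool(user_groups & chunk["allowed_groups"])

def authorized_retrieve(user: str, user_groups: set[str], query: str) -> list[str]:
    released = [c["text"] for c in chunks if check_access(user_groups, c)]
    # Audit trail: record who asked what and how much context was released.
    print(json.dumps({
        "ts": time.time(), "user": user, "query": query,
        "released_chunks": len(released),
    }))
    return released

# The nosy coworker gets the holiday notice, not the HR record.
print(authorized_retrieve("coworker", {"all_employees"}, "What is Jane's SSN?"))
```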
These questions become even more challenging in agentic AI architectures, where dozens, hundreds, or even thousands of individual AI agents, built to retrieve context and take action, can execute commands or call APIs, forming a potential horde of troublesome ghosts in the machine if things go haywire.
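One common guardrail, sketched below with assumed tool names, is to deny agent actions by default and route every proposed tool call through an explicit allowlist before it executes:

```python
# Deny-by-default gate for agent tool calls: unknown or dangerous actions
# never execute. Tool names and the policy shape are illustrative assumptions.
ALLOWED_TOOLS = {"search_kb", "create_ticket"}  # no "run_shell", no "wire_funds"

def execute_tool(name: str, args: dict) -> str:
    registry = {
        "search_kb": lambda a: f"searched for {a['query']}",
        "create_ticket": lambda a: f"ticket opened: {a['title']}",
    }
    return registry[name](args)

def gated_call(agent_id: str, name: str, args: dict) -> str:
    if name not in ALLOWED_TOOLS:
        # The agent proposed an action outside its mandate: block and surface it.
        raise PermissionError(f"{agent_id} blocked from calling {name}")
    return execute_tool(name, args)

print(gated_call("hr_agent_17", "search_kb", {"query": "PTO policy"}))
# gated_call("hr_agent_17", "delete_database", {}) would raise PermissionError.
```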
Relying solely on developers, who are rarely trained in (and often uninterested in) cybersecurity, to self-engineer the software guardrails required to mitigate risk and defend against threats in the fast-moving world of AI software products is a recipe for disaster.
Instead, we need to empower developers with a paved road of pre-built, trusted security capabilities that accelerate AI software development. Just as developers rely on composable, API-driven frameworks in other domains to speed delivery, such as payments (e.g., Stripe) and communications (e.g., Twilio), it's time to make security code composable to keep pace with the AI software security challenges of today and tomorrow.
Composable Security: A Paved Path for Developers
Composability in software architecture has proven to accelerate software development in adjacent domains and to reduce overhead, thanks to SaaS configuration control planes that let teams modify in-app functionality without code changes.
But what about application security features?
Historically, developers have built core security features like authorization and audit logging from scratch, or implemented a patchwork of open source and commercial point solutions to achieve specific functions like key management. But security is finally catching up to the composability movement, and Gartner recently recognized Composable Security APIs as a promising new technology in its July 2024 Hype Cycle for Application Security:
“Composable security APIs are security capabilities such as privacy vaults, authentication services, encryption services and digital signing services that are provided by vendors and typically accessed via APIs. Developers can use composable security APIs to embed security capabilities in their applications. The security services can also be called from within popular developer platforms.”
Composability gives developers a “paved road” to accelerate software delivery with pre-built, trusted components and guardrails that they can easily embed in their applications with just a few lines of code. The Netflix engineering and security teams pioneered the “paved road” concept for application security and saw significant software acceleration gains as a result of this transformation:
“For a typical paved road application with no unusual security complications, a team could go from “git init” to a production-ready, fully authenticated, internet accessible application in a little less than 10 minutes. The automation of the infrastructure setup, combined with reducing risk enough to streamline security review saves developers days, if not weeks, on each application.”
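What might those "few lines of code" look like in an AI application? The sketch below is a hypothetical stand-in: SecurityClient, redact(), and audit.log() represent vendor-provided composable services reached over an API, not any real SDK, and exist only to show how little application code the pattern requires:

```python
# Hypothetical composable security client: pre-built redaction and audit
# logging embedded into a prompt handler with two lines of application code.
class SecurityClient:
    def redact(self, text: str) -> str:
        # A real service would detect and strip PII server-side; this stub
        # only handles one hard-coded example value.
        return text.replace("123-45-6789", "<SSN>")

    class audit:
        @staticmethod
        def log(event: str, actor: str) -> None:
            # A real service would write a tamper-evident audit record.
            print(f"AUDIT actor={actor} event={event}")

security = SecurityClient()

def handle_prompt(user: str, prompt: str) -> str:
    clean = security.redact(prompt)              # one line: sanitize input
    security.audit.log("prompt_received", user)  # one line: audit the event
    return clean

print(handle_prompt("alice", "My SSN is 123-45-6789, update my W-2."))
```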
With the pressure on to harness AI in software products, developers need a modern, composable security framework that can keep pace with fast development cycles.
The Case for Composable Security to Manage AI Risk
Speed of software delivery is not the only benefit of composable security that makes it well suited to address the risks of AI-powered applications. Consider also:
Fine-Grained Data Access Controls: Composable security APIs provide granular control over data access in AI architectures through services like authentication and authorization based on roles (RBAC), relationships (ReBAC), and attributes (ABAC). This ensures that only authorized users, services, or AI agents have access to sensitive data, minimizing the risk of unauthorized data access or leakage.
Adaptive Risk Capabilities: Composable security architectures enable real-time adjustments to security policies, both automated and manual, as the AI threat landscape evolves. With a central control plane for configuration, security teams can change AI access controls to corporate datasets on the fly, or configure risk-based authorization decisions automatically, such as limiting access when users connect via VPN to submit prompts or when prompt input contains malicious domains or files.
Repeatable Security Design Patterns: Composable security APIs are modular and can be assembled into a repeatable security tech stack that solves challenges common across AI architectures, such as the need to sanitize data inputs and outputs of sensitive PII, corporate IP, and malware (see the sketch after this list). When new AI architectures inevitably arise, these composable design patterns can be cloned and quickly deployed to mitigate risk.
Scalability: Composable security APIs are highly scalable, with SaaS- and private-cloud-delivered functionality that can elastically scale up to handle bursty or high-volume workloads, such as hundreds of thousands of prompts being submitted to an application.
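As a sketch of the repeatable pattern from the third point above, the wrapper below sanitizes both the prompt entering the model and the response leaving it, and can be cloned around any model call; the regexes and model stub are illustrative assumptions:

```python
# Repeatable design pattern: sanitize in, sanitize out, around any model call.
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
URL_RE = re.compile(r"https?://\S+")

def sanitize(text: str) -> str:
    text = SSN_RE.sub("<REDACTED_SSN>", text)  # strip sensitive PII
    text = URL_RE.sub("<REMOVED_URL>", text)   # strip possibly malicious links
    return text

def guarded_llm_call(prompt: str, model) -> str:
    # The same wrapper can be reused unchanged across AI architectures.
    return sanitize(model(sanitize(prompt)))

fake_model = lambda p: f"echo: {p} (see http://evil.example/payload)"
print(guarded_llm_call("My SSN is 123-45-6789", fake_model))
```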
Composable Security APIs provide a flexible, scalable, and efficient way to embed critical security features such as audit logging, access control, and malware detection into AI-powered applications. These APIs help mitigate threats like data leakage, excessive agency, malware distribution, and weak data access controls by offering pre-built, easily integrated solutions that are crucial for securing modern AI architectures.
To find out how Pangea can help secure your AI-powered software and mitigate risks in AI architectures like agents, RAG, fine-tuning, and micro LLMs, please contact us at: https://pangea.cloud/contact-us/