Leading companies are rapidly developing AI applications by combining enterprise data with large language models (LLMs). However, this explosion in AI development and adoption is also introducing critical security risks like prompt injection, excessive agency, and data leakage.
To shed light on these challenges, Pangea’s CTO and co-founder, Sourabh Satish, spoke with Julie Tsai, former CISO at Roblox, and they discussed the latest trends and best practices for building secure AI applications.
The State of Enterprise AI Adoption
AI is taking the business world by storm. A recent Bain survey found that nearly 50% of companies are using AI for product differentiation, even higher for some internal productivity use cases.
"Being in Silicon Valley, it's almost impossible to escape having a conversation where General AI doesn't come up," noted Sourabh. "Everybody is living and breathing generative AI these days."
Julie added that while there's still room for the technology to mature, "overall AI adoption will be trending up, but I think that we'll see some pullbacks here and there."
Some of the most popular enterprise AI use cases they’re seeing include:
Chatbots for customer service
Personalized recommendations
Code generation and testing
Research and data analysis
Architecting Secure AI-Powered Software
Under the hood, enterprises are using a few key software architectures to build AI applications:
Retrieval-augmented generation (RAG) - most common, easiest to implement
Agentic frameworks - growing in popularity, enables apps to take actions
Fine-tuning existing LLMs
Building custom LLMs
According to an internal Pangea survey of customers, nearly 2/3 of companies already have AI-powered apps in production. Most companies are taking a "build" approach for IP-sensitive applications that involve proprietary data and algorithms. However, companies often opt to "buy" off-the-shelf AI solutions for more general use cases like chatbots or sentiment analysis.
For security teams looking to collaborate with engineering on these projects, Tsai recommends proactively reaching out. "AI is interesting and exciting, and it's a chance to get the security team as well as the engineers on the same page, pushing towards something new."
Top AI Application Security Risks
With the rapid rise of AI comes new security threats. Pangea's survey found that data leakage, hallucination, and prompt injection were the top concerns.
Tsai noted that while the technology is new, many issues map to classic security principles. She adds that attacks like data exfiltration, and injection attacks are examples of how classic security vulnerabilities with apps are also seen in AI apps but with a new shape and permutation.
The OWASP Top 10 for LLMs provides a helpful framework for understanding AI risks. It includes issues such as prompt injection, excessive agency, and sensitive data disclosure. Another useful resource that looks at adversarial threats to AI systems more broadly is the MITRE ATLAS matrix.
AI Security Best Practices and Controls
So how can organizations mitigate these emerging AI risks? To effectively manage security risks in AI applications, it's important to categorize the threats and implement appropriate controls. Pangea’s VP of Marketing, John Gamble noted thekey areas of concern map into five AI product security challenges:
Satish noted that these AI security controls have a role to play both at the network-level and at the application-level. "I think they both have their own place and they both can be applied to dramatically improve the security posture," he added.
For security teams looking to level up their AI knowledge, Tsai recommends dedicating focused time and resources. "When you're dealing with a new landscape that's being deployed very quickly, you have to give yourself room to address that," she explained.
Get the Full Scoop
The rapid rise of enterprise AI brings both immense opportunities and new security challenges. As organizations race to harness the power of large language models and agentic architectures, it's crucial to prioritize security at every stage of the AI lifecycle.
By proactively collaborating with engineering teams, dedicating resources to understanding emerging AI risks, and implementing layered security controls, CISOs and security leaders can help their organizations safely navigate this new frontier.
To learn more about AI security, watch the full webinar recording: