Securing your AI app

Secure your application at inference time

User interactions with your generative AI application can pose significant risks and liabilities to your organization. Skilled attackers may manipulate the conversation context in harmful ways, while well-meaning users might unknowingly input sensitive data. LLM responses can contain sensitive or harmful content due to training on insufficiently sanitized data, model overfitting, data poisoning, or poorly controlled access to information in retrieval-augmented generation (RAG) systems. The safeguards within the model and the system’s instructions may not always withstand the continuous arms race between prompt manipulation attacks and their preventive measures.

Due to the non-deterministic nature of LLMs, eliminating unexpected behavior is challenging. However, you can control the data that enters and leaves your system at inference time, thereby enhancing security and compliance. The tutorial Prompt Guardrails in Python with LangChain and Pangea describes how you can use the composable nature of Pangea's security services to monitor and control the exposure of sensitive or harmful content.
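
As a rough illustration of this pattern, the sketch below wraps an LLM call with checks on data entering and leaving the system. The helper functions (`redact_sensitive`, `detect_prompt_injection`, `scan_response`) are hypothetical placeholders, not the Pangea API; in a real application they would be backed by the corresponding security services described in the tutorial.

```python
# Minimal sketch of inference-time guardrails around an LLM call.
# The helpers below are hypothetical placeholders for illustration only.

def redact_sensitive(text: str) -> str:
    """Placeholder: mask PII or secrets before the text reaches the model."""
    return text

def detect_prompt_injection(text: str) -> bool:
    """Placeholder: flag inputs that try to manipulate the conversation context."""
    return False

def scan_response(text: str) -> str:
    """Placeholder: check model output for sensitive or harmful content before returning it."""
    return text

def guarded_completion(user_input: str, llm_call) -> str:
    """Check data on the way into the model and on the way out."""
    if detect_prompt_injection(user_input):
        return "Your request could not be processed."
    safe_input = redact_sensitive(user_input)
    raw_output = llm_call(safe_input)
    return scan_response(raw_output)

# Usage (assuming `my_llm_client.complete` accepts a prompt string):
# reply = guarded_completion("Summarize this contract ...", my_llm_client.complete)
```

Because each check is an independent function, you can compose, reorder, or swap them without changing the surrounding application code.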
