
AI Guard Quickstart

This guide walks you through the steps to quickly set up and start using AI Guard, Pangea's service for protecting your AI applications. You'll learn how to sign up for a free Pangea account, enable the AI Guard service, and integrate it into your application.

Get a free Pangea account and enable the AI Guard service

  1. Sign up for a free Pangea account.

  2. After creating your account and first project, skip the wizards. This will take you to the Pangea User Console, where you can enable the service.

  3. Click AI Guard in the left-hand sidebar.

  4. In the service enablement dialogs, click Next, then Done.

    Optionally, in the final dialog, you can make an example request to the service using the Text to guard input and the Send button.

  5. Click Finish to go to the service page in your Pangea User Console.

  6. On the AI Guard Overview page, copy the following Configuration Details by clicking their values:

    • Domain - Identifies the cloud provider and is shared across all services in a Pangea project.
    • Default Token - API access token for the service endpoints.

    (Screenshot: AI Guard Overview page in the Pangea User Console)

    Make these configuration values available to your code. For example, assign them to environment variables:

    .env file
    PANGEA_DOMAIN="aws.us.pangea.cloud"
    PANGEA_AI_GUARD_TOKEN="pts_qbzbij...ajvp3j"

    or

    Shell
    export PANGEA_DOMAIN="aws.us.pangea.cloud"
    export PANGEA_AI_GUARD_TOKEN="pts_qbzbij...ajvp3j"
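
    If you keep these values in a .env file, load them into the process environment at startup. A minimal sketch, assuming the python-dotenv package (pip3 install python-dotenv) is installed:

    Load .env (optional)
    import os

    # Requires python-dotenv; skip this step if you export the variables in your shell.
    from dotenv import load_dotenv

    load_dotenv()  # reads .env from the current working directory

    assert os.getenv("PANGEA_DOMAIN"), "PANGEA_DOMAIN is not set"
    assert os.getenv("PANGEA_AI_GUARD_TOKEN"), "PANGEA_AI_GUARD_TOKEN is not set"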

Protect your AI app using AI Guard

In the following examples, AI Guard removes sensitive information that your application may receive from various sources, such as user input, a RAG system, or an LLM response. You will submit simple or structured text to AI Guard APIs and receive the sanitized content in its original format along with a report describing:

  • Whether a detection was made
  • Type of detection
  • Detected value
  • Action taken

Learn more about AI Guard response parameters in the service's API documentation.

Install SDK

Pip
pip3 install pangea-sdk

or

Poetry
poetry add pangea-sdk

Instantiate the AI Guard service client

import os
from pydantic import SecretStr

from pangea import PangeaConfig
from pangea.services import AIGuard

# Read the configuration values captured on the AI Guard Overview page.
# os.environ raises a clear KeyError if a variable is missing.
pangea_domain = os.environ["PANGEA_DOMAIN"]
pangea_ai_guard_token = SecretStr(os.environ["PANGEA_AI_GUARD_TOKEN"])

# Configure the client with your project's domain and authenticate
# requests with the service token.
config = PangeaConfig(domain=pangea_domain)
ai_guard = AIGuard(token=pangea_ai_guard_token.get_secret_value(), config=config)

Use AI Guard in your app

The AI Guard instance provides a guard_text method, which accepts either a simple text input (for example, a user question) or an array of messages in JSON format that follows common schemas used by major providers.

Additionally, you can specify a recipe to apply. Recipes can be configured to match your specific use case in your Pangea User Console.

Guard text

This example demonstrates how AI Guard processes a simple text input.

Define a variable containing the example text. For example:

Example text
question = """
Hi, I am Bond, James Bond. I am looking for a job. Please write me a short resume.

I am skilled in international espionage, covert operations, and seduction.

Include a contact header:
Email: j.bond@mi6.co.uk
Phone: +44 20 0700 7007
Address: Universal Exports, 85 Albert Embankment, London, United Kingdom
"""

Use the AI Guard client to sanitize the text content.

Sanitize user question
guarded_response = ai_guard.guard_text(question, recipe="pangea_prompt_guard")
print(f"Guarded text: {guarded_response.result.prompt_text}")
Sanitized response
Guarded text:
Hi, I am Bond, James Bond. I am looking for a job. Please write me a short resume.

I am skilled in international espionage, covert operations, and seduction.

Include a contact header:
Email: <EMAIL_ADDRESS>
Phone: <PHONE_NUMBER>
Address: Universal Exports, 85 Albert Embankment, <LOCATION>, <LOCATION>
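
You can also confirm what was redacted by reading the PII detector's report on the same response. A minimal sketch, assuming the PII report mirrors the entity structure shown for malicious entities later in this guide:

Inspect PII detections
# Sketch: list the PII entities the recipe acted on in the question above.
# Field names are assumed to match the detectors output shown below.
pii = guarded_response.result.detectors.pii_entity
if pii and pii.detected:
    for entity in pii.data.entities:
        print(f"{entity.type}: {entity.action}")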

Guard list of messages

In this example, AI Guard processes a list of messages representing an agent's state and returns the result in the original JSON format.

Define a variable containing a list of messages that conforms to the OpenAI API format. For example:

Example list of messages
messages = [
    {
        "role": "user",
        "content": "\nHi, I am Bond, James Bond. I monitor IPs found in MI6 network traffic.\nPlease search for the most recent ones, you copy?\n"
    },
    {
        "role": "assistant",
        "tool_calls": [
            {
                "type": "function",
                "id": "call_bfYktiLulhoPN8pBBhoPFAje",
                "function": {
                    "name": "search_tool",
                    "arguments": "{\"data\": \"recent IPs in MI6 network traffic\"}"
                }
            }
        ],
        "content": ""
    },
    {
        "role": "tool",
        "name": "search_tool",
        "tool_call_id": "call_bfYktiLulhoPN8pBBhoPFAje",
        "content": "\n 47.84.32.175\n 37.44.238.68\n 47.84.73.221\n 47.236.252.254\n 34.201.186.27\n 52.89.173.88\n "
    },
    {
        "role": "assistant",
        "content": "Here are the most recent IPs found in MI6 network traffic:\n\n1. 47.84.32.175\n2. 37.44.238.68\n3. 47.84.73.221\n4. 47.236.252.254\n5. 34.201.186.27\n6. 52.89.173.88\n\nIf you need further assistance, just let me know!"
    }
]

Use the AI Guard client to sanitize the content of the messages.

Sanitize an array of messages
import json

guarded_response = ai_guard.guard_text(messages=messages, recipe="pangea_llm_response_guard")
guarded_json = json.dumps(guarded_response.result.prompt_messages, indent=4)
print(f"Guarded messages: {guarded_json}")
Sanitized messages
Guarded messages: [
    {
        "role": "user",
        "content": "\nHi, I am Bond, James Bond. I monitor IPs found in MI6 network traffic.\nPlease search for the most recent ones, you copy?\n"
    },
    {
        "role": "assistant",
        "tool_calls": [
            {
                "type": "function",
                "id": "call_bfYktiLulhoPN8pBBhoPFAje",
                "function": {
                    "name": "search_tool",
                    "arguments": "{\"data\": \"recent IPs in MI6 network traffic\"}"
                }
            }
        ],
        "content": ""
    },
    {
        "role": "tool",
        "name": "search_tool",
        "tool_call_id": "call_bfYktiLulhoPN8pBBhoPFAje",
        "content": "\n 47[.]84[.]32[.]175\n 37[.]44[.]238[.]68\n 47[.]84[.]73[.]221\n 47[.]236[.]252[.]254\n 34.201.186.27\n 52.89.173.88\n "
    },
    {
        "role": "assistant",
        "content": "Here are the most recent IPs found in MI6 network traffic:\n\n1. 47[.]84[.]32[.]175\n2. 37[.]44[.]238[.]68\n3. 47[.]84[.]73[.]221\n4. 47[.]236[.]252[.]254\n5. 34.201.186.27\n6. 52.89.173.88\n\nIf you need further assistance, just let me know!"
    }
]

See which detectors have been applied

In the last example, detected malicious IPs were defanged or merely reported, according to the detectors defined in the pangea_llm_response_guard recipe configuration.

You can review which detectors were applied, their execution order, and the actions taken under the "detectors" key in the service response.

Detectors applied to messages
print(f"Detectors: {(guarded_response.result.detectors.model_dump_json(indent=4))}")
Detectors: {
    "prompt_injection": null,
    "pii_entity": {
        "detected": false,
        "data": null
    },
    "malicious_entity": {
        "detected": true,
        "data": {
            "entities": [
                {
                    "type": "IP_ADDRESS",
                    "value": "47.84.32.175",
                    "action": "defanged",
                    "start_pos": null,
                    "raw": null
                },
                {
                    "type": "IP_ADDRESS",
                    "value": "37.44.238.68",
                    "action": "defanged",
                    "start_pos": null,
                    "raw": null
                },
                {
                    "type": "IP_ADDRESS",
                    "value": "47.84.73.221",
                    "action": "defanged",
                    "start_pos": null,
                    "raw": null
                },
                {
                    "type": "IP_ADDRESS",
                    "value": "47.236.252.254",
                    "action": "defanged",
                    "start_pos": null,
                    "raw": null
                },
                {
                    "type": "IP_ADDRESS",
                    "value": "34.201.186.27",
                    "action": "reported",
                    "start_pos": null,
                    "raw": null
                },
                {
                    "type": "IP_ADDRESS",
                    "value": "52.89.173.88",
                    "action": "reported",
                    "start_pos": null,
                    "raw": null
                }
            ]
        }
    },
    "secrets_detection": {
        "detected": false,
        "data": null
    },
    "profanity_and_toxicity": null,
    "custom_entity": null,
    "language_detection": null,
    "code_detection": null
}

By inspecting the detectors report, you can verify that your recipe works as expected and whether any detectors blocked execution.
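
For example, you can walk the report programmatically to list every detector that fired. A minimal sketch against the response above, using the pydantic model shown in the output:

List detectors that fired
# Sketch: print each detector that made a detection in the last response.
report = guarded_response.result.detectors.model_dump()
for name, detector in report.items():
    if detector and detector.get("detected"):
        print(f"{name}: detection made")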

Check if the request was blocked

Some detectors can report a blocking action and trigger an early exit. This is reflected in the results for individual detectors, as well as in the overall status of the request at the top level of the response result.

Check if a detection was blocked
print(f"Blocked: {guarded_response.result.blocked}")
Blocked: False

A blocked value of True indicates that your application should not proceed with the request.
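
In practice, gate your pipeline on this flag before using the guarded content. A minimal sketch, where forward_to_llm stands in for your own downstream call:

Gate on the blocked flag
guarded_response = ai_guard.guard_text(question, recipe="pangea_prompt_guard")

if guarded_response.result.blocked:
    # A detector took a blocking action; stop instead of forwarding the content.
    raise RuntimeError("Request blocked by AI Guard")

# Safe to continue with the sanitized text.
forward_to_llm(guarded_response.result.prompt_text)  # hypothetical downstream function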

Terminology

Detector

An AI Guard detector is a component that analyzes text for specific risks.

Each detector identifies a particular type of risk, such as personally identifiable information (PII), malicious entities, prompt injection, or toxic content.

Detectors can be enabled, disabled, or configured according to your security policies. They act as the building blocks of a recipe, working together to ensure comprehensive text security.

note

In the special case of Custom Entity, a detector can be defined from scratch to report, remove, or encrypt identified text patterns.

Recipe

In AI Guard, a recipe is a configuration set that defines which detectors should be applied to a given input and how they should behave. Recipes allow users to customize security rules by specifying which risks to detect, how to handle them, and whether to modify, block, or report the content.

Learn more on the Recipes documentation page.
