Skip to main content

Integrating Portkey with AI Guard

This tutorial guides you through installing and configuring the Portkey plugin to integrate with Pangea’s AI Guard service. You’ll learn how to set up Portkey to call AI Guard in both SaaS and Edge deployment scenarios, ensuring secure and optimized access to Pangea’s API.

AI Guard is a security service designed to protect Large Language Models (LLMs) from attacks such as prompt injections, data exfiltration, and malicious content submissions. Integrating with Portkey provides enhanced routing, traffic control, and security for AI Guard operations.

note

If you are looking to integrate Pangea within your Edge deployed environment, first start here. Once your Pange Edge services are deployed, you may use the respective domain to configure your Portkey integration.

Step 1: Add Pangea credentials to Portkey

To begin, you need to configure the Pangea plugin within Portkey by adding your credentials.

  1. Navigate to the Plugins page under Settings in Portkey.
  2. Locate Pangea integration and click the edit button.
  3. Enter your Pangea token and domain information.

Step 2: Add a Guardrail check for AI Guard

Guardrails help ensure API requests to AI Guard are inspected and secured according to your configurations.

  1. Navigate to the Guardrails page in Portkey.
  2. Search for Pangea AI Guard and click Add.
  3. Write a meaningful name in the top input box and (optionally) configure the recipe and debug settings as needed.
  4. Save your Guardrail to generate a Guardrail ID.

Guardrail Details

Check NameDescriptionParametersSupported Hooks
AI GuardAnalyze and redact text to avoid model manipulation and malicious contentrecipe (string), debug (boolean)beforeRequestHook, afterRequestHook

Step 3: Add Guardrail ID to Portkey config

With the Guardrail created and integrated, you need to include the Guardrail ID in your Portkey configuration.

  • When you save a Guardrail, you’ll get an associated Guardrail ID - add this ID to the before_request_hooks or after_request_hooks params in your Portkey Config
  • Save this Config and pass it along with any Portkey request you’re making!

Here’s an example configuration:

{
"retry": {
"attempts": 3
},
"cache": {
"mode": "simple"
},
"virtual_key": <your-virtual-key-id>,
"before_request_hooks": [
{
"id": <your-pangea-plugin-id>
}
]
}

Example usage in Python:

# Copyright 2021 Pangea Cyber Corporation
# Author: Pangea Cyber Corporation

from portkey_ai import Portkey
from portkey_ai.api_resources.types.chat_complete_type import ChatCompletions
from os import getenv
import openai
import json

# Construct a client with a virtual key

api_key = getenv("PORTKEY_API_KEY", "")
assert api_key
virtual_key = getenv("AWS_BEDROCK_VIRTUAL_KEY", "")
assert virtual_key
config_id = getenv("PORTKEY_CONFIG_ID")
assert config_id

portkey = Portkey(
api_key=api_key,
virtual_key=virtual_key,
config=config_id,
)

while True:
prompt = input("\nEnter your question: ")
try:
completion = portkey.chat.completions.create(
messages=[{"role": "user", "content": prompt}],
model="anthropic.claude-3-5-sonnet-20240620-v1:0",
)

if isinstance(completion, ChatCompletions) and completion.choices:
for choice in completion.choices:
if choice.message:
print(f"Response: {choice.message.content}")

hook_results = completion.get("hook_results", {})
if not isinstance(hook_results, dict):
continue
before_request_hooks = hook_results.get("before_request_hooks", [])
print("\tBefore request hooks:")
for rh in before_request_hooks:
print(f"\t\tGuardrail ID: {rh.get('id', None)}. Verdict: {rh.get('verdict', None)}")
else:
print(f"Response: {completion}")

except openai.APIStatusError as e:
print(f"\n\nRequest failed: {e.status_code}")
resp = e.response.json()
error_message = resp.get("error", {}).get("message", "")
print(f"error: {error_message}")

before_request_hooks = resp.get("hook_results", {}).get("before_request_hooks", [])
print("Before request hooks:")
for rh in before_request_hooks:
print(f"\tGuardrail ID: {rh.get('id', None)}. Verdict: {rh.get('verdict', None)}")
print("\tChecks:")
for check in rh.get("checks", []):
result = json.dumps(check.get("data", {}).get("detectors", {}))
print(f"\t\t- {result}")

Your requests are now guarded by Pangea AI Guard, with detailed logs available in both Portkey and your Pangea dashboard.

Step 4: Test the integration

After completing the setup, verify that API requests are routed and inspected correctly.

Test your deployment

Since Portkey acts as a gateway and you have enabled added the Guardrail ID to your Portkey config, you can simply call your Portkey app and it will redirect traffic through Pangea where needed. Here is an example:

curl https://api.portkey.ai/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "x-portkey-api-key: $PORTKEY_API_KEY" \
-H "x-portkey-provider: openai" \
-H "x-portkey-config: $CONFIG_ID" \
-d '{
"model": "gpt-3.5-turbo",
"messages": [{
"role": "user",
"content": "Hello!"
}]
}'

Troubleshooting

Common issues and resolutions include:

  • Authentication Errors: Ensure your API keys are configured correctly.
  • Network Connectivity Issues: Verify that Portkey can access the AI Guard service.
  • Timeouts: Adjust timeout settings to handle slower network conditions or heavy traffic.

Was this article helpful?

Contact us