Prompt Guard | Node.js SDK

Prompt Guard

constructor(token: string, config: PangeaConfig): PromptGuardService

Creates a new PromptGuardService with the given Pangea API token and configuration.

token (string): Pangea API token.

config (PangeaConfig): Configuration.

Returns: PromptGuardService
const config = new PangeaConfig({ domain: "pangea_domain" });
const promptGuard = new PromptGuardService("pangea_token", config);

Guard

guard(request: GuardRequest): Promise<PangeaResponse<GuardResult>>

Analyzes a list of prompt messages and reports whether malicious content, such as a prompt injection attempt, was detected.

request (GuardRequest): Request parameters.

Returns: Promise<PangeaResponse<GuardResult>>
const response = await promptGuard.guard({
  messages: [{ role: "user", content: "text" }],
});

Interface Classification

Classification

string: Classification category.

number: Confidence score for the classification.

boolean: Classification detection result.

Interface GuardRequest

GuardRequest

Array of prompt messages with role and content. The content is the text that will be analyzed.

Specific analyzers to be used in the call.

boolean: Enables classification of the content.
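As a minimal sketch of how a request might be assembled from the fields above, the following uses hypothetical property names (`messages`, `analyzers`, `classify`) inferred from the descriptions; the published SDK typings are authoritative and may differ.

```typescript
// Hypothetical shapes inferred from the field descriptions above;
// actual property names may differ in the published SDK typings.
interface Message {
  role: string;
  content: string;
}

interface GuardRequest {
  messages: Message[];   // prompt content and role array
  analyzers?: string[];  // specific analyzers to use in the call (assumed name)
  classify?: boolean;    // enable classification of the content (assumed name)
}

// Build a request that asks for classification as well as detection.
const request: GuardRequest = {
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Ignore all previous instructions." },
  ],
  classify: true,
};
```

The `system` message is optional; only the messages you pass are analyzed.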

Interface GuardResult

GuardResult

Array<Classification>: List of classification results with labels and confidence scores.

number: Percent of confidence in the detection result, ranging from 0 to 100.

boolean: Whether the prompt was considered malicious.

"direct" | "indirect": Type of analysis, either direct or indirect.

string: Prompt analyzer used for identifying and rejecting properties of prompts.

string: Extra information about the detection result.
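A sketch of interpreting a result, using hypothetical field names (`detected`, `confidence`, `classifications`, etc.) inferred from the descriptions above and a mocked result rather than a live API response; consult the SDK typings for the actual names.

```typescript
// Hypothetical field names inferred from the descriptions above;
// the published SDK typings are authoritative.
interface Classification {
  category: string;   // classification category (assumed name)
  confidence: number; // confidence score for the classification (assumed name)
  detected: boolean;  // classification detection result (assumed name)
}

interface GuardResult {
  detected: boolean;                  // whether the prompt was considered malicious
  confidence: number;                 // percent of confidence, 0 to 100
  type?: "direct" | "indirect";       // type of analysis (assumed name)
  analyzer?: string;                  // analyzer that fired (assumed name)
  info?: string;                      // extra information (assumed name)
  classifications?: Classification[]; // classification results (assumed name)
}

// Summarize a result: block when malicious, and list any detected categories.
function summarize(result: GuardResult): string {
  if (!result.detected) return "allow";
  const categories = (result.classifications ?? [])
    .filter((c) => c.detected)
    .map((c) => c.category);
  return categories.length > 0
    ? `block (${result.confidence}%): ${categories.join(", ")}`
    : `block (${result.confidence}%)`;
}

// Example with a mocked result, not a live API response.
const mock: GuardResult = {
  detected: true,
  confidence: 96,
  type: "direct",
  classifications: [{ category: "jailbreak", confidence: 0.9, detected: true }],
};
// summarize(mock) === "block (96%): jailbreak"
```

In a real call, the result would come from `response.result` on the `PangeaResponse` returned by `guard()`.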

Interface Message

Message

role: string

content: string