AI Guard | Node.js SDK

AI Guard

constructor(token: string, config: PangeaConfig): AIGuardService

Creates a new AIGuardService with the given Pangea API token and configuration.

token (string): Pangea API token.

config (PangeaConfig): Configuration.

Returns AIGuardService.
const config = new PangeaConfig({ domain: "pangea_domain" });
const aiGuard = new AIGuardService("pangea_token", config);

Text Guard for scanning LLM inputs and outputs

guardText(request: TextGuardRequest): Promise<PangeaResponse<TextGuardResult<void>>>

Analyze and redact text to avoid manipulation of the model, addition of malicious content, and other undesirable data transfers.

request (TextGuardRequest): Request parameters.

Returns Promise<PangeaResponse<TextGuardResult<void>>>.
const response = await aiGuard.guardText({
  text: "foobar",
});
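When the recipe redacts or rewrites the input, the response carries an updated prompt. Below is a minimal sketch of consuming such a result, assuming it exposes a prompt_text field as described under TextGuardResult later in this document; no network call is made here.

```typescript
// Minimal sketch of interpreting a Text Guard result. The `prompt_text`
// field name follows the TextGuardResult shape in this document; treat
// it as an assumption and check your SDK typings.
interface GuardOutcome {
  text: string; // text that is safe to forward to the LLM
  redacted: boolean; // true when the service rewrote the input
}

// Prefer the updated prompt_text when the service returns one,
// otherwise fall back to the original input unchanged.
function resolveGuardedText(
  original: string,
  result: { prompt_text?: string },
): GuardOutcome {
  const text = result.prompt_text ?? original;
  return { text, redacted: text !== original };
}

// Locally constructed result object, standing in for response.result:
const outcome = resolveGuardedText("my SSN is 078-05-1120", {
  prompt_text: "my SSN is <US_SSN>",
});
```

Keeping this fallback logic in one helper makes it easy to decide, per call site, whether a rewritten prompt should be forwarded or rejected.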

Interface Detector<T>

data: null | T

detected: boolean

Interface LogFields

(string) Origin or source application of the event.

(string) Stores supplementary details related to the event.

(string) Model used to perform the event.

(string) IP address of user, app, or agent.

(string) Tools used to perform the event.
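These log fields travel with a guardText call via the request's log_fields member. Here is a hedged sketch of building them; the property names (source, extra_info, model, source_ip, tools) are assumptions inferred from the descriptions above and should be verified against the LogFields typing in your SDK version.

```typescript
// Sketch of building log_fields for a Text Guard request. The property
// names below are assumptions inferred from the field descriptions in
// this document; verify them against the LogFields typing in your SDK.
interface LogFields {
  source?: string; // origin or source application of the event (assumed name)
  extra_info?: string; // supplementary details related to the event (assumed name)
  model?: string; // model used to perform the event (assumed name)
  source_ip?: string; // IP address of the user, app, or agent (assumed name)
  tools?: string; // tools used to perform the event (assumed name)
}

// Attach activity-log context to a plain request object.
function withLogFields(text: string, logFields: LogFields) {
  return { text, log_fields: logFields };
}

const request = withLogFields("foobar", {
  source: "chat-frontend",
  model: "gpt-4o",
});
```

Populating these fields per request keeps the Pangea activity log attributable to a specific application, model, and caller.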

Interface MaliciousEntity

type: string

value: string

action: string

start_pos: number

Interface PIIEntity

type: string

value: string

action: string

start_pos: number

Interface TextGuardRequest

debug (boolean): Setting this value to true will provide a detailed analysis of the text data.

llm_info (string): Short string hint for the LLM Provider information.

log_fields (LogFields): Additional fields to include in the activity log.

recipe (string, e.g. "pangea_prompt_guard"): Recipe key of a configuration of data types and settings defined in the Pangea User Console. It specifies the rules that are to be applied to the text, such as defang malicious URLs.
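Since the recipe governs which rules run, it can help to centralize recipe selection in a small builder. A sketch under stated assumptions: the field names mirror the request description above, "pangea_prompt_guard" is the recipe key mentioned in this document, and any other keys come from your own Pangea User Console configuration.

```typescript
// Sketch of assembling a TextGuardRequest-shaped object. The field
// names (text, recipe, debug) follow this document; "pangea_prompt_guard"
// is the recipe key mentioned above, other keys are account-defined.
function buildGuardRequest(
  text: string,
  recipe: string = "pangea_prompt_guard",
  debug: boolean = false,
) {
  return { text, recipe, debug };
}

// A debug-enabled request using the default recipe:
const req = buildGuardRequest("user input here", undefined, true);
```

The resulting object can then be passed to aiGuard.guardText, so every call site shares one place where recipes and debug settings are chosen.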

Interface TextGuardResult<T>

Result of the recipe analyzing an input prompt.

prompt_messages (T): Updated structured prompt, if applicable.

prompt_text (string): Updated prompt text, if applicable.