Employee AI Usage

As generative AI tools like ChatGPT, Claude, and Gemini become more accessible, employees are increasingly using them in ways that may introduce security, privacy, and compliance risks. These tools are often accessed via personal accounts, unmanaged devices, or unsanctioned browser sessions, creating shadow AI activity that is invisible to traditional IT and security controls.

AIDR helps security teams surface and monitor this activity by providing targeted visibility into AI usage patterns and content risks at the user level.

Challenges

  • Employees access public LLMs through unmanaged or unmonitored browser sessions.
  • IT and security teams lack visibility into which AI tools are being used, who is using them, what data is being shared, and for what purpose.
  • Sensitive content (e.g., customer data, financial documents, source code) may be submitted to AI tools without oversight.
  • Shared data may be exposed in LLM-generated outputs to unauthorized personnel.

Example scenarios

  • An employee uploads a customer spreadsheet to ChatGPT using a managed browser. The Pangea Browser sensor captures the interaction with the AI tool, including the submitted prompt and returned response, and sends it to AIDR for analysis. AIDR detects the presence of sensitive data in the exchange and can redact it before submission (a sketch of this analysis flow follows these scenarios).

  • A retrieval-augmented generation (RAG) system with scoped access to enterprise data submits internal queries to an LLM. AIDR detects a prompt injection attempt intended to bypass guardrails and access restricted information, and blocks the request. If PII is present in the output, AIDR redacts it before delivery.

  • A team adopts a new AI tool without approval. A gateway sensor logs the outbound request to the AI provider, inspects the full request-response exchange for sensitive content or threats, and enforces policy by blocking, redacting, or defanging unsafe content, including malicious or suspicious links in model responses.
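
In code, the analysis step in these scenarios looks roughly like the following Python sketch. The endpoint URL, request fields, and response fields are illustrative assumptions rather than the documented AIDR API; consult the AIDR API reference for the actual contract.

```python
import os

import requests

# Hypothetical AIDR analysis endpoint and payload shape; the real API,
# field names, and auth scheme may differ -- see the AIDR API reference.
AIDR_URL = "https://aidr.example.pangea.cloud/v1/analyze"  # placeholder URL
TOKEN = os.environ["PANGEA_AIDR_TOKEN"]  # assumed service token variable


def analyze_exchange(prompt: str, response: str) -> dict:
    """Submit a captured prompt/response pair for detection and redaction."""
    resp = requests.post(
        AIDR_URL,
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"prompt": prompt, "response": response},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


result = analyze_exchange(
    prompt="Summarize this customer list: Jane Doe, jane@example.com, ...",
    response="Here is a summary of the customers...",
)

# Illustrative result handling: block on a policy verdict, otherwise use
# the redacted prompt so sensitive values never reach the AI tool.
if result.get("blocked"):
    print("Request blocked by policy:", result.get("reason"))
else:
    print("Safe to submit:", result.get("redacted_prompt", ""))
```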

How AIDR helps

AIDR enables security teams to monitor employee AI activity, detect risks, and enforce policies that block or redact unsafe requests. The browser and gateway sensors provide visibility into usage patterns, while the policy engine enables real-time enforcement.

For more information, see AIDR Capabilities.

Deployment options

To monitor and control employee AI usage, deploy AIDR in one or more of the following ways:

  • Install the Pangea Browser sensor on managed Chrome browsers to detect web-based AI activity and enforce content policies on known AI tool domains.
  • Use the AIDR SDKs or APIs to instrument internal applications and monitor employee-driven AI interactions within custom tools or enterprise software.
  • Deploy a Gateway sensor (for example, Kong, F5, LiteLLM) to inspect AI traffic at the network edge, enforce policies on outbound requests and responses, and detect access to unapproved tools.
  • Use the Agentic sensor (MCP Proxy) to capture prompt activity and model responses from autonomous agents or internal AI assistants that interact with employees or internal systems.
  • Enable the CSP sensor for supported cloud platforms (such as AWS Bedrock) to surface employee-driven AI activity within sanctioned cloud environments and to detect usage across functions, services, or accounts where model invocation logging is enabled (see the Bedrock logging sketch after this list).
  • Use OpenTelemetry instrumentation to forward telemetry from applications and services where employees may use generative AI, enabling AIDR to perform centralized analysis and correlation (a minimal instrumentation sketch follows this list).
  • Forward AIDR logs to your SIEM (for example, Splunk or CrowdStrike Falcon Next-Gen SIEM) to enrich investigations, support alerting, and correlate employee activity across your security stack (an illustrative Splunk HEC example appears below).
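
The CSP sensor depends on AWS Bedrock model invocation logging being turned on. The boto3 call below is the standard AWS API for enabling it; the region, log group name, and IAM role ARN are placeholder assumptions for your environment.

```python
import boto3

# Placeholder names: substitute your own region, log group, and IAM role.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Enable model invocation logging to CloudWatch Logs so employee-driven
# Bedrock activity can be surfaced for analysis.
bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/invocations",  # assumed log group
            "roleArn": "arn:aws:iam::123456789012:role/BedrockLogging",  # assumed role
        },
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": False,
        "embeddingDataDeliveryEnabled": False,
    }
)
```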
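
For OpenTelemetry forwarding, a minimal Python sketch using the standard OTLP HTTP exporter is shown below. The collector endpoint, service name, and span attribute names are assumptions; AIDR's expected telemetry shape is described in its documentation.

```python
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Assumed collector endpoint; point this at the collector that forwards to AIDR.
exporter = OTLPSpanExporter(endpoint="http://otel-collector:4318/v1/traces")

provider = TracerProvider(
    resource=Resource.create({"service.name": "internal-ai-app"})
)
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("employee-ai-usage")

# Wrap each generative AI call in a span; the attribute names here are
# illustrative, not a documented AIDR schema.
with tracer.start_as_current_span("llm.request") as span:
    span.set_attribute("ai.provider", "openai")
    span.set_attribute("ai.model", "gpt-4o")
    span.set_attribute("enduser.id", "employee-1234")
    # ... make the model call here and record outcome attributes ...
```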
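
Native SIEM forwarding does not require custom code, but for illustration, the sketch below pushes an AIDR-style event to Splunk via the HTTP Event Collector (HEC). The URL, token variable, index, sourcetype, and event fields are assumptions for your environment.

```python
import os

import requests

# Assumed Splunk HEC endpoint and token; adjust index and sourcetype
# to match your Splunk deployment.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = os.environ["SPLUNK_HEC_TOKEN"]

event = {
    "event": {
        "action": "ai.prompt.analyzed",  # illustrative event fields
        "user": "employee-1234",
        "tool": "chatgpt",
        "verdict": "redacted",
    },
    "sourcetype": "pangea:aidr",  # illustrative sourcetype
    "index": "security",          # assumed index
}

resp = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json=event,
    timeout=10,
)
resp.raise_for_status()
```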

Summary

AIDR provides actionable visibility into employee use of generative AI tools and enforces policies to reduce risk.
