Defending Against Prompt Injection: Insights from 300K attacks in 30 days.

Read the report

Back to Blog

Malware

Bypassing AI Malware Analysis with Prompt Injection
Joey Melo
Joey Melo
Bypassing AI Malware Analysis with Prompt Injection

It's clear that AI is rapidly becoming a core component of nearly every software application, and malware scanners are certainly no exception. But what new attack vectors does this integration open up? I recently experimented with the concept of deve...

Safely Dealing with Files
Bruce McCorkendale
Bruce McCorkendale
Safely Dealing with Files

Introducing Pangea File Scan Have you ever been working on an app that accepts file uploads? What does your app do with those files? Where do those files come from? Where do they go? Who handles those files, and what do they do with them? Is it ...

Security Guardrails for AI Applications.
SOC 2 Type I icon

SOC 2 Type 2

HIPAA compliance icon

HIPAA Compliant

ISO/IEC 27001 compliance icon

ISO/IEC 27001

ISO/IEC 27701 compliance icon

ISO/IEC 27701

Platform

AI Security Platform

Use Cases

Employee AI usage
Homegrown AI Apps

Products

AI Detection & Response
AI Application Guardrails
AI Red Teaming

AI Product Security Workshop

Pangea Labs

AI Security Research
Prompt Injection Taxonomy
Prompt Injection Challenge

Explore

Blog
Startup Program
Technologies

Connect

News & Events

Documentation

Documentation
Getting Started Guide
Admin Guide
Tutorials
Postman Collections
API Reference
SDK Reference

Company

About Us
Careers

Service Status

Trust Center

© 2025 Pangea. All rights reserved.

636 Ramona St, Palo Alto, CA 94301

PrivacyTerms of UseYour Privacy ChoicesContact us
GitHub iconLinkedIn iconFacebook icon

Outsmart our AI. Play now

Play our AI Escape Room Challenge to test your prompt injection skills.

Register Now