
Trusting GPT - Protecting Apps from PII Leakage

Rob Truesdell

Generative AI has undoubtedly left its mark on society and on how we build software, and it will change the trajectory of software going forward. Debates about its accuracy will continue, but accuracy will only improve, and likely faster than we all think, so I believe it's here to stay. While generative AI and ChatGPT in particular keep improving, some of us in the security realm are focused on the trustworthiness and security of using ChatGPT, and CISOs and security-minded developers are exploring the same questions. Users, particularly users of enterprise apps, will want to use ChatGPT, but how do we secure it? What are the risks? What does securing it even mean? We can be proactive here and apply principles we already know from security to develop a framework and a systematic approach to securing the use of ChatGPT.

A hot topic and a natural starting point for working on this problem is the potential for PII leakage to ChatGPT or OpenAI. It's too easy for any application user to submit PII as input to any app, and there's added risk in submitting PII to something like ChatGPT. People make mistakes, from entering a driver's license number to their own IP address or, in the worst case, their Social Security number (SSN). The problem in the context of ChatGPT is that most users are not aware of the usage policy or of how ChatGPT will handle their data. Apply that problem statement to an enterprise that is also handling its customers' data, and you have multiplied the significance of the problem and introduced legal concerns around usage policy violations and GDPR violations. The truth of the matter is that we just "don't know what we don't know" about the life of that data after a user submits it as an input to ChatGPT or OpenAI - we don't yet know where that data, particularly PII, goes or which policies that may violate.

The threat becomes even more apparent with applications that use frameworks such as LlamaIndex, which connect public LLMs to private data stores to perform a sort of pseudo-training for chat apps. If you build an app that feeds conversational data back into an index, you're even more exposed to malicious injection. The effect is compounded when these models are allowed to issue instructions to tools such as GitHub Copilot that can interact directly with the OS.

The problem is twofold: both training-time and runtime data must be monitored.
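For the training-time side, the mitigation is the same pattern described in the rest of this post: sanitize conversational data before it is ever written into an index. Here is a minimal TypeScript sketch of that idea; addToIndex and redactText are hypothetical stand-ins for your indexing framework and for a redaction helper like the one sketched later in this post.

```typescript
// Hypothetical stand-ins for illustration only: your indexing framework
// (e.g. LlamaIndex) and a PII-redaction helper would replace these.
declare function addToIndex(text: string): Promise<void>;
declare function redactText(text: string): Promise<string>;

// Only sanitized text should ever reach a long-lived index, so PII and
// secrets never become part of the "pseudo training" data.
async function storeConversationTurn(userMessage: string): Promise<void> {
  const clean = await redactText(userMessage); // strip PII and secrets first
  await addToIndex(clean);                     // index only the clean text
}
```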

So what can we do? Our recommended approach for addressing the PII concern is to wrap all input/output activity with ChatGPT in Pangea's Redact API. This ensures that every input submitted to ChatGPT is devoid of sensitive PII. What gets redacted is configured via rulesets in the Pangea Redact service, and a minimal redaction call is sketched below. Pangea Redact can filter out the following types of information:

  • PII such as email address, nationality, religious group, location, and name

  • US identification numbers such as US driver's license number, US ITIN, US passport, and SSN

  • Medical identification numbers such as medical license number, UK National Health Service (NHS) number, and Australian Medicare number

  • Several other types of international identification numbers

  • Secrets such as Slack tokens, RSA private keys, SSH private keys, PGP private key blocks, AWS access keys, and several other key types

  • Custom rulesets defined by the developer

To see all the available rule types, review the Redact service configuration and the documentation.
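To make the wrapping concrete, here is a minimal sketch of a redaction helper that calls the Redact REST endpoint directly. The /v1/redact path, the environment variable names, and the result.redacted_text field reflect my reading of Pangea's v1 API; treat them as assumptions and check the Redact documentation for the authoritative request and response shapes.

```typescript
// Minimal sketch: send raw text to Pangea Redact and return the cleaned
// version. PANGEA_DOMAIN and PANGEA_REDACT_TOKEN are assumed environment
// variables; the /v1/redact path and response fields are assumptions based
// on the v1 REST API, so verify them against the Redact docs.
export async function redactText(text: string): Promise<string> {
  const response = await fetch(
    `https://redact.${process.env.PANGEA_DOMAIN}/v1/redact`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.PANGEA_REDACT_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ text }),
    }
  );

  if (!response.ok) {
    throw new Error(`Redact request failed with status ${response.status}`);
  }

  const data = await response.json();
  // With the SSN ruleset enabled, "my SSN is 123-45-6789" comes back with
  // the number replaced by a redaction marker.
  return data.result.redacted_text;
}
```

Because the rulesets live in the service configuration, the same helper covers everything from SSNs to Slack tokens without any code changes.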

To help developers with this, we have a Next.js sample app on GitHub that shows exactly how ChatGPT and Pangea's Redact service can be integrated. The high-level concept is easy to understand:

  1. The user submits an input to ChatGPT via the sample app

  2. The raw input is routed through Pangea's Redact service via API

  3. The Pangea Redact service returns a clean version of the input devoid of the types of sensitive information enabled in the service

  4. The sample app submits the clean version of the input text to ChatGPT
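In the sample app this flow lives in a Next.js API route. The sketch below shows the shape of steps 1 through 4 using the redactText helper from the earlier sketch and a direct call to OpenAI's chat completions endpoint; the handler, import path, and model name are illustrative rather than copied from the sample app.

```typescript
// Sketch of steps 1-4: accept the raw prompt, redact it, and forward only
// the clean text to OpenAI. Illustrative only; see the sample app on GitHub
// for the real implementation.
import type { NextApiRequest, NextApiResponse } from "next";
import { redactText } from "../../lib/redact"; // helper from the earlier sketch (path is illustrative)

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const { prompt } = req.body;                  // step 1: raw user input
  const cleanPrompt = await redactText(prompt); // steps 2-3: Pangea Redact round trip

  // Step 4: only the redacted prompt ever leaves for OpenAI.
  const completion = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: cleanPrompt }],
    }),
  });

  const data = await completion.json();
  res.status(200).json({ answer: data.choices[0].message.content });
}
```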

At this point, you can trust that the inputs are clean and the user is not leaking sensitive data. But there's more!

  1. The response from ChatGPT is returned to the sample app and parsed to discover embedded links and domains

  2. Domains within the links are submitted to Pangea's Domain Intel service, which identifies malicious domains through a reputation database

  3. Any domains classified as malicious are redacted from the response so that the end user can never click on any malicious domains

  4. Finally, the response is returned to the user within the sample app
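A hedged sketch of that response-side pass is below: it pulls URLs out of the model's reply, asks Domain Intel for a verdict on each domain, and strips anything flagged as malicious. The /v1/reputation path, the verdict field, and the token variable are assumptions about the Domain Intel API, so confirm them against the service documentation.

```typescript
// Sketch of the response-side checks: extract domains from ChatGPT's reply,
// query Pangea Domain Intel for each, and redact malicious links. Endpoint
// path and response fields are assumptions; verify against the Intel docs.
async function scrubMaliciousDomains(reply: string): Promise<string> {
  // Rough URL extraction, for illustration only.
  const urls = reply.match(/https?:\/\/[^\s)]+/g) ?? [];
  let scrubbed = reply;

  for (const url of urls) {
    const domain = new URL(url).hostname;

    const response = await fetch(
      `https://domain-intel.${process.env.PANGEA_DOMAIN}/v1/reputation`,
      {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.PANGEA_INTEL_TOKEN}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ domain }),
      }
    );
    const data = await response.json();

    // Strip the link entirely if the reputation verdict is malicious.
    if (data.result?.data?.verdict === "malicious") {
      scrubbed = scrubbed.split(url).join("[link removed: malicious domain]");
    }
  }

  return scrubbed;
}
```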

The sample app also takes advantage of other Pangea services, including Authentication and Secure Audit Log, which are staples of any secure application and should be part of every developer's security hygiene.
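As one concrete example of that hygiene, every prompt that passes through the app can be recorded in Secure Audit Log after redaction, so the log itself never contains raw PII. The sketch below posts a minimal event to the audit /v1/log endpoint; the event fields and token variable are assumptions, so check the Secure Audit Log documentation for the full event schema.

```typescript
// Minimal sketch: record that a redacted prompt was forwarded to ChatGPT.
// The event shape, /v1/log path, and token variable are assumptions; see the
// Secure Audit Log docs for the full schema and required configuration.
async function auditPrompt(userId: string, redactedPrompt: string): Promise<void> {
  await fetch(`https://audit.${process.env.PANGEA_DOMAIN}/v1/log`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.PANGEA_AUDIT_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      event: {
        actor: userId,
        action: "chatgpt_prompt",
        message: redactedPrompt, // only redacted text is ever written to the log
      },
    }),
  });
}
```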

This doesn't resolve every security concern associated with ChatGPT and generative AI, but it is a start, and it contributes to the discussion in the developer and security communities around possible approaches to securing how we interact with generative AI.

In closing, let's all continue experimenting with ChatGPT and stay tuned for more samples from Pangea on securing the interaction with generative AI tech. Check out this sample app and let us know your thoughts in our Pangea-builders Slack community. If you have other great ideas or use cases around securing the interaction with ChatGPT, consider submitting to our Pangea Securathon hackathon!


