Why On-Device Redaction Matters
Every day, people paste sensitive information into AI tools without a second thought. A freelancer summarizes a client contract in ChatGPT. A nurse asks Claude to help phrase a treatment update. A parent uploads school paperwork to get help filling it out. The AI does its job. But the moment you hit Enter, every name, address, and personal detail in that prompt travels to servers you don't control, in a jurisdiction you didn't choose, under terms you probably didn't read.
Most people accept this because the alternative seems to be not using AI at all. But that's a false choice. The question isn't whether AI is useful. It clearly is. The question is whether using it requires giving up control over the sensitive information in your prompts.
It doesn't.
What happens when you type into an AI chat
When you send a message to ChatGPT, Claude, or Gemini, your entire prompt is transmitted to the provider's servers. Every name, date, address, and account number in it. Depending on the provider and plan, that content may be logged, stored, or used to improve future models.
For most casual use, this is fine. But the moment your prompt contains personal data — yours or someone else's — it's on a server you don't own, handled under policies you didn't negotiate.
The cloud redaction paradox
Some services try to solve this by stripping sensitive data before it reaches the AI. The pitch sounds reasonable: route your content through their system, they remove the sensitive parts, and a clean version goes forward.
But think about what this actually requires. To redact your data in the cloud, you first have to upload it — unredacted — to the cloud. The redaction service sees the most sensitive version of your content: the original, with every name, address, and identifier intact. The AI provider then receives a cleaned version.
So instead of one party handling your data, there are now two. And the one that sees the raw, unredacted version is almost certainly smaller, less audited, and under less regulatory scrutiny than the major AI providers.
What "on-device" actually means
On-device redaction works differently: detection and replacement happen on your machine. Your data never leaves. Nothing is uploaded. No API call to a redaction service. No temporary copy on someone else's infrastructure.
When RedMatiq processes a document or intercepts a chat message, the entire pipeline runs locally. Entity recognition models identify names, places, and organizations. Pattern matchers catch structured data like IBANs and phone numbers. Placeholders replace the originals. The AI service receives "[PERSON_1] signed the agreement on [DATE_1]" instead of "Sarah Miller signed the agreement on March 12, 2024."
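To make the placeholder step concrete, here is a minimal sketch: deterministic patterns find structured identifiers, each match is swapped for a numbered placeholder, and a map from placeholder back to original stays on your machine. The labels, regexes, and function names are illustrative, not RedMatiq's actual implementation, and the sketch omits the on-device entity recognition model that handles names and organizations.

```typescript
// Minimal sketch of local placeholder redaction. Patterns and names are
// illustrative; a real pipeline also runs an on-device NER model for names.

type RedactionResult = {
  redacted: string;                 // safe to send to the AI service
  restoreMap: Map<string, string>;  // placeholder -> original, never uploaded
};

const PATTERNS: Array<{ label: string; regex: RegExp }> = [
  { label: "EMAIL", regex: /[\w.+-]+@[\w-]+\.[\w.]+/g },
  { label: "IBAN",  regex: /\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b/g },
  { label: "PHONE", regex: /\+?\d[\d\s().-]{7,}\d/g },
];

function redact(text: string): RedactionResult {
  const restoreMap = new Map<string, string>();
  const counters: Record<string, number> = {};
  let redacted = text;

  for (const { label, regex } of PATTERNS) {
    redacted = redacted.replace(regex, (match) => {
      counters[label] = (counters[label] ?? 0) + 1;
      const placeholder = `[${label}_${counters[label]}]`;
      restoreMap.set(placeholder, match);  // the original stays local
      return placeholder;
    });
  }
  return { redacted, restoreMap };
}

function restore(text: string, restoreMap: Map<string, string>): string {
  let result = text;
  for (const [placeholder, original] of restoreMap) {
    result = result.split(placeholder).join(original);
  }
  return result;
}
```

The property that matters is where the restore map lives: on your device, so the provider only ever sees the placeholders.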
This is AI too, running on your hardware. The difference is that it works for you, not for a vendor's data pipeline.
The AI produces the same result either way
AI models reason about structure and relationships. They don't need to know who's in your prompt to understand what you're asking. Replace "John Smith, DOB 1984-03-12" with "[PERSON_1]" and the summary is the same. The contract review, the compliance check, the email draft: all the same. A well-redacted prompt preserves everything the model needs and removes everything it doesn't.
There are edge cases where identity matters: cross-referencing a person against public records, or medical contexts where demographics affect treatment. RedMatiq lets you choose what to redact and what to keep for exactly this reason.
For the work most people do with AI every day, the risk changes but the output doesn't.
You can't unsend it
Once data hits an external server, you can't take it back. Delete the conversation and the request still happened. Server logs may persist. Training pipelines may have ingested it.
"But my provider says they don't train on my data." That may be true today. AI provider policies shift regularly. Most major providers now offer enterprise tiers with zero-retention guarantees, and those are worth having. But a contractual guarantee is a legal remedy after something goes wrong. It's not a technical barrier that stops it from happening.
It doesn't take malicious intent. Breaches happen to companies that spend hundreds of millions on security. If your data sits on a server, it's a target. The question isn't whether the provider wants to protect it, but whether they can, indefinitely.
On-device redaction sidesteps the question entirely. The data was never there.
It's not just your data
That prompt doesn't just contain your information. It contains your client's name, your patient's diagnosis, your child's school record. None of them consented to having their data sent to an AI service. None of them were asked.
For individuals, this is a matter of respect and good judgment. For professionals, it's a matter of law. GDPR, HIPAA, and similar regulations require data minimization — using the least amount of personal data necessary for the task. Sending real names and identifiers when a placeholder produces the same result raises serious compliance questions.
For regulated professions — legal, medical, financial, HR — this isn't about preference. It's about the duty of care you owe to the people whose data you handle.
What about accuracy?
If the redaction happens locally with smaller models, how reliable is it?
No system catches everything. But RedMatiq combines neural entity recognition with deterministic pattern matching for structured data like phone numbers, IBANs, and email addresses. The pattern matching doesn't depend on model judgment at all.
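Here is what "doesn't depend on model judgment" means in practice: an IBAN candidate can be confirmed with the ISO 13616 mod-97 checksum, which is pure arithmetic. The sketch below is illustrative rather than RedMatiq's code, and it skips per-country length rules, but it shows why this class of identifier can be caught deterministically.

```typescript
// IBAN detection is deterministic: a format regex plus the ISO 13616 mod-97
// check. Illustrative sketch only; per-country length rules are omitted.

function isLikelyIban(candidate: string): boolean {
  const iban = candidate.replace(/\s+/g, "").toUpperCase();
  if (!/^[A-Z]{2}\d{2}[A-Z0-9]{11,30}$/.test(iban)) return false;

  // Move the first four characters to the end and map letters to numbers (A=10 ... Z=35).
  const rearranged = iban.slice(4) + iban.slice(0, 4);
  const digits = rearranged.replace(/[A-Z]/g, (ch) => String(ch.charCodeAt(0) - 55));

  // Compute mod 97 one digit at a time to avoid overflowing a Number.
  let remainder = 0;
  for (const digit of digits) {
    remainder = (remainder * 10 + Number(digit)) % 97;
  }
  return remainder === 1;
}

console.log(isLikelyIban("DE89 3704 0044 0532 0130 00")); // true: checksum passes
console.log(isLikelyIban("DE89 3704 0044 0532 0130 01")); // false: checksum fails
```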
Even if a cloud model is marginally more accurate, it requires uploading all your data to achieve that margin. A local model that sends nothing to a third party is the better tradeoff. And because RedMatiq highlights what it's redacting before you send, you can always review and adjust.
For concrete numbers on how this layered approach performs, see our benchmark against OpenAI's Privacy Filter.
Why a browser extension
When you type into ChatGPT or Claude, the redaction must happen before the message is sent — not after. A browser extension sits at exactly the right point: between you and the API. It intercepts the prompt, replaces sensitive entities with placeholders, and forwards the clean version. When the response comes back, it restores the originals in your view.
This is what RedMatiq's Safari extension does. The redaction is invisible, automatic, and entirely local. You see your original text. The AI sees placeholders. The gap between those two versions is privacy you don't have to think about.
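In code terms, the pattern is roughly this: wrap the page's fetch, redact the prompt before the request leaves, and restore the placeholders in the response you read. The sketch reuses the illustrative redact and restore helpers from earlier; the "/chat" endpoint and payload shape are invented for illustration, and this is not RedMatiq's actual extension code.

```typescript
// Sketch of the interception pattern: redact outbound prompts, restore the
// reply locally. Uses the illustrative redact()/restore() helpers from above;
// the "/chat" endpoint and { prompt, reply } payload shape are invented here.

const originalFetch = window.fetch.bind(window);

window.fetch = async (input: RequestInfo | URL, init?: RequestInit): Promise<Response> => {
  const url =
    typeof input === "string" ? input : input instanceof URL ? input.href : input.url;

  // Pass anything that isn't a chat request through untouched.
  if (!url.includes("/chat") || typeof init?.body !== "string") {
    return originalFetch(input, init);
  }

  // Redact on-device before the prompt leaves the machine.
  const payload = JSON.parse(init.body);
  const { redacted, restoreMap } = redact(payload.prompt);
  const response = await originalFetch(url, {
    ...init,
    body: JSON.stringify({ ...payload, prompt: redacted }),
  });

  // Put the originals back only in the local copy you read.
  const data = await response.json();
  data.reply = restore(data.reply, restoreMap);
  return new Response(JSON.stringify(data), {
    status: response.status,
    headers: response.headers,
  });
};
```

In practice an extension also has to run this wrapper in the page's own context and handle streamed responses, but the division of labor stays the same: redaction and restoration both happen locally.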
The safest data in a breach is the data that was never there
Every privacy strategy eventually comes down to trust: provider terms, vendor promises, enterprise agreements. On-device processing changes the equation. Instead of trusting someone else to protect your data, you remove the need for trust entirely. The data stays on your machine because it was never sent anywhere else.
The best protection against a breach isn't a stronger vault — it's having nothing in it when it gets cracked.
No accounts, no telemetry, no server-side processing. And you don't have to take our word for it: open your browser's developer tools, inspect the outgoing requests, and see for yourself.
Related reading
- Using AI with Confidential Documents — A practical guide to using AI when your documents contain sensitive data.
- What Your PDF Knows About You — The metadata your documents carry that you didn't put there.
- Sensitive Data in the Modern Workplace — Five industries, five statistics, one common problem.
Your data, your machine
RedMatiq strips sensitive information from documents and AI chats — locally, before anything leaves your device.