20 comments

  • postalcoder 7 hours ago
    Very neat, but recently I've tried my best to reduce my extension usage across all apps (browsers/ide).

    I do something similar locally by manually specifying all the things I want scrubbed/replaced and having Keyboard Maestro run a script over my system clipboard whenever I do a paste operation that's mapped to `hyperkey + v` (roughly the shape of the sketch below). The plus side of this is that the paste is instant. The latency introduced by even the littlest of inference is enough friction to make you want to ditch the process entirely.

    Another plus of the non-extension solution is that it's application agnostic.
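
    A minimal sketch of the kind of scrub script I mean, assuming macOS (pbpaste/pbcopy); the rules here are made-up placeholders, not my actual list:

    ```python
    #!/usr/bin/env python3
    import re
    import subprocess

    # Hypothetical scrub rules: regex -> replacement.
    RULES = [
        (re.compile(r"sk-[A-Za-z0-9]{20,}"), "<API_KEY>"),          # OpenAI-style keys
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),        # email addresses
        (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP_ADDR>"),  # IPv4 addresses
    ]

    def scrub(text: str) -> str:
        for pattern, replacement in RULES:
            text = pattern.sub(replacement, text)
        return text

    if __name__ == "__main__":
        # Read the clipboard, scrub it, and write it back before pasting.
        clipboard = subprocess.run(["pbpaste"], capture_output=True, text=True).stdout
        subprocess.run(["pbcopy"], input=scrub(clipboard), text=True)
    ```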

    • informal007 7 hours ago
      Smart idea! Thanks for sharing.

      If we move the detection and modification from the paste operation to the copy operation, that would remove the in-use latency.

      • postalcoder 5 hours ago
        That's a great idea. My original excuse for not doing that was that I copy so many things but, duh, I could just key the sanitizing copy to `hyperkey + c`.
    • bjord 5 hours ago
      out of curiosity, what's the motivation behind trying to reduce your extension usage everywhere?
      • postalcoder 5 hours ago
        Multiple things: 1) extensions are overly permissive, 2) so many of them are sold to shady entities without a peep from the developer, and 3) it's never been easier to generate my own tooling.
        • bjord 4 hours ago
          brutal. I just typed out a much longer response and lost it when my time-wasting extension saw the URL change (time for a text area cache extension?)

          you might find this useful: https://github.com/classvsoftware/under-new-management

          my port (and now fork): https://github.com/maxtheaxe/under-new-management-firefox

          they currently (PRs are welcome!) only check listing info. mine doesn't route requests through an external (non-addon-store) server.

          a couple of PRs are overdue on mine because linting changes made the diffs impossible to read. I'll get to it. (see the wxt-migration branch)

        • sgc 4 hours ago
          I just download the extension file, check it out, and install it locally. No worries about future updates until something breaks (doesn't tend to happen).
          • bjord 4 hours ago
            at least on firefox, you can also just disallow automatic updates
            • sgc 1 hour ago
              I want to see the source, and I don't want to worry about future browser changes messing with my settings.
  • ttul 5 hours ago
    This should be a native feature of the chat apps for all the major LLM providers. There’s no reason why PII can’t be masked before it reaches the API endpoint and then restored when the LLM responds: “Mary Smith” becomes “Samantha Robertson”, and then back to “Mary Smith” on responses from the LLM. A small local model (such as the BERT model in this project) detects the PII.

    Something like this would greatly increase end-user confidence. PII in the input could be highlighted so the user knows what is being hidden from the LLM; the core mask/restore round-trip could look like the sketch below.
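
    A rough sketch of that round-trip (the detector here is a toy regex standing in for the local NER model; the names and aliases are illustrative):

    ```python
    import re

    # Toy stand-in for the local detector; a real version would run the
    # small BERT/NER model and return (start, end, label) spans.
    KNOWN_PII = re.compile(r"Mary Smith|mary@example\.com")

    def detect_pii(text: str) -> list[tuple[int, int, str]]:
        return [(m.start(), m.end(), "PERSON") for m in KNOWN_PII.finditer(text)]

    def mask(text: str) -> tuple[str, dict[str, str]]:
        """Replace detected spans with stable aliases; return text + reverse map."""
        forward: dict[str, str] = {}  # original -> alias
        reverse: dict[str, str] = {}  # alias -> original
        for start, end, label in sorted(detect_pii(text), reverse=True):
            original = text[start:end]  # replace from the end so offsets stay valid
            alias = forward.setdefault(original, f"<{label}_{len(forward)}>")
            reverse[alias] = original
            text = text[:start] + alias + text[end:]
        return text, reverse

    def unmask(text: str, reverse: dict[str, str]) -> str:
        """Swap the aliases in the LLM's response back to the real values."""
        for alias, original in reverse.items():
            text = text.replace(alias, original)
        return text

    masked, table = mask("Email Mary Smith at mary@example.com about the invoice.")
    # masked == "Email <PERSON_1> at <PERSON_0> about the invoice."
    print(unmask(masked, table))  # restores the original text
    ```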

  • throwaway613745 6 hours ago
    Maybe you should fix your logging to not output secrets in plaintext? Every single modern logging utility has this ability.
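
    For example, with Python's stdlib logging, a filter can scrub records before they are emitted (the key patterns here are illustrative):

    ```python
    import logging
    import re

    # Example secret shapes: OpenAI-style and AWS-style keys.
    SECRET = re.compile(r"sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16}")

    class RedactSecrets(logging.Filter):
        """Rewrite any record whose message matches a secret pattern."""
        def filter(self, record: logging.LogRecord) -> bool:
            record.msg = SECRET.sub("[REDACTED]", str(record.msg))
            return True

    logging.basicConfig(level=logging.INFO)
    logging.getLogger().addFilter(RedactSecrets())
    logging.info("token=sk-abcdefghijklmnopqrstuvwx")  # logs "token=[REDACTED]"
    ```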
  • pondemic 6 hours ago
    Any plans to make the extension replace whatever’s flagged with dummy data? Knowing I have sensitive data is usually not the problem; constantly needing to replace or remove it is, particularly with larger token counts.
  • NJL3000 2 hours ago
    Using a BERT model for DLP at the door is a great idea. Have you thought about integrating this into semantic router as an option, leaving out the look-ahead? Maybe a smaller code base?
  • willwade 8 hours ago
    I wonder if this would have been useful: https://github.com/microsoft/presidio - it's heavy but looks really good. There is a lite version.
    • shaoz 2 hours ago
      I've used it; lots of false positives out of the box. You need to do a ton of tuning or pair it with a transformer/BERT model, but at that point it's basically the same thing as the OP's project.
    • threecheese 4 hours ago
      Looks like it uses Google's LangExtract, which uses only LLMs for NLP, while OP is using a small NER model that runs locally.
    • winchester6788 6 hours ago
      full of false positives though, but it's definitely good for some types of entities and regexes
  • gnarlouse 1 hour ago
    I'd like to see this as a Windsurf plugin.
  • sailfast 7 hours ago
    How do you prevent these models from reading secrets in your repos locally?

    It’s one thing for the ENVs to be user-pasted, but typically you’re also giving the bots access to your file system to interrogate and understand your repos, right? Does this also block that access for ENVs by detecting them and applying granular permissions?

    • woodrowbarlow 1 hour ago
      by putting secrets in your environment instead of in your files, and by running AI tools in a dedicated environment that has its own set of limited, revocable secrets.
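
      For instance (the allowlist and tool name below are made up):

      ```python
      import os
      import subprocess

      # Hypothetical allowlist: only what the AI tool actually needs,
      # with a scoped, revocable key instead of your real credentials.
      ALLOWED = {"PATH", "HOME", "TERM", "SCOPED_API_KEY"}

      clean_env = {k: v for k, v in os.environ.items() if k in ALLOWED}
      subprocess.run(["some-ai-tool"], env=clean_env)  # placeholder command
      ```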
  • greenbeans12 6 hours ago
    This is pretty cool. I barely use the web UIs for LLMs anymore. Any way you could make a wrapper for Claude Code/Cursor/Gemini CLI? Ideally it works like GitHub push protection in GH Advanced Security.
  • cjonas 8 hours ago
    Curious about how much latency this adds (per input token)? Obviously it depends on your computer, but is it ~10s or ~1s?

    Also, how does this deal with inquiries where a piece of PII is important to the task itself? I assume you just have to turn it off?

  • itopaloglu83 8 hours ago
    It wasn’t very clear in the video: does it trigger on the paste event or when the page is activated?

    There are a lot of websites that scan the clipboard to improve the user experience, but they also pose a great risk to users' privacy.

  • fmkamchatka 7 hours ago
    Could this run at the network level (like TripMode)? So it would catch usage from web-based apps but also from the ChatGPT app, Codex CLI, etc.?
    • p_ing 7 hours ago
      Deploy a TLS interceptor (forward proxy). There are many out there, both free and paid-for; there are also agent-based endpoint solutions like Netskope that do this so you don't have to route traffic through an internal device.
    • robertinom 7 hours ago
      That would be a great way to get some revenue from "enterprise" customers!
  • cedws 5 hours ago
    Anything like this for Claude Code/calls to OpenRouter?
  • sciencesama 6 hours ago
    Develop a Pi-hole-style ad blocker.
  • maddmann 5 hours ago
    Really good idea!
  • jedisct1 7 hours ago
    LLMs don't need your secret tokens (but MCP servers hand them over anyway): https://00f.net/2025/06/16/leaky-mcp-servers/

    Encrypting sensitive data can be more useful than blocking entire requests, as LLMs can reason about that data even without seeing it in plain text.

    The ipcrypt-pfx and uricrypt prefix-preserving schemes have been designed for that purpose.
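
    To illustrate the prefix-preserving idea with a toy (one-way HMAC pseudonymization, not the actual reversible ipcrypt-pfx/uricrypt constructions): each output segment depends only on the input prefix up to that segment, so paths that share a prefix map to pseudonyms that share a prefix.

    ```python
    import hashlib
    import hmac

    KEY = b"locally-held-secret"  # assumption: this never leaves your machine

    def pseudonymize_path(path: str) -> str:
        out, prefix = [], ""
        for segment in path.strip("/").split("/"):
            prefix += "/" + segment
            digest = hmac.new(KEY, prefix.encode(), hashlib.sha256).hexdigest()[:8]
            out.append(digest)
        return "/" + "/".join(out)

    # These share their first two output segments, mirroring the input.
    print(pseudonymize_path("/users/alice/billing"))
    print(pseudonymize_path("/users/alice/settings"))
    ```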

  • willwade 8 hours ago
    Can I have this between my machine and git, please? It's happened twice now that I've committed .env* files and it totally passed me by (usually because it's to a private repo). Then later on we/someone clears down the files and forgets to rewrite git history before pushing live. It should never have gotten there in the first place. (I wish GitHub did a scan before making a repo public.)
    • mh- 7 hours ago
      You can use git hooks. Pre-commit specifically.

      https://git-scm.com/docs/githooks
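
      A minimal sketch of such a hook (save as .git/hooks/pre-commit and chmod +x; the .env* pattern matches the files mentioned upthread):

      ```python
      #!/usr/bin/env python3
      import fnmatch
      import subprocess
      import sys

      # List files staged for this commit (added/copied/modified).
      staged = subprocess.run(
          ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
          capture_output=True, text=True, check=True,
      ).stdout.splitlines()

      offenders = [f for f in staged
                   if fnmatch.fnmatch(f.rsplit("/", 1)[-1], ".env*")]

      if offenders:
          print("Refusing to commit env files:", *offenders, sep="\n  ")
          sys.exit(1)  # a non-zero exit aborts the commit
      ```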

    • acheong08 8 hours ago
      GitHub does warn you when you have API keys in your repo. Alternatively, there are CLI tools such as TruffleHog that you can put in pre-commit hooks so they run automatically before commits.
    • hombre_fatal 7 hours ago
      At least you can put .env in the global gitignore. I haven't committed a .DS_Store in 15 years because of it - its secrets will die with me.
    • PunchyHamster 6 hours ago
      Aside from the already-mentioned hooks, you can add a global .gitignore for .env files, e.g.:
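
      ```
      git config --global core.excludesFile ~/.gitignore_global
      echo ".env*" >> ~/.gitignore_global
      ```
      (The filename is just a convention; any path works.)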