This is the natural consequence of building everything around "the agent needs access to everything to be useful." The more capabilities you hand an agent, the larger the attack surface when it encounters a malicious page.
The simplest mitigation is also the least popular one: don't give the agent credentials in the first place. Scope it to read-only where possible, and treat every page it visits as untrusted input. But that limits what agents can do, which is why nobody wants to hear it.
I absolutely agree, although even that doesn't solve the root problem. The underlying LLM architecture is fundamentally insecure as it doesn't separate between instructions and pure content to read/operate on.
I wonder if it'd be possible to train an LLM with such architecture: one input for the instructions/conversation and one "data-only" input. Training would ensure that the latter isn't interpreted as instructions, although I'm not knowledgeable enough to understand if that's even theoretically possible: even if the inputs are initially separate, they eventually mix in the neural network. However, I imagine that training could be done with massive amounts of prompt injections in the "data-only" input to penalize execution of those instructions.
However, you may immediately see how using same input space essentially relies on the model itself to do the judgement which can't be ultimately trusted
> one input for the instructions/conversation and one "data-only" input
We learned so many years ago that separating code and data was important for security. It's such a huge step backwards that it's been tossed in the garbage.
We tested this systematically and the results are more nuanced than you might expect.
We built a hotel listing page with a display:none injection ($189 listing with a hidden override to book $4,200) and tested six DOM extraction APIs via Chrome CDP. The split: innerText, Chrome Accessibility Tree, and Playwright's ARIA snapshot all filter it. textContent, innerHTML, and direct querySelector don't.
Then we audited the source code of all four major browser MCP tools: chrome-devtools-mcp (Google), playwright-mcp (Microsoft), chrome-cdp-skill, and puppeteer-mcp. Every single one defaults to a safe extraction method — accessibility tree or innerText. That's the good news.
The bad news: three out of four expose evaluate_script or eval commands that let the agent run arbitrary JS in the page context. When the accessibility tree doesn't return enough text (it often only gives headings and buttons), the agent's natural next step is textContent or innerHTML via eval. This is even shown as an example in the chrome-devtools-mcp docs.
Also: display:none is just the simplest technique. We tested opacity:0, font-size:0, and position:absolute left:-9999px — all three bypass even the safe defaults because the elements are technically "rendered" and accessible to screen readers. A determined attacker who knows you're using the accessibility tree can trivially switch to opacity-based hiding.
So. Yesterday I had a need to, from my android phone, have ChatGPT et Al mobile app do something I THOUGHT was very simple. Read a publicly available Google spreadsheet (I gave it the /htmlview which in incognito I could see ALL the rows (maybe close to 1000 rows). None could do it. Not ChatGPT, not MS Copilot, not Claude app, not Gemini, not even GitHub copilot in a web tab. Some said I can’t even see that. Some could see it but couldn’t do anything with it. Some could see it but only the first 100 lines. All I wanted to do was have it ingest the entire thing and then spit me back out in a csv or txt any rows that mentioned 4K. Seemed simple but these things couldn’t even get past that first hurdle. Weirdly, I remembered I had the Grok app too and gave it a shot, and, it could do it. I guess it is more intelligent in it’s abilities to scrape/parse/whatever all kinds of different types of sites.
I’d guess this is the type of thing that might actually excel in your agent or these claw clones, because they literally can just do whatever bash/tool type actions on whatever VM or sandboxed environment they live on?
Yeah, I think this was an issue of Google blocking bot user agents more than the LLMs not being smart enough. A bot that can run curl (like mine) should read it no problem.
Ah ok that actually makes sense as the reason. And I think I’ve seen that with even coding agents when they are trying to look up stuff on the web or URLs you give them, now that I think about it..
Ah ok, it wasn't clear in the readme to me. I saw everything Claude/Anthropic. Also I assume if we throw this on a VM to obviously do some basic VM /ssh hardening and it looks like it's just need to put the docker compose stuff behind caddy or whatever reverse proxy
For the authors of openguard: if you want me to use your tool, you have to publish engineering documentation. All you have is a quickstart guide and configuration section. I have no idea how this works under the hood or whether it works for all my use cases, so I'm not even going to try it.
Thank you for the feedback! It's very early days of the project, there's indeed a lot to improve in this aspect.
OpenGuard is an OpenAI/Anthropic-compatible LLM proxy with middleware-style configuration for protocol-level inspections of the traffic that goes through it. Right now it has a small set of guards that is being actively expanded.
I am building https://agentblocks.ai for just this; you set fine-grained rules on what your agents are allowed to access and when they have to ask you out-of-channel (eg via WhatsApp or Slack) for permissions, with no direct agent access. It works today, well, supports more tools than are on the website, and if you have any need for this at all, I’d love to give you an account: pete@agentblocks.ai
Works great with OpenClaw, Claude Cowork, or anything, really
This is a Gemini deep research response that someone ran through some kind of shortening prompt. They even kept all the footnotes.
It used to be that startups would run blogs that did technical analysis, maybe talked a little market research, advanced the strategy of the business.
The good ones showed you how the leaders of the business thought, built trust and generated leads.
Now we have whatever this bullshit is. No evidence of human thought or experience, it's not even apparent what the objective of the piece is.
The prose is unbearably bad. Your brain just sort of slips on it. There's basically zero through line in this thing. a section ends, the next one begins, and it's not even clear what's under discussion.
One section starts "The clearest public descriptions landed between mid-2025 and early 2026." Descriptions of what? No clarity on this. Probably because it got "tersed" out.
At this point I feel like blogs are like lawn ornaments for startups. Even now, the sheer contempt for other people's time and attention is still a mild shock to me.
I actually assure you that no deep research from any of the provider was used to create the article itself, but I used a custom-built research pipeline for creating a dossier on promt injections as a starting point.
The article was intended as an overview of prompt injections with my prediction what will happen next in this space, which is a soft justification why the tools like OpenGuard are needed. I've spent multiple days iterating on the prose without an ill intent, mostly aiming to make it dense and informative to avoid wasting people's time, which I see backfired here.
I'm deeply sorry that it left such a bad taste despite my best effort, there's still a lot to learn for me.
The simplest mitigation is also the least popular one: don't give the agent credentials in the first place. Scope it to read-only where possible, and treat every page it visits as untrusted input. But that limits what agents can do, which is why nobody wants to hear it.
I wonder if it'd be possible to train an LLM with such architecture: one input for the instructions/conversation and one "data-only" input. Training would ensure that the latter isn't interpreted as instructions, although I'm not knowledgeable enough to understand if that's even theoretically possible: even if the inputs are initially separate, they eventually mix in the neural network. However, I imagine that training could be done with massive amounts of prompt injections in the "data-only" input to penalize execution of those instructions.
However, you may immediately see how using same input space essentially relies on the model itself to do the judgement which can't be ultimately trusted
We learned so many years ago that separating code and data was important for security. It's such a huge step backwards that it's been tossed in the garbage.
We built a hotel listing page with a display:none injection ($189 listing with a hidden override to book $4,200) and tested six DOM extraction APIs via Chrome CDP. The split: innerText, Chrome Accessibility Tree, and Playwright's ARIA snapshot all filter it. textContent, innerHTML, and direct querySelector don't.
Then we audited the source code of all four major browser MCP tools: chrome-devtools-mcp (Google), playwright-mcp (Microsoft), chrome-cdp-skill, and puppeteer-mcp. Every single one defaults to a safe extraction method — accessibility tree or innerText. That's the good news.
The bad news: three out of four expose evaluate_script or eval commands that let the agent run arbitrary JS in the page context. When the accessibility tree doesn't return enough text (it often only gives headings and buttons), the agent's natural next step is textContent or innerHTML via eval. This is even shown as an example in the chrome-devtools-mcp docs.
Also: display:none is just the simplest technique. We tested opacity:0, font-size:0, and position:absolute left:-9999px — all three bypass even the safe defaults because the elements are technically "rendered" and accessible to screen readers. A determined attacker who knows you're using the accessibility tree can trivially switch to opacity-based hiding.
https://github.com/skorokithakis/stavrobot
I’d guess this is the type of thing that might actually excel in your agent or these claw clones, because they literally can just do whatever bash/tool type actions on whatever VM or sandboxed environment they live on?
Performance will vary, of course, but please let me know if you try it! GLM 5 or Qwen 3.5 might be good!
OpenGuard is an OpenAI/Anthropic-compatible LLM proxy with middleware-style configuration for protocol-level inspections of the traffic that goes through it. Right now it has a small set of guards that is being actively expanded.
Works great with OpenClaw, Claude Cowork, or anything, really
This is a Gemini deep research response that someone ran through some kind of shortening prompt. They even kept all the footnotes.
It used to be that startups would run blogs that did technical analysis, maybe talked a little market research, advanced the strategy of the business.
The good ones showed you how the leaders of the business thought, built trust and generated leads.
Now we have whatever this bullshit is. No evidence of human thought or experience, it's not even apparent what the objective of the piece is.
The prose is unbearably bad. Your brain just sort of slips on it. There's basically zero through line in this thing. a section ends, the next one begins, and it's not even clear what's under discussion.
One section starts "The clearest public descriptions landed between mid-2025 and early 2026." Descriptions of what? No clarity on this. Probably because it got "tersed" out.
At this point I feel like blogs are like lawn ornaments for startups. Even now, the sheer contempt for other people's time and attention is still a mild shock to me.
I actually assure you that no deep research from any of the provider was used to create the article itself, but I used a custom-built research pipeline for creating a dossier on promt injections as a starting point.
The article was intended as an overview of prompt injections with my prediction what will happen next in this space, which is a soft justification why the tools like OpenGuard are needed. I've spent multiple days iterating on the prose without an ill intent, mostly aiming to make it dense and informative to avoid wasting people's time, which I see backfired here.
I'm deeply sorry that it left such a bad taste despite my best effort, there's still a lot to learn for me.