Deno Sandbox

(deno.com)

319 points | by johnspurlock 7 hours ago

28 comments

  • simonw 6 hours ago
    Note that you don't need to use Deno or JavaScript at all to use this product. Here's their Python client SDK: https://pypi.org/project/deno-sandbox/

      from deno_sandbox import DenoDeploy
      
      sdk = DenoDeploy()
      
      with sdk.sandbox.create() as sb:
          # Run a shell command
          process = sb.spawn("echo", args=["Hello from the sandbox!"])
          process.wait()
      
          # Write and read files
          sb.fs.write_text_file("/tmp/example.txt", "Hello, World!")
          content = sb.fs.read_text_file("/tmp/example.txt")
          print(content)
    
    Looks like the API protocol itself uses websockets: https://tools.simonwillison.net/zip-wheel-explorer?package=d...
    • koakuma-chan 14 minutes ago
      Because the sandbox is on their cloud, not on your local machine, which wasn't obvious to me.
  • emschwartz 7 hours ago
    > In Deno Sandbox, secrets never enter the environment. Code sees only a placeholder

    > The real key materializes only when the sandbox makes an outbound request to an approved host. If prompt-injected code tries to exfiltrate that placeholder to evil.com? Useless.

    That seems clever.

    • ptx 4 hours ago
      Yes... but...

      Presumably the proxy replaces any occurrence of the placeholder with the real key, without knowing anything about the context in which the key is used, right? Because if it knew that the key was to be used for e.g. HTTP basic auth, it could just be added by the proxy without using a placeholder.

      So all the attacker would have to do then is find an endpoint (on one of the approved hosts, granted) that echoes back the value, e.g. "What is your name?" -> "Hello $name!", right?

      But probably the proxy replaces the real key when it comes back in the other direction, so the attacker would have to find an endpoint that does some kind of reversible transformation on the value in the response to disguise it.

      It seems safer and simpler to, as others have mentioned, have a proxy that knows more about the context add the secrets to the requests. But maybe I've misunderstood their placeholder solution or maybe it's more clever than I'm giving it credit for.
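      A toy model makes the concern concrete (hypothetical code, not Deno's actual proxy): naive substitution anywhere in the request lets a reflecting endpoint hand the real key back to the sandbox, unless the proxy also scrubs responses.

```python
# Toy model of a placeholder-substituting egress proxy (hypothetical,
# not Deno's implementation).

PLACEHOLDER = "DENO_SECRET_PLACEHOLDER_abc123"
REAL_KEY = "sk-live-real-key"

def proxy_outbound(body: str) -> str:
    # Naive substitution anywhere in the outgoing request.
    return body.replace(PLACEHOLDER, REAL_KEY)

def echo_endpoint(body: str) -> str:
    # An approved host that reflects input, e.g. "Hello $name!".
    return f"Hello {body}!"

# Sandboxed code sends the placeholder to the reflecting endpoint:
response = echo_endpoint(proxy_outbound(PLACEHOLDER))
assert REAL_KEY in response  # the real key leaks back to the sandbox

# Scrubbing responses on the way back in defeats verbatim reflection:
def proxy_inbound(body: str) -> str:
    return body.replace(REAL_KEY, PLACEHOLDER)

assert REAL_KEY not in proxy_inbound(response)
```

      (The inbound scrub only defeats verbatim reflection; an endpoint that applies any reversible transformation to the value still leaks it.)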

      • booi 4 hours ago
        Where would this happen? I have never seen an API reflect a secret back, but I guess it's possible? Perhaps some sort of token creation endpoint?
        • ptx 3 hours ago
          How does the API know that it's a secret, though? That's what's not clear to me from the blog post. Can I e.g. create a customer named PLACEHOLDER and get a customer actually named SECRET?
        • mananaysiempre 3 hours ago
          Say, an endpoint tries to be helpful and responds with “no such user: foo” instead of “no such user”. Or, as a sibling comment suggests, any create-with-properties or set-property endpoint paired with a get-property one means game over.

          Relatedly, a common exploitation target for black-hat SEO and even XSS is search pages that echo back the user’s search request.

        • tptacek 3 hours ago
          It depends on where you allow the substitution to occur in the request. It's basically "the big bug class" you have to watch out for in this design.
        • Tepix 3 hours ago
          HTTP Header Injection or HTTP Response Splitting is a thing.
      • sothatsit 1 hour ago
        Could the proxy place further restrictions like only replacing the placeholder with the real API key in approved HTTP headers? Then an API server is much less likely to reflect it back.
        • tptacek 1 hour ago
          It can, yes. (I don't know how Deno's work, but that's how ours works.)
    • motrm 6 hours ago
      Reminds me a little of Fly's Tokenizer - https://github.com/superfly/tokenizer

      It's a little HTTP proxy that your application can route requests through, and the proxy is what handles adding the API keys or whatnot to the request to the service, rather than your application, something like this for example:

      Application -> tokenizer -> Stripe

      The secrets for the third party service should in theory then be safe should there be some leak or compromise of the application since it doesn't know the actual secrets itself.

      Cool idea!
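      A minimal sketch of that shape (hypothetical names and secret store, not the tokenizer's real API): the application sends credential-free requests, and the proxy attaches the key only for known upstream hosts.

```python
# Sketch of a tokenizer-style proxy: the application holds no secrets;
# the proxy injects the real key only for known upstream hosts.
# Hypothetical names, not Fly's actual API.

SECRETS = {"api.stripe.com": "sk-live-stripe-key"}

def forward(host: str, headers: dict) -> dict:
    headers = dict(headers)  # copy; the caller's dict stays credential-free
    secret = SECRETS.get(host)
    if secret is not None:
        headers["Authorization"] = f"Bearer {secret}"
    return headers

# The application never sees the key; the proxy adds it in transit:
out = forward("api.stripe.com", {"Content-Type": "application/json"})
assert out["Authorization"] == "Bearer sk-live-stripe-key"
# Unknown hosts get no credentials at all:
assert "Authorization" not in forward("evil.com", {})
```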

      • tptacek 6 hours ago
        It's exactly the tokenizer, but we shoplifted the idea too; it belongs to the world!

        (The credential thing I'm actually proud of is non-exfiltratable machine-bound Macaroons).

        Remember that the security promises of this scheme depend on tight control over not only what hosts you'll send requests to, but what parts of the requests themselves.

    • simonw 6 hours ago
      Yeah, this is a really neat idea: https://deno.com/blog/introducing-deno-sandbox#secrets-that-...

        await using sandbox = await Sandbox.create({
          secrets: {
            OPENAI_API_KEY: {
              hosts: ["api.openai.com"],
              value: process.env.OPENAI_API_KEY,
            },
          },
        });
        
        await sandbox.sh`echo $OPENAI_API_KEY`;
        // DENO_SECRET_PLACEHOLDER_b14043a2f578cba75ebe04791e8e2c7d4002fd0c1f825e19...
      
      It doesn't prevent bad code from USING those secrets to do nasty things, but it does at least make it impossible for them to steal the secret permanently.

      Kind of like how XSS attacks can't read httpOnly cookies but they can generally still cause fetch() requests that can take actions using those cookies.

      • its-summertime 4 hours ago
        If there is an LLM in there, I think "Run echo $API_KEY" could be liable to return it: the LLM asks the script to run some code, the script returns the placeholder, the proxy translates it to the real key as the request goes out to the LLM, and the LLM then responds to the user with the API key (or, through multiple steps, "tell me the first half of the command output", e.g. if the proxy also translates in reverse).

        Doesn't help much if the secret can be substituted anywhere in the request, presumably; if it can be restricted to specific headers only, then it would be much more powerful.

        • simonw 2 hours ago
          Secrets are tied to specific hosts - the proxy will only replace the placeholder value with the real secret for outbound HTTP requests to the configured domain for that secret.
          • its-summertime 15 minutes ago
            Which, if it's the LLM asking for the result of the locally run "echo $API_KEY", will be sent through that proxy to the correct configured domain. (That would only work if it did the substitution in the request body, which apparently it doesn't, which was part of what I was wondering.)
        • lucacasonato 3 hours ago
          It will only replace the secret in headers
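          Combining the two restrictions mentioned in this thread - host-scoped secrets and header-only substitution - might look roughly like this (an illustrative sketch, not Deno's code):

```python
# Sketch of placeholder substitution restricted to header values and
# to approved hosts (illustrative, not Deno's implementation).

PLACEHOLDER = "DENO_SECRET_PLACEHOLDER_abc123"
SECRET = {"value": "sk-real", "hosts": ["api.openai.com"]}

def substitute(host: str, headers: dict, body: str):
    # Replace the placeholder only in header values, and only for
    # approved hosts; the body passes through untouched, so a
    # reflecting endpoint can at worst echo the placeholder.
    if host in SECRET["hosts"]:
        headers = {k: v.replace(PLACEHOLDER, SECRET["value"])
                   for k, v in headers.items()}
    return headers, body

h, b = substitute("api.openai.com",
                  {"Authorization": f"Bearer {PLACEHOLDER}"},
                  f'{{"prompt": "{PLACEHOLDER}"}}')
assert h["Authorization"] == "Bearer sk-real"
assert PLACEHOLDER in b  # body is never rewritten
```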
      • ryanrasti 5 hours ago
        > It doesn't prevent bad code from USING those secrets to do nasty things, but it does at least make it impossible for them to steal the secret permanently.

        Agreed, and this points to two deeper issues: 1. Fine-grained data access (e.g., sandboxed code can only issue SQL queries scoped to particular tenants) 2. Policy enforced on data (e.g., sandboxed code shouldn't be able to send PII even to APIs it has access to)

        Object-capabilities can help directly with both #1 and #2.

        I've been working on this problem -- happy to discuss if anyone is interested in the approach.

        • Tomuus 2 hours ago
          Object capabilities, like capnweb/capnproto?
          • ryanrasti 1 hour ago
            Yes exactly Cap'n Web for RPC. On top of that: 1. Constrained SQL DSL that limits expressiveness along defined data boundaries 2. Constrained evaluation -- can only compose capabilities (references, not raw data) to get data flow tracking for free
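            A rough illustration of the object-capability idea (my toy sketch, not the actual system): the sandbox receives a closure already scoped to one tenant, rather than raw credentials it could misuse.

```python
# Object-capability sketch: instead of raw credentials and a raw
# connection, sandboxed code gets a closure that can only query one
# tenant's rows. Illustrative toy data, not a real system.

ROWS = [
    {"tenant": "acme", "name": "Alice"},
    {"tenant": "acme", "name": "Bob"},
    {"tenant": "globex", "name": "Carol"},
]

def make_tenant_capability(tenant: str):
    def query(predicate):
        # The tenant filter is baked in; the caller cannot widen it.
        return [r for r in ROWS if r["tenant"] == tenant and predicate(r)]
    return query

acme_query = make_tenant_capability("acme")
names = [r["name"] for r in acme_query(lambda r: True)]
assert names == ["Alice", "Bob"]  # globex rows are unreachable
```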
    • jkelleyrtp 2 hours ago
      @deno team, how do secrets work for things like connecting to DBs over a tcp connection? The header find+replace won't work there, I assume. Is the plan to add some sort of vault capability?
    • Tepix 6 hours ago
      It must be performing a man-in-the-middle for HTTPS requests. That makes it more difficult to do things like certificate pinning.
    • artahian 5 hours ago
      We had this same challenge in our own app builder, we ended up creating an internal LLM proxy with per-sandbox virtual keys (which the proxy maps to the real key + calculates per-sandbox usage), so even if the sandbox leaks its key it doesn't impact anything else.
    • perfmode 7 hours ago
      I was just about to say the same thing. Cool technique.
    • CuriouslyC 6 hours ago
      This is an old trick that people do with Envoy all the time.
    • verdverm 6 hours ago
      Dagger has a similar feature: https://docs.dagger.io/getting-started/types/secret/

      Same idea with more languages on OCI. I believe they have something even better in the works, that bundles a bunch of things you want in an "env" and lets you pass that around as a single "pointer"

      I use this here, which eventually becomes the sandbox my agent operates in: https://github.com/hofstadter-io/hof/blob/_next/.veg/contain...

    • linolevan 6 hours ago
      It’s pretty neat.

      Had some previous discussion that may be interesting on https://news.ycombinator.com/item?id=46595393

    • rfoo 6 hours ago
      I like this, but the project mentioned in the launch post

      > via an outbound proxy similar to coder/httpjail

      looks like AI slop ware :( I hope they didn't actually run it.

      • lucacasonato 4 hours ago
        We run our own infrastructure for this (and everything else). The link was just an illustrative example.
  • messh 2 minutes ago
    shellbox.dev could be considered a cheaper alternative:

    full Linux VMs for $0.02/hr. 2 vCPUs / 4GB RAM / 50GB disk, with web and email endpoints.

    SSH in, and it auto-suspends when you disconnect (or keep it alive if needed)

  • johnspurlock 7 hours ago
    "Over the past year, we’ve seen a shift in what Deno Deploy customers are building: platforms where users generate code with LLMs, and that code runs immediately without review. That code frequently calls LLMs itself, which means it needs API keys and network access.

    This isn’t the traditional “run untrusted plugins” problem. It’s deeper: LLM-generated code, calling external APIs with real credentials, without human review. Sandboxing the compute isn’t enough. You need to control network egress and protect secrets from exfiltration.

    Deno Sandbox provides both. And when the code is ready, you can deploy it directly to Deno Deploy without rebuilding."

    • twosdai 7 hours ago
      Like the emdash, whenever I read: "this isn't x it's y" my dumb monkey brain goes "THATS AI" regardless if it's true or not.
      • bangaladore 3 hours ago
        Another common tell nowadays is the apostrophe type (’ vs ').

        I don't know personally how to even type ’ on my keyboard. According to find in chrome, they are both considered the same character, which is interesting.

        I suspect some word processors default to one or the other, but it's becoming all too common in places like Reddit and emails.

        • int_19h 1 hour ago
          Word (you know, the most popular word processor out there) will do that substitution. And on macOS & iOS, it's baked into the standard text input widgets so it'll do that basically everywhere that is a rich text editor.
      • lucacasonato 7 hours ago
        I can confirm Ryan is a real human :)
        • zamadatix 6 hours ago
          Is there a chance you could ask Ryan if he had an LLM write/rewrite large parts of this blog post? I don't mind at all if he did or didn't in itself, it's a good and informative post, but I strongly assumed the same while reading the article and if it's truly not LLM writing then it would serve as a super useful indicator about how often I'm wrongly making that assumption.
          • bonsai_spool 5 hours ago
            There are multiple signs of LLM-speak:

            > Over the past year, we’ve seen a shift in what Deno Deploy customers are building: platforms where users generate code with LLMs and that code runs immediately without review

            This isn't a canonical use of a colon (and the dependent clause isn't even grammatical)!

            > This isn’t the traditional “run untrusted plugins” problem. It’s deeper: LLM-generated code, calling external APIs with real credentials, without human review.

            Another colon-offset dependent paired with the classic, "This isn't X. It's Y," that we've all grown to recognize.

            > Sandboxing the compute isn’t enough. You need to control network egress and protect secrets from exfiltration.

            More of the latter—this sort of thing was quite rare outside of a specific rhetorical goal of getting your reader excited about what's to come. LLMs (mis)use it everywhere.

            > Deno Sandbox provides both. And when the code is ready, you can deploy it directly to Deno Deploy without rebuilding.

            Good writers vary sentence length, but it's also a rhetorical strategy that LLMs use indiscriminately with no dramatic goal or tension to relieve.

            'And' at the beginning of sentences is another LLM-tell.

            • r00f 4 hours ago
              Can it be that after reading so many LLM texts we will just subconsciously follow the style, because that's what we are used to? No idea how this works for native English speakers, but I know that I lack my own writing style and it is just a pseudo-LLM mix of Reddit/IRC/technical documentation, as those were the places where I learned written English.
              • bonsai_spool 4 hours ago
                Yes, I think you're right—I have a hard time imagining how we avoid such an outcome. If it matters to you, my suggestion is to read as widely as you're able to. That way you can at least recognize which constructions are more/less associated with an LLM.

                When I was first working toward this, I found the LA Review of Books and the London Review of Books to be helpful examples of longform, erudite writing. (edit - also recommend the old standards of The New Yorker and The Atlantic; I just wanted to highlight options with free articles).

                I also recommend reading George Orwell's essay Politics and the English Language.

            • jonny_eh 4 hours ago
              > It’s deeper: LLM-generated code, calling external APIs with real credentials, without human review.

              This also follows the rule of 3s, which LLMs love, there ya go.

              • johnfn 4 hours ago
                Yeah, I feel like this is really the smoking gun. Because it's not actually deeper? An LLM running untrusted code is not some additional level of security violation above a plugin running untrusted code. I feel like the most annoying part of "It's not X, it's Y" is that agents often say "It's not X, it's (slightly rephrased X)", lol, but it takes like 30 seconds to work that out.
                • jonny_eh 2 hours ago
                  It's not just a different way of saying something, it's a whole new way to express an idea.
            • tadfisher 4 hours ago
              It's unfortunate that, given the entire corpus of human writing, LLMs have seemingly been fine-tuned to reproduce terrible ad copy from old editions of National Geographic.

              (Yes, I split the infinitive there, but I hate that rule.)

          • javier123454321 5 hours ago
            As someone who has a habit of maybe overusing em dashes, often to my detriment, and something I try to be mindful of in general: this whole thing of assuming that it's AI-generated now is a huge blow. It feels like a personal attack.
            • zamadatix 3 hours ago
              "—" has always seemed like a particularly weak/unreliable signal to me, if it makes you feel any better. Triply so in any content where one would expect smart quotes or formatted lists, but even in general.

              RIP anyone who had a penchant for "not just x, but y" though. It's not even a go-to wording for me and I feel the need to rewrite it any time I type it out of fear it'll sound like LLMs.

              • zbentley 2 hours ago
                > RIP anyone who had a penchant for "not just x, but y" though

                I felt that. They didn’t just kidnap my boy; they massacred him.

          • calebhwin 5 hours ago
            [dead]
      • Bnjoroge 3 hours ago
        Couldn't agree more. It's frankly very fatiguing.
    • pton_xd 53 minutes ago
      What I don't understand -- why put your name on LLM marketing copy? Add an extra tag of authenticity to the message? For me, it does the opposite.
  • Soerensen 1 hour ago
    The secrets placeholder design is the right trade-off for this use case. You're essentially accepting that malicious code can still use your API keys for their intended purpose - the goal is preventing permanent exfiltration.

    The interesting attack surface that emerges: any endpoint on your approved hosts that reflects input back in responses. Error messages, search pages, create-then-read flows. The thread already covers this, but practically speaking, most API providers have learned to sanitize these paths after years of debugging sensitive token leaks in logs.

    For anyone evaluating this vs. rolling your own: the hard part isn't the proxy implementation, it's maintaining the allow-list as your agent's capabilities grow and making sure your secret substitution rules are tight enough to catch edge cases.

    • rob 1 hour ago
      I feel like this is a bot account. Or at least, everything is AI generated. No posts at all since the account was created in 2024 and now suddenly in the past 24 hours there's dozens of detailed comments that all sort of follow the same pattern/vibe.
  • chacham15 32 minutes ago
    I am so confused at how this is supposed to work. If the code, running in whatever language, does any sort of transform with the key that it thinks it has, doesn't this break? E.g. OAuth 1 signatures, JWTs, HMACs...

    Now that I think further, doesn't this also potentially break HTTP semantics? E.g. if the key is part of the payload, then a data.replace(fake_key, real_key) can change the body length without actually updating the Content-Length header, right?
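    The Content-Length concern is easy to demonstrate with toy code: replacing a long placeholder with a shorter real key inside a serialized body changes the body length while the header still advertises the old one.

```python
# Toy demonstration: replacing a long placeholder with a shorter real
# key inside a serialized body changes the body length without
# updating the Content-Length header.

placeholder = "DENO_SECRET_PLACEHOLDER_" + "ab" * 32  # 88 chars
real_key = "sk-short"                                 # 8 chars

body = f'{{"api_key": "{placeholder}"}}'.encode()
headers = {"Content-Length": str(len(body))}

patched = body.replace(placeholder.encode(), real_key.encode())
assert len(patched) != int(headers["Content-Length"])  # now inconsistent
```

    (A proxy doing body substitution would have to recompute the header afterwards, or avoid touching the body entirely.)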

    Lastly, this still doesn't protect you from other sorts of malicious attacks (e.g. 'DROP TABLE Users;')... Right? This seems like a mitigation, but hardly enough to feel comfortable giving an LLM direct access to prod, no?

  • yakkomajuri 3 hours ago
    Secret placeholders seems like a good design decision.

    So many sandbox products these days though. What are people using in production and what should one know about this space? There's Modal, Daytona, Fly, Cloudflare, Deno, etc

    • ATechGuy 1 hour ago
      These are all wrappers around VMs. You could DIY these easily by using EC2/serverless/GCP SDKs.
    • ushakov 3 hours ago
      Factory, Nvidia, Perplexity and Manus are using E2B in production - we ran more than 200 million Sandboxes for our customers
  • koolala 5 hours ago
    The free plan makes me want to use it like Glitch. But every free service like this ever has been burned...
  • ttoinou 7 hours ago
    What happens if we use Claude Pro or Max plans on them? It'll always be a different IP connecting, and we might get banned from Anthropic as they think we're different users.

    Why limit the lifetime to 30 mins?

    • lucacasonato 7 hours ago
      We'll increase the lifetime in the next few weeks - just some tech internally that needs to be adjusted first.
    • paxys 2 hours ago
      What's the use case for this? Trying to get raw API access through a monthly plan? Or something else?
    • mrkurt 5 hours ago
      For what it's worth, I do this from about 50 different IPs and have had no issues. I think their heuristics are more about confirming "a human is driving this" and rejecting "this is something abusing tokens for API access".
      • ttoinou 5 hours ago
        All the time with the same computer? Maybe it is looking at other metadata, for example local MAC addresses.
        • mrkurt 5 hours ago
          All the time with a bunch of different sandboxes.
  • ATechGuy 6 hours ago
    > allowNet: ["api.openai.com", "*.anthropic.com"],

    How do you know what domains to allow? The agent behavior is not predefined.

    • CuriouslyC 5 hours ago
      The idea is to gate automatic secret replacement to specific hosts that would use them legitimately to avoid exfiltration.
    • falcor84 5 hours ago
      Well, this is the hard part, but the idea is that if you're working with both untrusted inputs and private data/resources, then your agent is susceptible to the "lethal trifecta"[0], and you should be extremely limiting in its ability to have external network access. I would suggest starting with nothing beyond the single AI provider you're using, and only add additional domains if you are certain you trust them and can't do without them.

      [0] https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
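      For patterns like "*.anthropic.com", the matching can be sketched with Python's fnmatch (an approximation only; a production egress filter also needs to handle IP literals, redirects, and label boundaries):

```python
from fnmatch import fnmatch

# Sketch of allowlist matching for allowNet-style host patterns.
ALLOW_NET = ["api.openai.com", "*.anthropic.com"]

def host_allowed(host: str) -> bool:
    return any(fnmatch(host, pattern) for pattern in ALLOW_NET)

assert host_allowed("api.openai.com")
assert host_allowed("api.anthropic.com")
assert not host_allowed("evil.com")
# Caveat: fnmatch's "*" also crosses dots, so "*.anthropic.com"
# matches "a.b.anthropic.com" too; a stricter matcher would split
# hostnames into labels.
```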

  • WatchDog 2 hours ago
    If you achieve arbitrary code execution in the sandbox, I think you could pretty easily exfiltrate the openai key by using the openai code interpreter, and asking it to send the key to a url of your choice.
  • dangoodmanUT 4 hours ago
    Love their network filtering, however it definitely lacks some capabilities (like the ability to do direct TCP connections to Postgres, or direct IP connections).

    Those limitations in other tools were exactly why I made https://github.com/danthegoodman1/netfence for our agents.

  • nihakue 6 hours ago
    See also Sprites (https://news.ycombinator.com/item?id=46557825) which I've been using and really enjoying. There are some key architecture differences between the two, but very similar surface area. It'll be interesting to see if ephemeral + snapshots can be as convenient as stateful with cloning/forking (which hasn't actually dropped yet, although the fly team say it's coming).

    Will give these a try. These are exciting times, it's never been a better time to build side projects :)

    • tomComb 1 hour ago
      Yes, sprites looks great too – would certainly be interested in a comparison.
    • alooPotato 4 hours ago
      what are the key architectural differences?
  • zenmac 5 hours ago
    >Deno Sandbox gives you lightweight Linux microVMs (running in the Deno Deploy cloud)

    The real question is whether the microVMs can run on just plain old Linux, self-hosted.

    • echelon 5 hours ago
      Everyone wants to lock you in.

      Unfortunately there's no other way to make money. If you're 100% liberally licensed, you just get copied. AWS/GCP clone your product, offer the same offering, and they take all the money.

      It sucks that there isn't a middle ground. I don't want to have to build castles in another person's sandbox. I'd trust it if they gave me the keys to do the same. I know I don't have time to do that, but I want the peace of mind.

      • ushakov 5 hours ago
        we have 100% open-source Sandboxes at E2B

        git: https://github.com/e2b-dev/infra

        wiki: https://deepwiki.com/e2b-dev/infra

        • echelon 4 hours ago
          This is what I like to see!

          Not sure what your customers look like, but I'd for one also be fine with "fair source" licenses (there are several - fair source, fair code, Defold license, etc.)

          These give customers 100% control but keep Amazon, Google, and other cling-on folks like WP Engine from reselling your work. It avoids the Docker, Elasticsearch, Redis fate.

          "OSI" is a submarine from big tech hyperscalers that mostly take. We should have gone full Stallman, but fair source is a push back against big tech.

          • ushakov 3 hours ago
            we aren’t worried about that.

            when we were starting out we figured there was no solution that would satisfy our requirements for running untrusted code. so we had to build our own.

            the reason we open-sourced this is because we want everyone to be able to run our Sandboxes - in contrast to the majority of our competitors, whose goal is to lock you in to their offering.

            with open-source you have the choice, and luckily Manus, Perplexity, Nvidia choose us for their workloads.

            (opinions my own)

  • Tepix 6 hours ago
    If you can create a deno sandbox from a deno sandbox, you could create an almost unkillable service that jumps from one sandbox to the next. Very handy for malicious purposes. ;-)

    Just an idea…

    • mrkurt 5 hours ago
      This is, in fact, the biggest problem to solve with any kind of compute platform. And when you suddenly launch things really, really fast, it gets harder.
    • runarberg 6 hours ago
      Isn’t that basically how zip-bombs work?
      • kibibu 4 hours ago
        Not really, no
  • Bnjoroge 3 hours ago
    Ignoring the fact that most of the blog post is written by an LLM, I like that they provide a Python SDK. I don't believe Vercel does for their sandbox product.
  • mrpandas 6 hours ago
    Where's the real value for devs in something like this? Hasn't everyone already built this for themselves in the past 2 years? I'm not trying to sound cheeky or poo poo the product, just surprised if this is a thing. I can never read what's useful by gut anymore, I guess.
    • slibhb 6 hours ago
      > Hasn't everyone already built this for themselves in the past 2 years?

      Even if this was true, "everyone building X independently" is evidence that one company should definitely build X and sell it to everyone

    • mrkurt 5 hours ago
      Sandboxes with the right persistence and http routing make excellent dev servers. I have about a million dev servers I just use from whatever computer / phone I happen to be using.

      It's really useful to just turn a computer on, use a disk, and then plop its url in the browser.

      I currently do one computer per project. I don't even put them in git anymore. I have an MDM server running to manage my kids' phones, a "help me reply to all the people" computer that reads everything I'm supposed to read, a dumb game I play with my son, a family todo list no one uses but me, etc, etc.

      Immediate computers have made side projects a lot more fun again. And the nice thing is, they cost nothing when I forget about them.

      • simonw 5 hours ago
        I'd love to know more about that "help me reply to all the people" one! I definitely need that.
        • mrkurt 5 hours ago
          You will be astonished to know it's a whole lot of sqlite.

          Everything I want to pay attention to gets a token, the server goes and looks for stuff in the api, and seeds local sqlites. If possible, it listens for webhooks to stay fresh.

          Mostly the interface is Claude code. I have a web view that gives me some idea of volume, and then I just chat at Claude code to have it see what's going on. It does this by querying and cross referencing sqlite dbs.

          I will have claude code send/post a response for me, but I still write them like a meatsack.

          It's effectively: long lived HTTP server, sqlite, and then Claude skills for scripts that help it consistently do things based on my awful typing.

    • falcor84 6 hours ago
      > Hasn't everyone already built this for themselves in the past 2 years?

      The short answer is no. And more so, I think that "Everyone I know in my milieu already built this for themselves, but the wider industry isn't talking about it" is actually an excellent idea generator for a new product.

      • ATechGuy 5 hours ago
        In the last year, we have seen several sandboxing wrappers around containers/VMs, and they all target one use case: AI agent code execution. Why? Perhaps because devs are good at building (wrappers around VMs) and chase the AI hype. But how are these different and what value do they offer over VMs? Sounds like a tarpit idea, tbh.

        Here's my list of code execution sandboxing agents launched in the last year alone: E2B, AIO Sandbox, Sandboxer, AgentSphere, Yolobox, Exe.dev, yolo-cage, SkillFS, ERA Jazzberry Computer, Vibekit, Daytona, Modal, Cognitora, YepCode, Run Compute, CLI Fence, Landrun, Sprites, pctx-sandbox, pctx Sandbox, Agent SDK, Lima-devbox, OpenServ, Browser Agent Playground, Flintlock Agent, Quickstart, Bouvet Sandbox, Arrakis, Cellmate (ceLLMate), AgentFence, Tasker, DenoSandbox, Capsule (WASM-based), Volant, Nono, NetFence

        • kommunicate 3 hours ago
          don't forget runloop!
        • ushakov 4 hours ago
          why? because there’s a huge market demand for Sandboxes. no one would be building this if no one were buying.

          disclaimer: i work at E2B

          • ATechGuy 4 hours ago
            I'm not saying sandboxes are not needed, I'm saying VMs/containers already provide the core tech and it's easy to DIY a sandbox. Would love to understand what value E2B offers over VMs?
            • kommunicate 3 hours ago
              making a local sandbox using docker is easy, but making them work at high volume and low latency is hard
            • ushakov 4 hours ago
              we offer secure cloud VMs that scale up to 100k concurrent instances or more.

              the value we sell with our cloud is scale, while our Sandboxes are a commodity that we have proudly open-sourced

              • ATechGuy 4 hours ago
                > we offer secure cloud VMs that scale up to 100k concurrent instances or more.

                High scalability and VM isolation is what the Cloud (GCP/AWS, that E2B runs on) offers.

    • drewbitt 6 hours ago
      Has everyone really built their own microVMs? I don’t think so.
      • zenmac 5 hours ago
        Saw quite a bit on HN.

        A quick search this popped up:

        https://news.ycombinator.com/item?id=45486006

        If we can spin up microVM so quickly, why bother with Docker or other containers at all?

        • drewbitt 5 hours ago
          I think a 413 commit repo took a bit of time.
          • mrpandas 5 hours ago
            That's just over one day worth of commits in a few friends' activity at this point. Thanks to Anthropic.
        • ushakov 4 hours ago
          10 seconds is actually not that impressive. we spin up Sandboxes in around 50-200ms at E2B
  • snehesht 6 hours ago
    50/200 GB free plus $0.50/GB egress seems expensive when scaling out.
  • MillionOClock 5 hours ago
    Can this be used on iOS somehow? I am building a Swift app where this would be very useful but last time I checked I don't think it was possible.
    • lucacasonato 4 hours ago
      It’s a cloud service - so you can call out to it from anywhere you want. Just don’t ship your credentials in the app itself, and instead authenticate via a server you control.
  • e12e 7 hours ago
    Looks promising. Any plans for a version that runs locally/self-host able?

    Looks like the main innovation here is linking outbound traffic to a host with dynamic variables - could that be added to deno itself?

  • LAC-Tech 5 hours ago
    As a bit of an aside, I've gotten back into deno after seeing bun get bought out by an AI company.

    I really like it. Startup times are now better than node (if not as good as bun). And being able to put your whole "project" in a single file that grabs dependencies from URLs reduces friction a surprising amount compared to having to have a whole directory with package.json, package-lock.json, etc.

    It's basically my "need to whip up a small thing" environment of choice now.

  • latexr 4 hours ago
    > evil.com

    That website does exist. It may hurt your eyes.

    • lucacasonato 4 hours ago
      We honestly should have just linked to oracle.com instead of evil.com
  • ianberdin 7 hours ago
    Firecrackervm with proxy?
  • eric-burel 4 hours ago
    Can it be used to sandbox an AI agent, like replacing eg Cursor or Openclaw sandboxing system?
  • EGreg 3 hours ago
    We already have a pretty good sandbox in our platform: https://github.com/Qbix/Platform/blob/main/platform/plugins/...

    It uses web workers in a web browser. So is this Deno Sandbox like that, but for the server? I think Node has worker threads.

  • bopbopbop7 4 hours ago
    Now I see why he was on twitter saying that the era of coding is over and hyping up LLMs, to sell more shovels...
  • andrewmcwatters 7 hours ago
    [dead]