22 comments

  • sho 3 hours ago
    I am nowhere near as concerned by this as I was a year ago, when I was expecting the axe to fall at any moment before the Chinese labs achieved some sort of escape velocity. I now think it's too late: all the cats are out of all the bags, there's no moat except maybe a temporal one of a few months, the genie is out of the bottle.

    There is no secret sauce the US labs have that the Chinese ones don't, or won't have soon enough. DeepSeek 4 and Kimi 2.5 are not quite Claude 4.5/GPT 5.5, but there's no fundamental principle missing - they are strong evidence that there's no real advantage the "frontier" labs possess that isn't related to scale, which they will gain in time (if they even need to). The RL post-training techniques that work are widely known and easily copied. All DeepSeek is really lacking is data, which they're getting - and the harder Anthropic/the USG makes it to access Claude in China, the more of that precious data they'll get!

    I used to sort of entertain the "fast take-off breakaway" scenario as being plausible but not really anymore. The only genuine moat the frontier labs have is their product take-up, which isn't nothing, far from it, but it's not some unbreakable technological wall. Too late guys - it might have been too late for quite some time.

    • gpt5 2 hours ago
      I wish it were true. I would gladly use a GPT 5.2 high model equivalent for coding (6 months old) if it were offered cheaper by DeepSeek or Kimi. And I'm sure that's an extremely prevalent opinion among the millions of Claude and Codex users who are bothered by the costs.

      However, they just don't perform that well in practice. That's the real issue. You can actually see it when you move away from open benchmarks. DeepSeek 3.2 is 4% on ARC-AGI 2 [1], while GPT 5.2 high is 52% and GPT 5.5 pro high is 84.6%. That's the real reason nobody is using these models for serious work. It's incredibly frustrating.

      In addition, I already feel the pain myself on the model restrictions. I'll ask my Codex 5.5 agent to crawl a website - BOOM, cybersecurity warning on my account. I'll ask it to fix SSH on my local network - another warning. I'm worried about the day my account gets randomly banned and I can't create a new one. OpenAI already asks you to complete full identification in order to eliminate these warnings - probably exactly for that - so that if they ban you, it's permanent.

      [1] https://arcprize.org/leaderboard

      • usernametaken29 16 minutes ago
        I worked extensively on ARC AGI before and one thing is SURE as hell. OpenAI and Gemini in particular use this as marketing material. You can correlate the benchmark releases with stock price increases. They feed synthetic ARC datasets into their models to boost the numbers. There is no doubt in my mind Gemini is no better than DeepSeek other than being specifically fine-tuned for ARC AGI. Heck, they even say so, and they say they have paid annotations for ARC. Again, economic incentives. As for whether these models are actually better, likely not. See ARC 3, where the gap is vanishingly small.
        • gpt5 3 minutes ago
          ARC-AGI isn't perfect, but it helps demonstrate the gap. I'm sure all companies optimize their models for this benchmark given its dominance.
      • sho 2 hours ago
        I 100% agree with you, but I've been convinced over the last year that it's a time and scale issue, not anything fundamental.

        The Chinese models right now are in a weird spot. Compared to the frontier labs, both their pre- and post-training are woeful - tiny, resource-constrained in every dimension including the human one, slow. I'd compare it to OpenAI 5 years ago, except I think even then OpenAI had way more!

        But they "cheat" quite a lot in distillation and very benchmark-focussed RL and that's where you get this superficial quality in the leaderboards that doesn't match up when you go off-script. Arc is a great example in that it really belies an "inferior soul" at the heart of it all.

        What gives me great hope, though, is that those same scaling laws that Altman and others have been hyping forever will absolutely kick in for the Chinese labs just as they did for the US ones, and I don't think anything can stop that process now. So they will catch up. It won't be tomorrow, but it's not going to be 10 years either. 3-5 years would be my reasonably educated guess.

        And the final risk, that China itself might try to restrict availability of the tsunami of GPU or other AI hardware it will inevitably produce - well, I just can't really imagine a country that has been configuring itself for the last 40 years as a single purpose export machine deciding that actually, no, it doesn't want to export something.

        About the model restrictions - absolutely. I've been trying to do security research on my own software and the frontier models immediately get suspicious. I've been playing with the local ones much more this year basically because of this. They have deficiencies, for sure - they feel very "hollow" compared to the major labs. But I've talked to a lot of people, and the consensus is pretty clear - just a matter of time.

        • flir 10 minutes ago
          Just an observation: constraints often result in creative solutions. I wouldn't be surprised if a smaller lab makes a big breakthrough because they have to.
      • applfanboysbgon 1 hour ago
        > DeepSeek 3.2 is 4% on ARC-AGI 2

        Why are you bringing up an outdated Chinese model from 6 months ago to compare to a US model from 6 months ago? The outdated Chinese model will have performance from ~12 months ago, obviously. But today's Chinese model DeepSeek 4 has performance not far from the US model 6 months ago; 46% compared to 52% from 5.2.

        • gpt5 7 minutes ago
          Because DeepSeek 4.0 isn't on that leaderboard yet, and the jump isn't expected to be large. Kimi 2.5 is there and is also scoring low.
      • ageitgey 1 hour ago
        Have you tried the latest DeepSeek v4 Pro inside of the Claude Code harness? It's not listed on that site.

        It definitely 'feels like' it is as good as Claude for many regular web app coding tasks (though I don't have real benchmarks). And it is comically cheap.

        I'm not suggesting it is better than the latest Claude or codex models, but it seems 'good enough' for a lot of use cases in my limited real world testing.

        • omnimus 1 hour ago
          Also, so many developers I know use LLMs for one-shotting isolated problems, explainers, discussions, and planning. For those, even Kimi is pretty great.

          I don't think every dev will be comfortable just unleashing Claude on their project.

      • otabdeveloper4 1 hour ago
        And yet Claude six months ago was amazing and good enough for you.

        This shows that AI cloud consumption is just a conspicuous-consumption status symbol; nobody knows why they need cloud AI or what problem they are even solving.

    • hbarka 1 hour ago
      Harness engineering is a moat. There’s user loyalty and reliance on the chassis that Claude is on, for example, just as macOS and Windows hold more market share than open-source Linux.
      • kasey_junk 9 minutes ago
        I regularly switch between codex and Claude in the same sessions. I’d throw in other models if I could.

        Data governance and enterprise sales is a moat. The harnesses aren’t.

      • ElFitz 50 minutes ago
        I thought so too.

        But 1) people use other models with that same harness. 2) I moved on from Claude Code and had all the features I cared about up and running in less than a couple of days. Without even looking for available plugins or extensions.

    • nojs 21 minutes ago
      What about access to GPUs and memory? This is becoming a pretty major bottleneck.
      • asdff 10 minutes ago
        Everyone is expecting them to invade Taiwan, but why not merely extort Taiwan?
    • ElFitz 54 minutes ago
      > The only genuine moat the frontier labs have is their product take-up

      And even then, there is no stickiness. For most use cases there isn’t much value in one frontier model over the other.

      Just have to look at the people flocking from one to the other for whatever reason.

      • baq 37 minutes ago
        I’ve been flocking from GPT to Opus every week for the past 3 months and always coming back.

        The point isn’t that GPT is better; it’s that it is so much better for my work that it isn’t even sticky, it’s reinforced concrete. I use Opus 1% of the time because it writes better, and it’s sticky there.

        Yes, I’ll switch approximately immediately if Opus or Gemini (which I use more than Opus!) becomes better for what I do, but at this point frontier model tokens are not fungible.

        • ElFitz 24 minutes ago
          There will always be dataset and training quirks, and the provider’s own biases and focus, granting one model an edge over the others in some specific domain.
          • baq 23 minutes ago
            Yup and that’s where the moats are.
      • dotancohen 49 minutes ago
        The large AI houses arguably ensure that model switching is a natural action for their clients, by switching the default model of their flagship offerings every few months. Such is the price of progress.
    • BrtByte 2 hours ago
      I agree the genie is out of the bottle technologically. I'm less convinced that means access stops being politically and economically important. The bottle may be gone but the best lamps are still expensive
      • trollbridge 2 hours ago
        But a “good enough” lamp just got a lot cheaper. The cost of tokens on DeepSeek V4 Pro is so low I don’t even think about it, and I’m currently trying to figure out useful things for as many simultaneously running agents as I can. What would have cost $150 less than a year ago now costs 35¢.

        Likewise, Qwen 3.6 absolutely blows me away, and that’s a 35b 6-bit model on a local 5090. Same thing: I’m busy trying to find stuff to keep it busy 24/7.

        I can still find some niches for Opus 4.7 but being able to attack problems and not worry about consumption is a game changer.

      • jorvi 2 hours ago
        Virtually no one is going to pay for the best performing lamp if the next best lamp does 90% as good for an order of magnitude cheaper.

        I will say, as pointed out by others, DeepSeek and other Chinese providers still lack a bit in the tooling that Claude has, but they'll get there.

        • baq 34 minutes ago
          And yet it seems that 90% are happily paying for the marginal 10% capability and saturating datacenters.
          • lugu 2 minutes ago
            That is called marketing.
        • BrtByte 2 hours ago
          If the second-best lamp is 90% as good and 10x cheaper, most people will use the second-best lamp...
          • avazhi 2 hours ago
            That’s what he said?
    • shevy-java 2 hours ago
      > There is no secret sauce the US labs have that the Chinese ones don't, or won't have soon enough

      This is not just about mainland China though. The current US government is extremely selfish and self-centered. Other countries really need to consider their own long-term situation here.

  • terrib1e 3 hours ago
    No mention of open weights anywhere in the piece, which is weird. Qwen, Llama, DeepSeek are months behind frontier, not years. If you're a European startup worried about getting cut off from Anthropic's API in 2027, the real question is what the open-weight frontier looks like then. Probably pretty capable. That undercuts most of the doom scenario.

    Also, he concedes Mythos-level capabilities will be cheap next year, then handwaves it with "you need the best AI, not good-enough AI." For most use cases, frontier minus six months is fine.

    • rTX5CMRXIfFG 3 hours ago
      Affordability of hardware that can run local LLMs is a real factor, too. Not sure when RAM prices are going down, but with everything that’s happening and can happen in the world right now, it doesn’t look like they’ll drop in the near or medium term.
      • wahnfrieden 3 hours ago
        No one is going to run models that are comparable to frontier locally without spending enormous sums for use at scale or in large orgs. Even with cheap RAM, you will still need a very large budget for frontier-level capability.

        Open models that are competitive with frontier will be used on shared hosts.

        • zozbot234 1 hour ago
          > No one is going to run models that are comparable to frontier locally without spending enormous sums for use at scale

          You can always run these models cheaper locally if you're willing to compromise on total throughput and speed of inference. For most end-user or small-scale business needs, you don't really need a lot of either.

          • 9dev 45 minutes ago
            It would be awful if running models locally became the primary way of using LLMs. On dedicated servers sharing GPUs across requests, energy usage and environmental impact are way lower overall than if everyone and their mother suddenly needs a beefy GPU. It’s the equivalent of everyone commuting alone in their own car instead of a train picking up hundreds at once.
            • zozbot234 33 minutes ago
              You can batch requests when running locally too, if you're using a model with low-enough requirements for KV-cache; essentially targeting the same resource efficiencies that the big providers rely on. This is useful since it gives you more compute throughput "for free" during decode, even when running on very limited hardware.
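
              As a rough sketch of what that looks like (purely illustrative: with an engine like vLLM you just hand it a list of prompts and it batches the decode steps for you; the model name is arbitrary and assumes the checkpoint plus the batch's KV-cache fits in local VRAM):

                # Minimal sketch: batched local inference with vLLM.
                # The model weights are read once per decode step for the whole
                # batch, which is where the "free" extra throughput comes from.
                from vllm import LLM, SamplingParams

                llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")  # illustrative local checkpoint
                params = SamplingParams(max_tokens=256, temperature=0.7)

                prompts = [
                    "Write a haiku about GPUs.",
                    "Explain the difference between TCP and UDP.",
                    "Summarize the plot of Hamlet in two sentences.",
                ]

                # All requests are decoded together in one batch.
                for out in llm.generate(prompts, params):
                    print(out.outputs[0].text)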
        • jorvi 2 hours ago
          Models capped out on training and (active) parameter counts a while ago; it's the tooling / harness that is making the big jumps in performance happen. And then you have things like DeepSeek with a pretty small KV cache.

          And with the extreme chip shortages for the next two years, there's little appetite for even bigger models anyway.

          Barring a breakthrough in scaling, the only direction the models can really go is smaller. Which will inevitably mean better performing local models for same chip budget.

    • pu_pe 48 minutes ago
      There are two problems with that scenario:

      1. Your European startup will be competing with others using a much better frontier model. In a scenario where you already have other major disadvantages (access to capital, labor), you might be outcompeted

      2. Open models have been keeping pace very nicely, but they rely on distillation of frontier models. If the race gets really tight, this could be affected so that the time gap grows larger (ie, it's very unlikely anyone but Anthropic is distilling from Mythos at the moment)

    • baq 32 minutes ago
      Open weights will remain open only if they’re significantly worse than the frontier weights.

      Before you challenge that with benchmarks, consider that the labs which release open-weight models have internal testing and unpublished results.

    • BrtByte 2 hours ago
      Open weights undercut the absolute cutoff scenario. They don't fully solve the question of who gets the best model first, who gets enough tokens to use it heavily, and who gets to integrate it into sensitive workflows without waiting for permission
    • cubefox 1 hour ago
      Someone recently made a graph showing that the gap between US American frontier LLMs and Chinese open weight LLMs (including DeepSeek v4) is widening. Unfortunately I can't find it anymore.

      Update: GPT-5.5 found it.

      Article: https://www.nist.gov/news-events/news/2026/05/caisi-evaluati...

      Graph: https://www.nist.gov/sites/default/files/images/2026/05/01/1...

    • wahnfrieden 3 hours ago
      Llama is not months behind GPT 5.5 Pro. I don't think Qwen or DeepSeek are either.

      edit: I'm specifically referring to the "5.5 Pro" model, not regular 5.5 with Pro tier subscription. Claude has no model available that's comparable to 5.5 Pro either.

      • vasachi 2 hours ago
        I’ve used DeepSeek 4 Pro through Claude. It’s fine. Plans are similar to what sonnet/opus make. Same massage-the-plan -> massage-the-code loop. Maybe the code is a bit worse, but that’s the “months behind” thing.

        The thing is, the vast majority of code tasks aren’t a venture into the unknown. We as an industry for the most part build CRUD interfaces and dashboards. That can be achieved quite well, with supervision, with frontier open-weight models.

        • fwipsy 2 hours ago
          I think maybe you are both right. Perhaps AI coding assistants just don't need to be all that smart in many cases, so open weights models are fine. At the same time, frontier models are advancing in other domains, like mathematics, where raw intelligence is a more important factor.
          • vasachi 2 hours ago
            I can’t compare raw intelligence of these models, and I certainly can’t say anything about their advances in mathematics (without repeating press releases). But, erm, does it really matter? It’s not like some engineer somewhere will vibe-calculate how much weight a bridge can hold.

            Well, yes, someone probably will do that. But I’m pretty sure there will be consequences for the engineer’s errors in those vibe-calculations.

    • sholladay 3 hours ago
      Open models are pretty good at this point but the problem is that they are limited by the tooling and infrastructure that surrounds them. For example, the last time I tried to set up web search with an open model, the experience was pretty bad.
  • mc-serious 2 minutes ago
    Open source will handle access to models; someone will find a way. Security by obscurity has never worked.
  • Animats 29 minutes ago
    Over on the image generation side, "frontier AI" seems to be coming along rather well. Watch this video, which was released eight days ago.[1] Can you find any flaws? Two years ago, just getting hands with the right number of fingers was tough. Last year, there were jarring errors in every scene. Now, very little is wrong. How much longer will anyone need Hollywood studios?

    [1] https://www.youtube.com/watch?v=4zTCLIhScCM

    • piker 0 minutes ago
      Still in the uncanny valley for me. Like watching AI, the film. That said, it’s 3 minutes long and maintains the setting across many different angles, zoom levels, etc. Pretty impressive.
    • _diyar 25 minutes ago
      > Can you find any flaws

      Physics.

  • pu_pe 55 minutes ago
    The more fundamental bottleneck is not even the frontier models, it's the datacenters. Let's say Europe breaks apart from the US completely tomorrow. It does not have enough datacenters (or GPUs in general) to sustain its inference needs even if it would resort to Chinese open models. And to build new datacenters, it would need to source parts from the US and China.

    In other words, if AI does have continued significant economic impact, only the US and China would be able to leverage it completely. The rest of the world is implicitly betting that AI won't be good enough, or that eventually the compute curve flattens out so using a model that is 10x larger only leads to marginal benefits.

    • davesque 4 minutes ago
      > The more fundamental bottleneck is not even the frontier models, it's the datacenters.

      Is it even though? Quantization and speculative decoding are improving the local AI story by leaps and bounds every month.

      • zozbot234 1 minute ago
        Speculative decoding is not that useful at scale; it's mostly about making local single-user inference faster. When you're batching multiple inferences together, the batch is already about as fast as the verification pass you have to perform with speculative decoding.
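
        To make the single-user case concrete, here's a minimal sketch using Hugging Face transformers' assisted generation (model names are purely illustrative; any small draft / large target pair sharing a tokenizer works):

            # Speculative (assisted) decoding: a small draft model proposes tokens,
            # the large target model verifies them in one forward pass. This helps
            # a lone local request; in a batched server the GPU is already kept
            # busy, so the speed-up largely disappears.
            from transformers import AutoModelForCausalLM, AutoTokenizer

            target = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
            draft = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
            tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

            inputs = tok("Explain speculative decoding in one paragraph.", return_tensors="pt")
            out = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
            print(tok.decode(out[0], skip_special_tokens=True))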
  • coderenegade 3 hours ago
    The distillation risk has been brewing for a while now. In a very real sense, the model is the data, so if the data is locked down because of how valuable it is, it was only a matter of time before fully open access to the models would be revoked.

    There's also an additional economic concern that rarely gets mentioned: because no one has cracked continual learning, keeping models up-to-date and filling in gaps in performance requires retraining on an ever growing dataset. Granted, you aren't starting from scratch each time, but the scaling required just to stay relevant looks daunting.

    I don't know where any of this goes on a societal level, but I've believed since the release of DeepSeek R1 that access to frontier models would eventually be locked up behind contracts, since the only moats protecting the models themselves are purely artificial. It remains to be seen how effective China is at pushing the envelope, and whether they are interested in providing unfettered access. And on top of that, it remains to be seen how well these models actually turn out to scale in the long run.

    • BrtByte 2 hours ago
      This is a good point, especially the "model is the data" framing
  • adrithmetiqa 2 hours ago
    Considering the economic angle, one possible long-term future is that access to frontier models is only realistic for the wealthiest 1%. They will use this access to the ultra-intelligent models to increase their wealth further. Inequality will continue to worsen.
  • digitaltrees 2 hours ago
    The thing is, the open-source models are smart enough to do most work if the harness and orchestration are right. So even if the next-gen models get locked behind monopoly paywalls, we can still build real things in the real world and fight for a humane world.
    • margorczynski 5 minutes ago
      The availability of open models with such capabilities is based on the goodwill of the Chinese. And that might end eventually, especially since it comes down to one decision by Xi and the party.
  • phantomathkg 1 hour ago
    Instead of soon, how about just "now"?

    I would imagine not every single person on HN has enough disposable income to subscribe to Claude Max or a similar max plan from other providers without thinking about it.

    Some people mentioned open-weight models, but there are two hurdles. One, the current economy means securing the best hardware is already stupidly expensive compared to a year or two ago. Two, the open-weight models lack the magic that Claude/Gemini/OpenAI put into their proprietary ones, meaning you'd have to create your own agent that is clever enough to search the internet when it knows its training data is stale.

  • BrtByte 3 hours ago
    The uncomfortable implication is that "AI sovereignty" may end up being less about training your own GPT-class model and more about securing compute, energy, datacenter security and contractual access
    • AndroTux 18 minutes ago
      Yeah, so it's just business as usual: If you have ungodly amounts of money, you can essentially do anything, and if you don't, you can't. It's always been this way, and it'll always be this way. I don't see this as a world-ending issue.
    • chii 37 minutes ago
      It's the same as energy sovereignty.
  • evdubs 3 hours ago
    What's the likelihood that universities eventually become open model providers?
    • pjc50 7 minutes ago
      Universities are struggling to prevent their students using AI, because it makes both learning and evaluation extremely difficult.
    • baq 29 minutes ago
      Zero, but update priors when you see campus football stadiums replaced with datacenters and gas turbines
    • root_axis 9 minutes ago
      Way too expensive.
    • digitaltrees 22 minutes ago
      Oh. That's an interesting concept. Expand.
  • nl 3 hours ago
    Quote:

    > “The two AI superpowers are going to start talking. We’re going to set up a protocol in terms of how do we go forward with best practices for AI to make sure nonstate actors don’t get a hold of these models,” Bessent told Joe Kernen on Thursday, on the sidelines of President Donald Trump’s two-day meeting in Beijing with Chinese President Xi Jinping.

    https://www.cnbc.com/2026/05/14/us-china-ai-rules-bessent-us...

    OpenAI is already talking openly about gated access to their models (see this OpenAI podcast episode for example: https://openai.com/podcast/#oai-podcast-episode-16)

    Separately there's also a very active effort to stop open weight releases.

    All of this is dangerous for those who think access to frontier intelligence is important.

  • partloyaldemon 2 hours ago
    All the downsides of your clichéd AGI nightmares, but with the “intelligence” of your bog-standard national security functionary
  • wewewedxfgdf 1 hour ago
    It's worse than that - all AI features will get broken down into even finer slices, and you will have to pay for everything based on the finest level of slice they can make and still make money.
  • chvid 1 hour ago
    DeepSeek is not a distillation of Claude or ChatGPT - stating this is just idiotic politics at this point.

    The Chinese labs have reached "escape velocity" long ago - they will continue development regardless of API access to US models or the willingness of US labs to share their research.

  • ares623 2 hours ago
    I wonder if the countries that don't have "AI Sovereignty" end up being like what Japan is now, technologically. It's stuck in 90's/early 2000's tech and norms (i.e. left behind) but its infrastructure and society chugs along (the demographic problem is a separate issue).

    Would that make those countries more attractive to young people perhaps? As a place to grow and learn skills where the opportunities are non-existent in the AI Sovereign countries.

  • shevy-java 2 hours ago
    So now AI is about apartheid. I am not liking this at all.
    • 9dev 42 minutes ago
      Stop using this word so lightheartedly. You have no clue what you are talking about, and ridicule millions of people who have suffered through decades of oppression.
  • zelon88 3 hours ago
    > And it doesn’t stop with the security questions: the Trump administration’s signature style of international engagement is to wield American leverage as a bundle. Deadlocks in trade negotiations are broken by threatening to withhold intelligence, tech deals are stalled by reference to food safety standards. And so I don’t know when a U.S. administration would choose to leverage its seemingly inevitable predeployment authority over frontier models to secure its broader interests, but I’m sure it would in due time. That means that even if we do everything ‘right’ on the security and economic side, frontier access is still fundamentally contingent as long as there’ll be divergences between governments’ strategic interests.

    The Trump Administration telling the very neo-fascist oligarchs who bought him an election and bought him a ballroom to play nice with their toys? At the expense of rampant capitalism? Lol.

    He already showed us the limit of his comprehension of the topic when he made EO 14179, limiting states' ability to regulate AI.

    Trump doesn't swing for perfect pitches. He is a madman, a lunatic, and a true moron. Do not give this man any credit. I would be shocked if he could tell you the time on an analog clock.

    • thesmtsolver2 2 hours ago
      > very neo-fascist oligarchs who bought him an election

      Gotta love how it's ok to question the results of the latest presidential election but not the earlier one, which is supposed to be sacrosanct, but again ok to question the one in 2016.

      Somehow Trump is owned by capitalists but also starts trade/actual wars that thwart their agenda. I never understood how people come up with simplistic, reductionistic views full of inconsistencies. Won't the evil capitalists and neo-fascists be served better with a predictable/controllable president?

      • partloyaldemon 2 hours ago
        You can be a greedy pig and be an idiot simultaneously. You can see how those two things might even be correlated, no?
      • mmasu 2 hours ago
        I think “bought” here is to be read as “financed from”, not bought in the literal sense.
      • Terr_ 29 minutes ago
        > to question

        That's a weird way to characterize months of incessant "we have incontrovertible hard evidence but you can't see it yet" claims, which--when finally forced into the light--were laughed out of every court in the nation.

        If it was just pure and innocent "questioning", things would be very different. We probably wouldn't have had the January 6th mob attack on Congress, for example.

      • hjkl0 1 hour ago
        Trump's second presidency is the best possible evidence that no one is driving the world in secret from behind the scenes.
      • viking123 1 hour ago
        I think we all know by now who he is really owned by.
  • jdw64 14 minutes ago
    [dead]
  • dobreandl 58 minutes ago
    [flagged]
  • eth0up 3 hours ago
    Damn. I predicted this last year and got thrashed for it.

    Glad to see others catching on.

  • viking123 1 hour ago
    If Amodei and co. were in charge, the models would alert the police if someone said "boob", and the goys would only get GPT-2 level models - hell, even that might be too dangerous.
    • petesergeant 1 hour ago
      > goys

      I suspect this was just a throwaway word choice, but its usage here ends up being pretty anti-Semitic, so it's probably worth reconsidering if that wasn't the intention of your post.

      • nubg 1 hour ago
        What part of what he said was false? Dario Amodei and especially Sam Altman have been treating the general public like cattle. And goy simply means non-Jew, how can not talking about Jews be anti-Semitic?!
        • petesergeant 1 hour ago
          Using “goy” as a stand-in for, say, “plebs” is the anti-Semitic piece, because it implies that Jews treat non-Jews as lesser, discriminate against them, and that it's common enough to be standard usage. I am willing to believe op doesn't know Amodei is Jewish, hadn't thought through the implications (even if Amodei wasn't), and didn't mean anything by it, but it's not just a harmless phrase for “commoners”.