This article is from 29 October 2025. The author seems to be using the term "AI" to mean many different things. Back in 2006, Martin Fowler called this Semantic Diffusion: https://martinfowler.com/bliki/SemanticDiffusion.html
I'm trying to understand why he's so angry, but I can't. If he's so passionate, why not take the time and make a cohesive argument instead of jumping from point to point in an unstructured way?
This is something that happens when people feel threatened, actually, but he has a lot of credentials, and reading them convinces me that he shouldn't feel threatened by AI, at least not on this level.
I agree and disagree with different parts of this article. I've read/seen a lot of the source documentation so I think there's plenty of hyperbole, even if there's a nugget of truth.
Because the article claims "The AI Bubble Is 17 Times the Size of the Dot-Com Frenzy — and Four Times the Subprime Bubble" (oh, and there is also a new subprime bubble, and it's already collapsing, which will make all of this worse).
It's not 17 times the size of the Dot-Com bubble, certainly not when scaled against total market valuation. Much of the money that has been "promised" has not been delivered, and there is a LOT of circularity to the funding, but it's not trillions of dollars yet. Many of the companies are still private, haven't gone up much, or are only fractionally floated, so the numbers look big. But it doesn't look like there's a huge moat, and it's going to be expensive to pay for those training servers with inference. The depreciation is all wrong, the data centers aren't in the right places, and power consumption isn't optimized. TSMC is probably gonna sell more chips to make it so.
At the same time, it's great to be a user of a "free" product. LLMs work as well as Google search used to! It's great! You can't believe everything you read on the internet, but if you know enough to verify it, it's incredibly useful. If you build an OpenClaw footgun, you deserve the consequences, even if your other victims probably don't. Will the "AI" companies end up paying for it all by exfiltrating their "customers'" data? Facebook and Google did.
> AI tools are currently being sold way below cost to get into the market. That is unsustainable.
This is wrong on so many levels. Most companies (including our agency) bet on self-hosted "free" LLMs to solve everyday problems. The only costs we have are hardware and electricity, and those won't increase much.
Furthermore, AI is already driving measurable productivity gains in our company. What will probably happen is that software prices will drop significantly, but this is fine and predictable. Engineers are still needed to solve business processes, discuss with customers, and find solutions for their problems.
Completely asinine take. Blanket statements like "AI is a scam," "AI is not useful," and "AI is not actually AI at all"... what a complete joke. We're using it every single day to accomplish tasks successfully at a scale and speed that wasn't possible even a few years ago. Fact. It's happening every hour of every day, with more people figuring it out all the time. Richard Carrier is a contrarian lunatic, and he's applying his same flawed methodology of conclusion first, evidence never to the greatest technological revolution in human history.
This article is a crazy sounding and very unsystematic rant. It sounds as if some AI hurt this author personally somehow. What I find most annoying about it is that the author constantly mixes up consequences of using AI with perceived or real capabilities of LLMs.
Judge on your own if you have the time. I can't recommend the reading.
I think the problem is that there are so many different aspects of this thing we call AI that it's hard to pin down any particular use case. For some users it's brilliant: if you're doing something like marketing imagery, it can dramatically reduce costs, especially if you're using on-premise models on your own hardware without touching the cloud.
But for other uses, e.g. companies who've just thrown AI money at the wall, probably using ChatGPT, they wonder why they're not getting the return on investment they were promised. It's all a bit confused at the moment. Rather like the early days of the internet.